Military

Q&A: Lt. Cmdr. Dylan Schmorrow: Empathetic Avionics: Beyond Interactivity

By Charlotte Adams | May 1, 2005

The ultimate aircraft’s avionics should be able to sense the pilot’s mental state in real time and automatically adapt to the pilot’s needs. This is the vision of the U.S. Defense Advanced Research Projects Agency’s (DARPA’s) Improving Warfighter Information Intake Under Stress program–aka Augmented Cognition–headed by Lt. Cmdr. Dylan Schmorrow. Schmorrow earned a Ph.D. in experimental psychology from Western Michigan University and master of science degrees from the Naval Postgraduate School.

Avionics: What is the Improving Warfighter Information Intake Under Stress program about?

Schmorrow: The goal is to develop closed-loop computational systems, where computers adapt to the cognitive state of warfighters to enable significant improvements in performance. The program seeks to extend human information management capacity by addressing bottlenecks in human-computer interactions. Users’ cognitive status is assessed in real time, employing noninvasive neurophysiological sensors. The outputs from these sensors and associated gauges provide information that is used to dynamically adapt the computers to the users’ states.

Avionics: That’s augmented cognition?

Schmorrow: Yes. Augmented cognition utilizes knowledge about human capabilities to avoid overwhelming users with information. It employs continuous background sensing, learning and inferencing to understand trends, patterns and situations relevant to a user’s context and goals. Any augmented cognition system should contain at least four components: sensors to determine the user’s state, gauges to evaluate incoming sensor information, an adaptive user interface, and an underlying computational architecture to integrate these components.
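The four components Schmorrow lists form a closed loop. As a purely illustrative sketch — not the program's actual software, with all class names hypothetical — the loop might be wired together like this:

```python
# Hypothetical sketch of the four-component augmented cognition loop:
# sensors -> gauges -> adaptive interface, integrated by an architecture.

class Sensor:
    """Returns a raw neurophysiological reading (stubbed here)."""
    def read(self):
        return 0.8  # e.g., a normalized workload signal

class Gauge:
    """Evaluates incoming sensor data into a cognitive-state estimate."""
    def evaluate(self, reading):
        return "overloaded" if reading > 0.7 else "normal"

class AdaptiveInterface:
    """Adapts the presentation to the estimated state."""
    def adapt(self, state):
        return "declutter display" if state == "overloaded" else "full display"

class Architecture:
    """Integrates the other three components into one closed loop."""
    def __init__(self, sensor, gauge, interface):
        self.sensor, self.gauge, self.interface = sensor, gauge, interface

    def step(self):
        reading = self.sensor.read()
        state = self.gauge.evaluate(reading)
        return self.interface.adapt(state)

loop = Architecture(Sensor(), Gauge(), AdaptiveInterface())
```

In a real system the sensor readings, gauge logic and mitigations would be far richer; the point is only the continuous sense-evaluate-adapt cycle.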

Avionics: How does it relate to pilots?

Schmorrow: Because of the complex nature of the domain, the cockpit is where the military first acknowledged the need for human factors research and implementation. At times the demands placed on the pilot can exceed the resources the pilot can draw on.

As a naval aerospace experimental psychologist, I am interested in expanding the envelope of performance in naval aviation. Before I came to DARPA my dream was to see cockpit systems that could dynamically adapt to the individual pilot’s cognitive state and performance. But I realized we couldn’t get there by trying to address everything in the cockpit at once. We decided to tear the problem apart and solve it incrementally and then put it all back together again.

Avionics: How did you do that?

Schmorrow: The first step was to build the underlying technology to measure cognitive states. Phase 2 tested the detection technology in a variety of platforms, focusing on individual cognitive bottlenecks. These experiments included simulations of interfaces for future dismounted soldiers and operators of cruise missiles, unmanned air vehicles and ground vehicles.

Avionics: Your research has a wide range of benefits?

Schmorrow: There are many benefits of this technology. First, we are improving the design of operational information systems that normally would overwhelm a user. Second, the lessons learned and strategies discovered can be applied to the dynamic cockpit environment in a way that mitigates all cognitive bottlenecks.

Avionics: How can this help the pilot?

Schmorrow: A tactical aircraft pilot has one of the toughest jobs in the world. When you’re under physiological stress, psychological stress and time stress, cognitive decrements start appearing. Our brains no longer function in the way we might expect. Wouldn’t it be great if an aircraft knew what was going on with the pilot and could communicate and dynamically adopt a mitigation strategy?

You’re flying a mission, for example. Maybe information is coming in to you at a rapid rate. Before long your ability to deal with new information starts to decrease. That is, your working memory is being overtaxed. Wouldn’t it be nice to have, say, a little gauge on your head that says: "My working memory is all full right now. Don’t tell me another thing." Or an attention gauge? The idea is not to allow the information technology system to just crush you.

Avionics: How did you address the issue?

Schmorrow: We initiated a "divide and conquer" approach, with the objective of eventually addressing all four bottlenecks in a single, integrated, augmented cognition system. We broke the problem down into specific cognitive bottlenecks and associated each one with a specific military application. The results have been positive enough that the various services are funding further development on their platforms.

With the Army’s Future Force Warrior initiative, we focused on attention. They were able to do intelligent sequencing of messages to the soldier–that is, control the timing of the delivery of certain messages. If the soldier was engaged in an "attentive" task and should not be interrupted, the computer system would avoid bothering him with medium- or low-priority messages. High-priority information would still be transmitted.
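The sequencing rule Schmorrow describes — hold lower-priority messages while the soldier is in an attentive task, but always pass high-priority traffic — can be sketched as follows; the function and message values are hypothetical:

```python
# Hypothetical sketch of intelligent message sequencing: while the soldier
# is engaged in an "attentive" task, defer medium- and low-priority
# messages; high-priority information is always delivered immediately.

def sequence_messages(messages, soldier_attentive):
    """messages: list of (priority, text). Returns (deliver_now, deferred)."""
    deliver, deferred = [], []
    for priority, text in messages:
        if priority == "high" or not soldier_attentive:
            deliver.append(text)
        else:
            deferred.append(text)  # queued until attention frees up
    return deliver, deferred

inbox = [("high", "incoming fire"), ("low", "supply update"),
         ("medium", "route change")]
now, later = sequence_messages(inbox, soldier_attentive=True)
# Only "incoming fire" is delivered; the other two wait.
```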

Additionally, one could switch the modality in which information is presented. In the ground vehicle example, the computer system would decide whether to display information on the dashboard or deliver it in an audio format, based on whether the visual parts of the driver’s brain are overwhelmed. If they are, the car would speak to the driver instead–"low on fuel," for example.
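That modality switch reduces to a simple gauge-driven decision. A minimal sketch, assuming a hypothetical visual-load gauge that reports values between 0 and 1:

```python
# Hypothetical modality switch: route a message to the audio channel when
# the driver's visual processing is estimated to be saturated.

def choose_modality(visual_load, threshold=0.7):
    """Return the channel a message should use, given the visual-load gauge."""
    return "audio" if visual_load > threshold else "visual"

# A fuel warning arriving while the visual areas are busy goes to audio:
channel = choose_modality(0.85)
```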

Avionics: Is there another example?

Schmorrow: In the J-UCAS [Joint Unmanned Combat Air Systems] simulation, the system would automatically declutter the map as soon as the information started to overload the operator. The map wouldn’t remain decluttered, because the removed detail carries valuable situational awareness information. The mitigation would occur only when the situation called for it.
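The key property of that mitigation is that it is temporary: secondary layers disappear only while the overload gauge is high. A hypothetical sketch, with made-up layer names:

```python
# Hypothetical declutter mitigation for a map display: drop secondary
# layers only while the operator-overload gauge (0-1) is high; they are
# restored automatically once the load drops, preserving situational
# awareness the rest of the time.

def map_layers(operator_load, threshold=0.75):
    base = ["terrain", "targets"]                     # always shown
    detail = ["waypoints", "threat rings", "labels"]  # valuable, so restored
    return base if operator_load > threshold else base + detail
```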

Avionics: What about the cockpit?

Schmorrow: QinetiQ is working to integrate all of the concepts for the four bottlenecks into a single cockpit. It will have a full simulator built and delivered in the U.S. this summer, debuting at the Augmented Cognition International conference in Las Vegas in July. QinetiQ, involved in the DARPA program since the start, had an earlier cognitive cockpit research program with the UK Ministry of Defence, and we have been drawing on its experience throughout all phases of our program. In phase 4 QinetiQ will show the feasibility of this integration.

Avionics: Could you describe phase 1?

Schmorrow: In the first 18 months we had to prove that we could build gauges to characterize cognitive bottlenecks in working memory, decision making, attention and sensory input.

We needed to build gauges associated with each of the four cognitive bottlenecks to assess an individual’s cognitive resources in real time. We were able to characterize this for all four cognitive bottlenecks by monitoring the neurophysiological signals from people’s brains.

Avionics: How did you do that?

Schmorrow: We use sensors such as functional near-infrared [NIR] technology to look into the brain. Using NIR sensing, we can measure the blood flow in the brain. If you know that a certain part of the brain is burning a lot of fuel, or oxygen, and you know what that part of the brain is associated with, you can make correlations on what’s going on. With other sensor inputs, the computer will develop a picture of the pilot’s cognitive state and know what to do to mitigate performance decrements.

So it’s not a far stretch of the imagination to have a decision matrix that says: "We’re seeing an extraordinary amount of activity in the verbal memory area of the brain." The computer–the aircraft–has some important information to tell you. If a gauge says, "Verbal memory is all filled up, but spatial memory is pretty empty," it can present the information pictorially.
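That decision matrix can be sketched in a few lines. This is an illustrative stand-in, assuming hypothetical gauges that report how full (0 to 1) each memory channel is:

```python
# Hypothetical decision matrix: present information through whichever
# memory channel the gauges say has spare capacity.

def presentation_mode(verbal_load, spatial_load):
    if verbal_load > 0.8 and spatial_load < 0.5:
        return "pictorial"   # verbal memory full, spatial has room
    if spatial_load > 0.8 and verbal_load < 0.5:
        return "verbal"      # the reverse case
    return "default"         # no clear winner; leave presentation alone
```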

Avionics: Like multiplexing?

Schmorrow: Yes.

Avionics: You measure mental capacity?

Schmorrow: We would give people something really hard to do and then monitor that person’s brain waves. And, with specialized algorithms, we would be able to say, "the person is at this spot on the performance curve." While the person performed a simulation–an air defense simulation, for example–we’d characterize the brain waves and measure the neurophysiological signals.

If that person is scoring high, we’d have an indication of what the associated brain waves look like. If the person is overwhelmed because we sent too many bad guys at him, we’d also know what those brain waves look like.

At the completion of phase 1, after 18 months, we were able to demonstrate that we could quantify neurophysiological activity and correlate it to performance. We proved we could build the gauges–the algorithms–to interpret and correlate the brain signals.
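The calibration Schmorrow describes — recording signals under known easy and overwhelming conditions, then matching new readings against them — resembles nearest-neighbor labeling in miniature. A deliberately tiny sketch, not the program's real algorithms, with a single made-up feature value per recording:

```python
# Hypothetical gauge calibration: label signal features recorded during
# easy vs. overwhelming simulation runs, then classify a new reading by
# its nearest labeled example.

def nearest_label(training, reading):
    """training: list of (feature_value, label); returns the closest label."""
    return min(training, key=lambda t: abs(t[0] - reading))[1]

calibration = [(0.2, "coping"), (0.9, "overwhelmed")]
```

A reading of 0.8 would classify as "overwhelmed," since it lies nearer the overload exemplar than the coping one.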

Avionics: What did you do in phase 2?

Schmorrow: In phase 2, which just finished in late fall of 2004, we wanted to prove it was possible to build systems that could dynamically adapt to the cognitive state of the user. In phase 2 four contractors, each focusing on a particular cognitive bottleneck, worked on four major platforms. The teams leveraged developments in phase 1 to develop mitigation strategies.

Avionics: What did you accomplish?

Schmorrow: All four contractors–Lockheed Martin, Boeing, Honeywell and DaimlerChrysler–met the established metrics, such as a 500 percent increase in working memory throughput and a 100 percent improvement in recall, verified through experiments and simulations with human subjects. We proved the possibility, in the four individual platforms, of measuring the neurophysiology and having the systems dynamically adapt to users.

Avionics: What results have you seen?

Schmorrow: We have been able to show that we can drive computers to change the way information is presented to the operator–verbally, auditorily, spatially or visually–and we have demonstrated performance gains in the overall tasks.

Avionics: What are you doing in phase 3?

Schmorrow: We’re trying to prove it’s feasible to actually use these mitigations in stressful operational conditions. The services are contributing money for the development of those applications. We’re going to run experiments, evaluate prototypes and see if we’re able to observe the same performance gains in very rich, dynamic, operational environments. Phase 3 is about adding realistic stress.

We also will validate the cognitive cockpit under stressful conditions and demonstrate that we can integrate all of the assessment and mitigation strategies developed for the four bottlenecks into the cockpit.

Avionics: Can we use ‘AugCog’ today?

Schmorrow: We’ve been working closely with the Human Systems Division of the Naval Air Systems Command to build and deploy something like a G-LOC sensor as a very first step. G-LOC is short for G-induced loss of consciousness. A G-LOC sensor probably would be relatively simple, monitoring the levels of oxygenated blood in the brain to detect accurately whether a pilot is experiencing G-LOC–a life-threatening state for any operator. Another transition opportunity could be advanced concept development for the Joint Strike Fighter.
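The detection logic for such a sensor could be as simple as a sustained-threshold check. A hypothetical sketch, assuming normalized cerebral-oxygenation samples; the threshold and window values are invented for illustration:

```python
# Hypothetical G-LOC detector: flag a possible blackout when cerebral
# oxygenation stays below a floor for several consecutive samples,
# rather than alerting on a single noisy dip.

def gloc_alert(oxygenation_samples, floor=0.6, window=3):
    run = 0
    for sample in oxygenation_samples:
        run = run + 1 if sample < floor else 0  # count consecutive low samples
        if run >= window:
            return True
    return False
```

Requiring consecutive low samples trades a little latency for robustness against momentary sensor noise.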

Avionics: Nonoperational applications?

Schmorrow: We’re getting feedback about how this technology can be used in training simulators. A trainer would be able to measure progress along the novice-to-expert continuum and determine which students need help, based on quantitative neurological brain activity. We’re learning that these systems could dramatically help students progress through their curricula faster and more effectively.

Avionics That Feel Your Pain

Cockpit computers should be able to sense the pilot’s mental state in real time and select the information presentation best suited to break through the mental logjams experienced in combat. Understanding and invoking such mental enhancements in different equipment and situations is the aim of the Improving Warfighter Information Intake Under Stress program at the U.S. Defense Advanced Research Projects Agency (DARPA), the organization that designed the original Internet.

In 30 years augmented cognition (AugCog) is what people will remember about DARPA, argues program manager Lt. Cmdr. Dylan Schmorrow. He firmly believes that computers can do much more for pilots than they do today. Instead of merely reacting to pilot, sensor and other avionics inputs, the avionics of tomorrow could detect the pilot’s internal state and automatically decrease distractions, declutter screens, cue memory or communicate through a different sensory channel–his ears vs. his eyes, for example. The system would use the behavioral, psychophysiological and neurophysiological data it collects from the pilot to adapt or augment the interface to improve the human’s performance.

Now entering the third of four phases, the program aims to increase the amount of data a pilot, soldier or sailor can assimilate and act upon under extreme time, psychological and physical stress. Performance gains already have been demonstrated and will be further tested in more demanding scenarios.

If successful, the research could improve man-machine interfaces across a wide range of applications. The DARPA program already has spawned research at the National Science Foundation (NSF) and the National Institutes of Health (NIH) in related areas, Schmorrow says. NSF has a program called Collaborative Research in Computational Neuroscience, and NIH has a wide variety of programs focused on advancing the state of the art in the brain sciences.

Program Overview

Title: Improving Warfighter Information Intake Under Stress
Status: Entering phase 3 of four phases
Duration: FY02-FY06
Contractors: Lockheed Martin, Boeing, Honeywell, DaimlerChrysler and QinetiQ
Accomplishments: Demonstrated performance enhancements in noncockpit command and control simulations associated with operational or developmental programs
Phase 3 Goal: Prove the feasibility of augmented cognition in more complex and realistic simulations; prove the possibility of integrating the technology into a cockpit simulator
Funding: $59 million

Program Phases

  • Phase 1: Detect and measure the user’s cognitive state
  • Phase 2: Enhance the user’s cognitive state via technologies developed in phase 1 in simulations keyed to military programs
  • Phase 3: Automate the enhancement of cognitive state and demonstrate it in increasingly complex and demanding simulations
  • Phase 4: Demonstrate and validate technologies in operational scenarios

Phase 2/3 Contractors

  • Lockheed Martin–Tomahawk cruise missile weapon control system (working memory)
  • Boeing–Joint Unmanned Combat Air Systems (J-UCAS) control station (executive function)
  • Honeywell–Future Force Warrior System (attention)
  • DaimlerChrysler–Marine Expeditionary Family of Fighting Vehicles (sensory input)
  • QinetiQ–Cognitive cockpit, evaluation and integration team lead
