
Empathetic Avionics: The Pilot as an Avionics System

By Charlotte Adams | December 1, 2005

Pilots of the future may activate decision support aids with their brains rather than their fingers. Like an avionics system, they may communicate with the aircraft electrically via brain signals. Impossible as it sounds, military researchers see this as a distinct possibility.

A pilot of a damaged aircraft finds himself targeted by air- and ground-launched missiles. His attention is torn. Aircraft software, however, sensing the pilot’s impending mental gridlock, prompts him to activate mitigations. While he peers through the canopy to maneuver away from the threats, the software releases chaff, initiates jamming and triggers the master arm switch. At the same time the radar warning receiver display appears–greatly enlarged–in the center of his field of view. The pilot recovers, switches off the automation and finishes the mission.

This scene shows what may, in coming decades, become possible through the use of "empathetic avionics"–cockpit systems that continuously assess the pilot’s cognitive state and adapt the presentation of information to reduce workload and improve performance in extreme conditions (see May 2005, page 28, and October 2005, page 58). The pilot for a time becomes an avionics system, interacting electrically with the aircraft.

AugCog

Known as augmented cognition (augcog), this field of research uses electrical and infrared signals, collected by electroencephalogram (EEG), electrocardiogram (ECG/EKG), functional near-infrared (fNIR) and other sensors, as inputs to the weapons system. Cognitive data then drives mitigations, such as simplifying displays, releasing chaff, rescheduling tasks or presenting data to the ears rather than the eyes. Less dramatic but perhaps nearer-term applications include evaluating display designs, assessing student pilots and predicting the onset of g-induced loss of consciousness.
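In control terms the loop is simple: sensor readings are fused into a cognitive state estimate, and the estimate selects mitigations. A minimal Python sketch of that cycle, in which every name, threshold and mitigation label is invented for illustration:

    # Minimal sketch of a closed-loop augcog cycle. Thresholds and mitigation
    # names are hypothetical, not any program's actual interface.
    from dataclasses import dataclass

    @dataclass
    class CognitiveState:
        workload: float      # 0.0 (idle) to 1.0 (overloaded), fused from EEG/ECG/fNIR
        visual_load: float   # share of the load carried by the visual channel

    def choose_mitigations(state: CognitiveState) -> list[str]:
        """Map an estimated cognitive state to candidate mitigations."""
        mitigations = []
        if state.workload > 0.8:
            mitigations += ["simplify_displays", "reschedule_noncritical_tasks"]
        if state.visual_load > 0.7:
            # Visual channel saturated: present data to the ears instead.
            mitigations.append("route_alerts_to_audio")
        return mitigations

In a real system the fusion step, not the selection logic, is where the research difficulty lies.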

The U.S. Defense Advanced Research Projects Agency (DARPA) accelerated research in augmented cognition through its Improving Warfighter Information Intake Under Stress, aka AugCog, program. Participants have proved that they can measure subjects’ cognitive states as the subjects perform simple tasks, and then enhance performance over the baseline by applying mitigations as the cognitive load builds. Researchers have moved from highly controlled, narrowly focused, "Psych 101" experiments in the lab to more demanding scenarios. The program is expected to enter its fourth and final phase next year, demonstrating these closed-loop augcog systems in operational scenarios with military service partners.

But augcog is still a research area, says Richard Edwards, manager of human systems integration at Boeing Phantom Works, which leads a team investigating visual mitigations that could be used to manage command and control (C2) tasks. "I would be amazed if you saw augcog used in complex displays in an operational environment in less than 10 years," he says.

Lt. Cmdr. Jim Patrey, assistant program manager with the Naval Air Systems Command’s (NAVAIR’s) Aviation Training Systems office, heartily agrees. From a training perspective, the neuroscience and decision support technologies underpinning augcog are still relatively immature, he says. To be considered for use, they must not consume excessive training time, require extraordinary preparation (e.g., gels), cause discomfort, or cost more than they are worth. Currently, however, "augcog technologies require significant preparation time, primarily use gel-affixed sensors, and are quite expensive without [having] documented training value," Patrey says.

Sensors also need to be more sensitive. They need to be able to anticipate a cognitive problem before the pilot enters the "red zone," or even the "yellow zone," Edwards says. A lot of today’s sensors are "binary," showing a problem or no problem, he adds. "You need at least a three-state, or ideally a five- or seven-state gauge."
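Edwards’ gauge metaphor is easy to make concrete. In the sketch below, both the five band edges and the idea of extrapolating the recent trend to anticipate the red zone are illustrative choices, not any program’s design:

    # A five-state workload gauge instead of a binary problem/no-problem flag.
    # Band edges are made-up illustrative values.
    GAUGE_BANDS = [(0.2, "green"), (0.4, "green-yellow"), (0.6, "yellow"),
                   (0.8, "yellow-red"), (2.0, "red")]

    def gauge_state(workload: float) -> str:
        """Discretize a 0..1 workload estimate into one of five states."""
        return next(label for upper, label in GAUGE_BANDS if workload < upper)

    def predicted_state(history: list[float], dt_s: float, lead_s: float = 5.0) -> str:
        """Extrapolate the recent trend to flag trouble before the red zone."""
        slope = (history[-1] - history[-2]) / dt_s
        return gauge_state(min(1.0, history[-1] + slope * lead_s))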

Researchers also need to prove that a brain location that has been mapped to a cognitive activity in a pure lab environment maps to the same activity when the subject performs a complex task. In a real-world environment, a task may involve several cognitive activities, such as listening, seeing, remembering and focusing one’s attention: the brain is a sophisticated parallel processor. How well will achievements in the lab generalize to more complex, operational tasks? Right now it’s a "leap of faith," concedes Edwards.

How long should automation be activated? Not all the time, researchers say. When the pilot is on top of things, automation would decrease situational awareness. And should automation be discretionary? If the pilot controls the triggering of automations and mitigations, at least they would not intrude at the wrong time.

It is also possible to apply mitigations developed in the augcog context without using brain or heart sensors. If aircraft software could compare a pilot’s rate of performance of a set of tasks with the optimal rate in an air-to-air engagement, for example, the software could alert him to what needs to be done. This approach would reduce cost and complexity and improve reliability.
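Such a trigger could be as simple as checking elapsed time against an optimal task timeline. In this sketch the task names and deadlines are invented:

    # Sketch of the sensorless alternative: flag tasks an "optimal" pilot
    # would already have completed by now. Names and deadlines are invented.
    OPTIMAL_TIMELINE_S = {"lock_target": 5.0, "arm_weapons": 8.0, "release_chaff": 12.0}

    def overdue_tasks(completed: set[str], elapsed_s: float) -> list[str]:
        """Return outstanding tasks whose optimal completion time has passed."""
        return [task for task, deadline in OPTIMAL_TIMELINE_S.items()
                if elapsed_s > deadline and task not in completed]

An alert fires whenever overdue_tasks() returns a nonempty list, with no brain or heart sensors required.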

Cognitive Cockpit

Work by QinetiQ in the late 1990s for the UK Ministry of Defence spawned the cognitive cockpit, or "cogpit." This is a test bed for investigating automated decision support and augcog technologies in the single-seat, fast jet environment, says Blair Dickson, QinetiQ’s principal investigator. The aim is to develop "trustworthy automation" that is sensitive to context. Data from the pilot and aircraft sensors is used to produce a series of tasks that can be executed automatically or presented to the pilot as messages on a display.

QinetiQ’s cogpit, based on the F-16, is a simulator equipped with software that assesses the pilot’s workload using EEG, control inputs and other measures. There are six levels of automation, from fully manual to fully automatic; pilots were allowed to choose their level of automation in the preplanning process. Mitigations can include automating the master arm switch, activating the defensive aids suite, and turning on the targeting pod.
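One way to picture the arrangement is an ordered scale that the workload estimate can climb but never push past the pilot’s preplanned ceiling. The six-level count and the manual-to-automatic ordering are QinetiQ’s; the level names and the workload mapping below are invented:

    from enum import IntEnum

    # Six ordered automation levels, fully manual to fully automatic.
    # The names are illustrative, not QinetiQ's terminology.
    class AutomationLevel(IntEnum):
        MANUAL = 0        # pilot does everything
        ADVISED = 1       # system suggests tasks as display messages
        CONSENT = 2       # system acts after pilot approval
        VETO = 3          # system acts unless the pilot objects
        MONITORED = 4     # system acts and keeps the pilot informed
        AUTOMATIC = 5     # system acts on its own

    def active_level(workload: float, ceiling: AutomationLevel) -> AutomationLevel:
        """Scale automation with workload, capped at the preplanned ceiling."""
        return AutomationLevel(min(int(workload * 6), int(ceiling)))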

QinetiQ ran six pilots through six multisegment air-to-ground missions in the cogpit. Increasing cognitive loads activated increasing levels of automation. The cognitive augmentation seems to have improved survivability, Dickson says, although more trials would be necessary to confirm this.

NAVAIR’s human-systems integration lab, teamed with BMH Associates Inc., Norfolk, Va., is pursuing a similar line of research. It is running as many as eight subjects through a simulated close air support mission. They spend 10 minutes over the target area to identify and destroy up to four enemy tanks while avoiding surface-to-air missiles. Funded by DARPA’s AugCog program, the lab is using EEG, EKG and electrooculogram (EOG) sensors–the EOG to record gross eye movements. Mitigations, all of which must be approved by the pilot, include automated chaff release and slewing to the target. Subjects fly eight equally difficult missions–four without decision support and four with it–during which mitigations are randomly deployed.

The goal is to demonstrate a 50 percent improvement in targeting and a 50 percent decrease in friendly fire, compared with the baseline. The experiment was incomplete at press time, but results at that time indicated a 200 percent increase in targeting and no incidents of friendly fire.

UAV Control

A core area of augcog research concerns adapting the visual interface to reduce workload. In Phase 3 of the DARPA program, Boeing Phantom Works developed visual mitigations for a command and control screen. The application was unmanned air vehicle (UAV) control, but the technology could apply to C2 workstations in manned aircraft, too.

Boeing looked at the ability to focus attention on critical tasks in a high-workload environment, using fNIR sensors, which employ IR light to measure the level of blood oxygenation in the brain. (The higher the workload, the more oxygen is consumed.) The idea is to avoid reaching the point where attention is shifting so rapidly between tasks that it becomes "divided" and performance deteriorates. Based on neurophysiological data, researchers are trying to distinguish attention shifting from divided attention.
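Boeing draws that line neurophysiologically, but the behavioral intuition can be caricatured in a few lines of Python: shifting focus between tasks is normal task management, while dwelling on each task too briefly to re-engage reads as divided attention. The dwell threshold here is invented:

    # Toy discriminator between attention shifting and divided attention,
    # based on how long focus dwells on each task. Threshold is invented.
    def attention_state(switch_times_s: list[float], min_dwell_s: float = 2.0) -> str:
        dwells = [b - a for a, b in zip(switch_times_s, switch_times_s[1:])]
        if dwells and sum(d < min_dwell_s for d in dwells) / len(dwells) > 0.5:
            return "divided"    # mostly too-short dwells: nothing gets finished
        return "shifting"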

Boeing’s experiment presented a UAV controller, at equal intervals, with four, eight and 12 UAVs, stepping back down successively to eight and four. Six test subjects were used. After establishing a baseline without augcog, Boeing used subjects’ brain inputs to manipulate the information displayed. As the tasks became more difficult, and sensors detected increasing cognitive workloads, the display adapted to highlight only the essentials.

The workload was considerable. Subjects assigned sensors to image targets, studied the images, assigned weapons, aligned weapons with targets, and authorized attacks while tracking all of the vehicles. They also made sure the UAVs were in the correct positions, dealt with vehicle malfunctions, and complied with requests arriving through the simulated C2 network–all in a compressed time period and for a steadily increasing number of UAVs.

So far, the Boeing team has met and exceeded DARPA’s goal of a 50 percent reduction in the loss of potentially recoverable aircraft, Edwards says. Data analysis for the second goal–a 50 percent improvement in the speed of weapon/target pairings–was still in progress as this issue went to press.

Augcog technologies could feed into Boeing’s Joint Unmanned Combat Air Systems (J-UCAS) program at some point, Edwards says. The DARPA augcog project assumes highly automated, intelligent, collaborative vehicles controlled by a single operator–like J-UCAS. In fact, three of the subjects in the augcog simulations are from the Boeing J-UCAS program.

Boeing has developed the ability to roll in layers of computerized "fog" onto a UAV pilot’s nav screen, so that only the most urgent tasks are easily visible. The fog is translucent, however, so the remaining information is detectable, should it be necessary. In functional terms, the fog layer helps the operator organize and "group" screen objects into an efficient mental model of the tactical situation, thus reducing workload, explains a Boeing researcher. At the highest level, the operator sees only the top three UAVs and their targets above the fog. As the brain’s oxygen consumption decreases, the mitigations are gradually removed until the screen returns to normal.
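A controller for that behavior might step a discrete fog level toward a workload-driven target one layer per update, so that layers roll in, and later roll out, gradually. The level count and mapping below are invented:

    # Sketch of a graded fog controller: the level tracks the workload
    # estimate one step per update, so layers come and go gradually.
    def next_fog_level(workload: float, current: int, max_level: int = 3) -> int:
        target = min(max_level, int(workload * (max_level + 1)))
        if target > current:
            return current + 1
        if target < current:
            return current - 1
        return current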

Boeing has developed an algorithm that filters out the noise in the fNIR signal resulting from heartbeat and respiration. Another block of code then moves the mitigation level up or down, depending upon what the signal indicates about cognitive performance. Boeing expects eventually to find a use for the "noise" as important physiological signals in their own right.
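Boeing’s algorithm is unpublished, but a common way to strip cardiac (roughly 1 Hz) and respiratory (roughly 0.3 Hz) components from an fNIR trace is to low-pass filter well below those bands, since the oxygenation trend of interest evolves over seconds. A sketch using SciPy, with an illustrative cutoff rather than Boeing’s value:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def clean_fnir(signal: np.ndarray, fs_hz: float, cutoff_hz: float = 0.15) -> np.ndarray:
        """Zero-phase low-pass filter: keeps the slow oxygenation trend and
        drops cardiac/respiratory oscillations. Cutoff is illustrative."""
        b, a = butter(4, cutoff_hz / (fs_hz / 2), btype="low")
        return filtfilt(b, a, signal)   # zero-phase, so mitigation timing isn't skewed

The cleaned trend could then drive a stepping controller like the fog-level sketch above.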

Cockpit Design

Augcog techniques also could speed the evaluation of new display designs. NAVAIR’s human-systems integration lab is proposing work in that area under a solicitation released by the Office of Naval Research (ONR), says Lt. Jeff Grubb, the lab’s program officer.

Currently, simulator-based evaluations are rather imprecise. The pilot fills out a workload questionnaire at the end of a run, when recall of early flight segments is less sharp. Alternatively, the simulation can be artificially interrupted while memory of a flight segment is still fresh, but that, too, is a suboptimal approach.

If researchers, however, can demonstrate that augcog sensors accurately measure workload, then cockpit designers wouldn’t have this dilemma, and the data collected would be immediate and precise. The technology promises a "window into the mind" to quickly address different display options in the rapid prototyping process, Boeing’s Edwards says. The Navy lab has teamed with BMH on the proposal.

Novice or Expert?

Training is another piece of low-hanging fruit: sensors would not have to be miniaturized, hardened and integrated into avionics systems. UK-based QinetiQ, an augcog pioneer and contractor in the DARPA program, is studying "electrophysiological markers" that may distinguish novices from experts. Under an ONR grant, QinetiQ is trying to determine whether it is possible to discern meaningful differences between novices’ and experts’ brain activity from analysis of EEG data collected in multiple wave bands.

If researchers can prove empirically–from analysis of EEG data–that there are broadly discernible differences between the ways novices and experts process data, it could be possible to assess a student’s progress more precisely and rapidly in recurrent training, for example. Students now are assessed by observing their behavior and considering what they say about their progress. But they are sometimes reluctant to admit that they aren’t ready to advance to the next level. At press time, QinetiQ was preparing its project report, but was able to summarize some high-level points.

One way to distinguish brain signals emitted by a novice from those emitted by an expert is to analyze the frequency components of the EEG data associated with a specific task, Dickson says. You can look for changes in the power level in an EEG frequency band associated with a certain cognitive activity, while a student is learning a task and after the student becomes an expert at it. The power level in an EEG band is "related to the activity of populations of neurons in the brain," he explains.

The meaning of the change in the power level depends on factors such as the frequency band in question. If the subject is new to a task, more power may exist in the band associated with cognitive workload than for a person who has already learned the task. On the other hand, an expert at a memory recall task may generate more activity in the wave band associated with long-term memory, when performing the task, than a novice would.
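The band-power computation Dickson describes is straightforward given a power spectral density estimate. The sketch below integrates the theta band (4 to 8 Hz), a band often linked to memory load; that choice is conventional, not necessarily QinetiQ’s:

    import numpy as np
    from scipy.signal import welch

    def band_power(eeg: np.ndarray, fs_hz: float, band=(4.0, 8.0)) -> float:
        """Integrate the power spectral density over one EEG band."""
        freqs, psd = welch(eeg, fs=fs_hz, nperseg=int(2 * fs_hz))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])

Comparing band_power(novice_eeg, fs) with band_power(expert_eeg, fs) for the same task yields the novice-vs.-expert contrast; which bands move with expertise is precisely what such studies set out to establish.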

QinetiQ focused on brain activity associated with long-term memory. Highly controlled laboratory tasks involved associating letters of the alphabet with different key presses. Later in the experiment subjects were asked to remember the key presses.

BMH Associates also is pursuing novice-vs.-expert research under an ONR grant. The company has not yet finalized its approach on this project, but the concept–like QinetiQ’s–involves comparing cognitive measurements of novices and experts.

In this case, however, BMH plans to put the subjects through their paces in its own cogpit. The novice would not be a raw beginner, but perhaps someone who had completed basic flight school and is learning air-to-air interception tasks, explains Gary Kollmorgen, a BMH program manager. By comparing the cognitive load data of the novice to that of the expert, researchers hope to identify which areas the instructor should emphasize.
