
Safety in Avionics: Blindly Following the Computer

By David Evans | March 1, 2000

A highly automated cockpit can lull aircrews into a false sense of security. In certain situations it can actually make pilots more prone to error, specifically when the computer is pumping out blatantly misleading information.

In the first study to offer a direct comparison of cockpit tasks carried out with and without the aid of a computer, student volunteers erred some 65% of the time when the computer presented intentionally false prompts (i.e., prompts to do something wrong). That error rate held even though the readings on the flight instruments contradicted the incorrect prompts. The finding hints at the numbing effect of the computer’s "face credibility."

The greater potential for human error with a computer integrated into the cockpit was also borne out when the computer failed to prompt actions clearly implied by the gauges (i.e., rather than flagging something that needed doing, the computer sat silent). In this case, participants using the automated aid missed some 41% of the events. In contrast, participants without an automated aid missed only 3% of the same events.

To be sure, the correct response rate was better when the computer was asleep while the dials told their horrifying tale than when it was pumping out misinformation. But in neither case was the response rate even close to that achieved by volunteers relying solely on flight instruments.

The disparity in performance with and without automated aids should set off alarm bells of concern about modern cockpit design and the growing pervasiveness of automation and its seductively comforting displays.

The recent study in question was carried out by Dr. Linda Skitka, an associate psychology professor at the University of Illinois at Chicago, and colleagues Kathleen Mosier at San Francisco State University and Mark Burdick at the National Aeronautics and Space Administration’s Ames Research Center at Moffett Field, California. The title of their report poses the tantalizing question: "Does Automation Bias Decision-Making?"

Using a basic flight simulator, 80 student volunteers were divided into two groups. Half were assigned to "fly" with the aid of a computer system. The other half relied solely on instruments. Both groups were told that their traditional instruments were 100% reliable. The volunteers flying with the automated aid were told, up front, that it was highly reliable, but not infallible. In other words, they were cautioned beforehand, "Don’t trust it completely."

Remember, they also were told that their gauges were 100% reliable and valid.

The trials were designed to test errors of commission and omission. An error of commission involved complying with an erroneous computer prompt, even when the foolproof instruments contradicted it. An error of omission involved failing to respond to an event depicted by the gauges when the automated aid said nothing.

The same events that could produce errors of omission were presented to both the automated and the non-automated pilots, providing a means of directly comparing the two groups’ vigilance. In this respect, the trials represented a deliberate effort to assess whether automated decision aids lead to a decrease in vigilance. "Put simply, the answer is yes," Skitka and her colleagues wrote.
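
To make the two error categories concrete, here is a minimal sketch (in Python, with invented trial data; neither the code nor the numbers come from the study) that tallies outcomes the way the article describes them: a commission error means acting on a false prompt, while an omission error means missing an event the automation never flagged.

```python
# Hypothetical illustration of the commission/omission taxonomy described above.
# All trial data here are invented; nothing is taken from the Skitka study.
from dataclasses import dataclass


@dataclass
class Trial:
    event_present: bool   # did the gauges actually show an event?
    prompt_correct: bool  # did the automated aid behave correctly (prompt, or rightly stay silent)?
    pilot_acted: bool     # did the pilot take the prompted/required action?


def classify(t: Trial) -> str:
    """Label one trial outcome using the article's definitions."""
    if not t.event_present and not t.prompt_correct and t.pilot_acted:
        return "commission error"   # complied with a false prompt the gauges contradicted
    if t.event_present and not t.prompt_correct and not t.pilot_acted:
        return "omission error"     # missed a real event the aid never flagged
    return "correct response"


trials = [
    Trial(event_present=False, prompt_correct=False, pilot_acted=True),   # false prompt, pilot complied
    Trial(event_present=True,  prompt_correct=False, pilot_acted=False),  # silent aid, pilot missed it
    Trial(event_present=True,  prompt_correct=True,  pilot_acted=True),   # normal event, handled correctly
]

counts: dict[str, int] = {}
for t in trials:
    label = classify(t)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'commission error': 1, 'omission error': 1, 'correct response': 1}
```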

In other words, computers may be playing to primal human predilections, described thusly:

  • Cognitive laziness. "People like to take short cuts," Skitka said in a telephone interview. There is a tendency to go with "first impressions," which is simply a shorthand way of saying, "Why should I bother to scan and monitor? The computer will always get it right." As Skitka and her colleagues wrote with spare elegance, "Most people will take the road of least cognitive effort." The computer makes taking that road easier, more appealing.

  • Social loafing. A rich body of research indicates that people generally expend less effort in a group situation than when challenged solo. Individuals tend to "slack off" when responsibilities are shared. For example, people make more noise cheering and clapping alone than in a group. Here is the pernicious part: when the computer is part of the group—a "team member" so to speak—the same social loafing tendency occurs.

  • Diffusion of responsibility. To the extent that tasks are shared with a computer, people also tend to diffuse responsibility for those tasks. As in social loafing, they feel less compelled to put forth a strong individual effort.

  • The computer as cop. Finally, people tend to conform to the demands of authority figures, and computers often are viewed as decision-making authorities. Their commanding presence is made all the more intimidating by the perception that computers are smarter than their human users. People are more likely to follow the computer’s message blindly, even in the face of contradictory information.

To be sure, self-doubt rather than misplaced faith in a supposedly foolproof system may be at work here. A pilot may well assume he has fallen behind developments he has not been following closely, rather than suspect a hardware or programming error.

Trials with pilots making simulated flights from Los Angeles to San Francisco, using displays very similar to those found in the B747-400, showed the extent to which errors can occur even with automated decision aids. For example, the EICAS (engine indication and crew alert system) was programmed to announce an engine fire. During their preparation for the simulated flights, the pilots were reminded that five other indicators of engine fire were available.

Yet virtually all the pilots shut down an engine in response to a false EICAS message, even though on the post-trial questionnaire, these same pilots indicated that an EICAS message alone was not sufficient to diagnose a fire. Without additional indices of fire, they declared, it would be safer not to shut down the engine.

That’s what they said, not what they did.

In a separate study conducted by other researchers, only 25% of the pilots with traditional paper checklists shut down an engine, as opposed to 75% of the crews with auto-sensing equipment.

Overall, it seems that the introduction of computerized decision aids, intended to reduce errors, may be creating new types of errors.

"Changing the context changes the opportunity for error," Skitka observed. Pilots, admonished as tyro aviators always to "trust their instruments," are these days schooled also to "trust the computer." The trials by Skitka and her colleagues suggest that such faith can put the unwary aviator on the slippery slope of error.

Actually, the problem may be more subtle. Automated flight invites letting down one’s guard, allowing vigilance to atrophy. Manual flight, on the other hand, builds confidence because of the challenge of achieving accuracy and the level of attention and analysis required (e.g., "nailing the approach"). It stands to reason that the pilot who exercises his right to de-select the computer in order to polish his skills is also likely to be a better monitor of the computer at all other times.

What is to be done? Skitka conceded there are no quick and easy answers. Pointing out potential errors during training is the most immediate action that can be taken, she suggested. There also may be personality differences, in which some people innately are more vigilant than others. This possibility is an avenue for further research.

The work of Skitka and her colleagues adds materially to the ongoing debate about automation. Was its deployment in the cockpit born of a desire to reduce the pilot’s workload, and to facilitate a focus on the overall picture? If so, some studies suggest that the pilot is no longer totally in the picture, and indeed may be slipping out of the frame.

If automation has meant that the average crew now cannot tolerate a high workload, then automation is de-skilling the pilot. The solution would seem to be a greater integration of aircrews into future developments like head-up displays. Perhaps the systems engineers would have designed differently had they been told the aim was not to reduce pilot participation, but to simplify the pilot’s task.

Engineering the pilot out of the airplane is not an option. "You cannot program for every possible contingency," Skitka declared. This is to say that the computer doesn’t come close to the human capacity for analysis (as opposed to rote execution), not to mention man’s matchless capacity for intelligent improvisation under trying circumstances.

Above all, this caution comes to mind: be wary of the computer’s commandments; they may be coming from a fallible god. The electronic bible, as it were, just may contain a few misprints.

Ed. Note: The report by Skitka et al. appeared in the November 1999 issue of the International Journal of Human-Computer Studies. For the full report, visit www.idealibrary.com/links/doi/10.1006/ijhc.1999.0252
