The development of new avionics systems is an unusual discipline: highly technical, yet subtly different from traditional "engineering." When engineers say they understand something, they mean they can model it. To design a system, an engineer must fully understand how it interacts with other systems and with the environment in which it operates.
What about avionics systems? They are designed for complex interaction with a human operator. How well can we model how the human brain processes information? Do we understand how a pilot will use various cues to control an aircraft? Do we know how long after training pilots will retain their skills in the operation of complex systems?
Accident investigators are fond of pointing out that the majority of aircraft accidents result from human failure. The implication that the human "malfunctioned" is misleading, however, and would apply only where the pilot suffered some form of psychosis or incapacitation. An aneurysm, for example, is a human failure. In the vast majority of accident or incident reports, the pilots were functioning quite properly when the flight came to grief, mapping learned skills onto complex systems with a finite rate of success.
A more productive point of view, one being increasingly emphasized by regulators, is that of "design-related failure," whereby the coupled man-machine system suffers a failure, but the fault is not placed solely with the human operator. Admittedly, it is the human element that is most prone to unpredictability.
Any pilot who has sat frozen with indecision, his index finger poised above a flight management system (FMS) keyboard, knows that mental gymnastics are occasionally required to navigate the physical interface and operating logic of avionics systems. Finding the means to invoke a seldom-used FMS feature may involve extensive trial and error, since pilots cannot be expected to recall the structure of multiple levels of data pages or reams of arcane syntax.
Most pilots would agree that it is desirable for a cockpit system interface to be "intuitive." But what does that mean? It could be argued that a system is intuitive if its operation conforms to the cognitive processes that the human operator naturally applies. We simply do not understand the workings of the human brain well enough to claim that any system is truly intuitive. Nevertheless, much ink is spilled about how a particular system is more or less "intuitive"--odd terminology indeed for something that cannot be measured directly.
Usability tests are helpful but really provide only anecdotal evidence of how one black box interacts with another. Even the concept of pilot workload, the current regulatory basis for human factors engineering in avionics certification, is sufficiently vague and subjective to defy clear definition. Workload and intuitiveness (and its sinister little brother, "situational awareness") are commonly used terms that mask our profound lack of understanding of the mechanisms of human perception and performance. The research community has its work cut out for it, forging an understanding of human perception that enables man-machine interactions to be modeled with confidence. However, it must be accepted that for the foreseeable future the tools of the trade in avionics development and certification will be based upon qualitative metrics.
Finding more determinate and robust methods for evaluating the qualities of human-machine interaction is a challenge. Useful precedents exist, however, in the field of handling-qualities flight testing. Where the qualities of the human-vehicle interaction are being evaluated, measurement of pilot workload has long been eschewed in favor of assessing overall pilot compensation [for aircraft deficiencies--ed.]. Experience has shown that whether or not the operator is busy is not necessarily the issue. Equally important is the sophistication of the required response--a concept embodied in the term "compensation."
This distinction between workload and compensation is the key point made in the seminal 1969 paper by Cooper and Harper, "The Use of Pilot Rating in the Evaluation of Aircraft Handling Qualities." The concepts outlined in that paper have guided the methodologies applied to piloted handling-qualities evaluations for a generation. Whether the subject of an evaluation is a black box or an entire aircraft, the system under test remains a coupled human-machine system. Established principles in handling-qualities flight testing provide sound and applicable guidance.
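The rating scale that Cooper and Harper introduced is structured as a decision tree: the evaluator first asks whether the aircraft is controllable, then whether adequate performance is attainable with tolerable workload, then whether the characteristics are satisfactory without improvement, and only within the resulting branch grades the pilot compensation required. A minimal sketch of that branching logic follows; the function name, parameters, and terse comments are illustrative paraphrases, not wording quoted from the paper.

```python
def cooper_harper_rating(controllable: bool,
                         adequate_performance: bool,
                         satisfactory: bool,
                         compensation: int) -> int:
    """Return a Cooper-Harper rating from 1 (best) to 10 (worst).

    `compensation` grades the pilot compensation required within the
    selected branch: 0 = least, 2 = most. Note that the scale turns on
    the sophistication of the required pilot response (compensation),
    not on how busy the pilot is (workload).
    """
    if not controllable:
        return 10                   # control will be lost: rating 10
    if not adequate_performance:
        return 7 + compensation     # 7-9: deficiencies require improvement
    if not satisfactory:
        return 4 + compensation     # 4-6: deficiencies warrant improvement
    return 1 + compensation         # 1-3: satisfactory without improvement
```

For example, an aircraft that is controllable and achieves adequate performance, but only with considerable pilot compensation, would fall in the middle branch and rate a 5.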
Robert Erdos is an experimental test pilot for the Institute for Aerospace Research at the National Research Council of Canada.