
Human Factors in Avionics Certification

By Victor Riley | September 1, 2004

Recognizing the role that design-related human error plays in aviation accidents, regulators are placing greater emphasis on human factors in avionics certification. To strengthen the regulatory basis for addressing human factors in certification, the Transport Airplane and Engine Issues Group of the Federal Aviation Administration’s (FAA’s) Aviation Rulemaking Advisory Committee recently approved a proposed new rule and advisory material on the subject.

Although the proposed rule was unavailable at press time, it soon will be passed on to regulators in the United States, Canada and Europe for approval. The increased focus on human factors, coupled with evolving regulatory requirements, will change the way avionics manufacturers, small and large, address human factors in the design and certification of new or updated products.

The current regulatory basis for human factors considerations in avionics systems is pilot workload. The new rule, by creating a regulatory basis for design-related human error, would in effect recognize that such errors concern regulators as much as workload does in today’s highly automated cockpits. Regulators have seen the need to adjust the regulatory basis to address the changing flight deck environment.

If regulators needed a catalyst to examine design-related safety issues, several accidents could serve. In January 1992, for example, an Airbus A320 operated by French carrier Air Inter crashed into a mountain while on approach to Strasbourg Airport in France. As the flight neared the mountain range bordering the airport, air traffic control (ATC) asked the crew to conduct a different approach from the one they had planned. In response, the crew attempted to select a 3.3-degree descending flight path angle from their current position, but erroneously commanded a 3,300-foot-per-minute negative vertical speed instead. The same knob on the flight control unit set the value for either vertical mode: a push-button selected between heading and track laterally and, with it, between flight path angle and vertical speed vertically. When the crew selected the vertical speed mode by mistake, the same dialed value of 3.3 produced a much faster rate of descent.
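How different the two interpretations of "3.3" were becomes clear from a rough calculation. The minimal sketch below, in Python, compares the descent rates; the ground speed is an assumed, typical approach value, not a figure from the accident investigation.

    import math

    # Compare the descent rates produced by the two readings of "3.3" on the
    # A320 flight control unit. The ground speed is an assumed, typical
    # approach value, not a figure from the accident report.
    ground_speed_kt = 145.0
    ground_speed_fpm = ground_speed_kt * 6076.12 / 60.0  # knots to feet per minute

    # Intended: a 3.3-degree descending flight path angle.
    fpa_descent_fpm = ground_speed_fpm * math.tan(math.radians(3.3))

    # Commanded by mistake: a vertical speed of 3,300 feet per minute.
    vs_descent_fpm = 3300.0

    print(f"3.3-deg flight path angle: ~{fpa_descent_fpm:.0f} fpm descent")
    print(f"3,300 fpm vertical speed:  {vs_descent_fpm:.0f} fpm descent")
    print(f"Ratio: ~{vs_descent_fpm / fpa_descent_fpm:.1f} times the intended rate")

At the assumed 145-knot ground speed, the intended flight path angle corresponds to a descent of roughly 850 feet per minute, so the erroneous mode produced nearly four times the intended rate.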

The crew’s mistake is an example of a design-related human error. The two modes were separated by only a single button press, yet they caused substantially different aircraft behavior. The mode cues available to the crew included a somewhat different flight director shape between the two modes, a slight change in the brightness and size of the yellow lubber line on the navigation display, and the position of the decimal point in the vertical speed window of the glareshield controller. However, the Air Inter crew used a head-up display (HUD), so they may not have noticed these cues.

Design-related pilot error has been a concern ever since a string of bomber accidents during World War II gave rise to the modern practice of human factors. (In these accidents, pilots raised the landing gear instead of the flaps after landing because the two handles were co-located and had similar shapes.)

In recent years, a similar series of accidents involving pilot error, including an Airbus A300 in Nagoya, Japan, and a Boeing 757 in Cali, Colombia, has renewed the focus on flight deck human factors. In fact, FAA has determined that a majority of aircraft accidents are due to human error. Boeing data shows that 67 percent of the jet aircraft hull-loss accidents from 1959 through 2002 involved flight crew factors, of which design-related error is a component. Consequently, FAA and Europe’s Joint Aviation Authorities (JAA) have started to pay much more attention to human factors in the avionics certification process.

Evolution of Interest

Although a proposed rule is only now on the horizon, regulators have worked on human factors issues for some time. FAA’s "Human Factors Team Report on the Interfaces Between Flight Crews and Modern Flight Deck Systems," published in 1996, laid out a set of emerging human factors challenges for aircraft certification. These include automation, flight crew situational awareness, crew coordination and communication, cultural and language differences, and regulatory processes. The recommendations included:

  • Enhanced ability to monitor and measure accident precursors,

  • Increased focus on design-related pilot error,

  • Development and conveyance of automation philosophies for highly automated systems,

  • Better understanding of why flight crews deviate from procedures, and

  • Specific recommendations for addressing autopilot anomalies, approaches, the relationship between air traffic procedures and flight deck automation, charts, training and other areas.

The report also recommended updating airworthiness regulations to better address emerging human factors issues.

The Human Factors Team’s advice to update airworthiness regulations led directly to the formation of the Human Factors Harmonization Working Group. The objectives of this joint international effort between FAA, JAA, British and Canadian authorities, and industry were to harmonize the airworthiness regulations relating to human factors between the United States, Canada and Europe and, where needed, to update regulations or guidance material to better address emerging human factors issues. This work has culminated in the emerging rule on design-related human error, which will be proposed for inclusion in the airworthiness regulations (14 CFR in the U.S.), and associated advisory material. Although the proposed rule is not yet available, it will appear on the FAA Web site before it is officially published as a notice of proposed rulemaking (NPRM). (See www.faa.gov/avr/arm/arac/wg_tae_HumanFactorsHarmonization.cfm.)

FAA and JAA officials, however, already have begun requesting that specific attention be paid to human factors during avionics certification. In addition, FAA is sponsoring the development of several tools to help address human factors more systematically during certification. These tools also may be made available to manufacturers, helping them address human factors issues during design and anticipate how FAA will evaluate their products.

Certification Plans

Some avionics companies are incorporating human factors methods into design and development, working with regulators to adopt "human factors certification plans" that contain tests to generate valid human performance evidence or justification for critical design decisions. Designers meet with regulators early in the design process to lay out a plan for how human factors issues will be addressed. The applicant’s approach to human factors may be included in the general certification plan or spelled out in a separate human factors certification plan.

The latter plan’s content depends on the equipment, the presence of new features, and the specific concerns regulators may have about them. For example, if system functions previously contained on the overhead panel are moved to multifunction displays, regulators may be concerned about workload, visual attention (due to the loss of the ability to identify controls by touch), usability in a smoke-filled environment, and rapid access to critical functions that share a single display. They thus may ask for an objective comparison of the time required to complete tasks on a traditional panel vs. a multifunction display implementation, where scenarios are defined to explore worst-case task combinations (a sketch of such scenario definitions follows the list below). The recommended content of the human factors certification plan is described in several recent FAA policy documents that can be found at http://www.airweb.faa.gov/Regulatory_and_Guidance_Library/rgPolicy.nsf.

The plan includes:

  • System description from an operational perspective,

  • System layout, automation logic and operation,

  • Pilot characteristics,

  • Training requirements,

  • Methods used to address usability, and

  • System safety assessment methods.
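As an illustration of the kind of worst-case comparison scenarios mentioned above, the sketch below captures two hypothetical test definitions as data. The task names, conditions and time limits are invented for illustration and reflect no actual certification plan.

    from dataclasses import dataclass

    # Hypothetical scenario definitions for comparing task times on a
    # traditional overhead panel vs. a multifunction display (MFD).
    @dataclass
    class TaskScenario:
        name: str
        condition: str          # "overhead_panel" or "mfd"
        tasks: list             # pilot task sequence
        max_allowed_s: float    # acceptance threshold for completion time

    scenarios = [
        TaskScenario(
            name="fuel reconfiguration during engine-fire checklist",
            condition="mfd",
            tasks=["silence warning", "select fuel page", "switch pumps"],
            max_allowed_s=20.0,
        ),
        TaskScenario(
            name="same tasks on the traditional overhead panel",
            condition="overhead_panel",
            tasks=["silence warning", "locate pump switches", "switch pumps"],
            max_allowed_s=20.0,
        ),
    ]

    for s in scenarios:
        print(f"{s.condition:>14}: {s.name} (limit {s.max_allowed_s} s)")

Defining scenarios as data in this way makes it straightforward to run the same worst-case task combinations against both implementations and compare the recorded times.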

Some manufacturers have adopted human-centered design processes that address human factors concerns from the beginning of product concept development. These processes recognize that product usability often goes beyond the interface and depends on the design of system functions and logic. The difficulties that many users have with modern electronic equipment relate not so much to the user interface as to the logic underlying it.

For example, it may take a pilot using a traditional flight management system (FMS) or a new GPS box several minutes of trial and error to enter a holding pattern around an unpublished waypoint into the flight plan. This is not because the FMS or GPS interface itself is poor, but because the required sequence of steps is unintuitive and difficult to remember.

As technology advances and equipment becomes more complex, increasing numbers of features are being squeezed into the limited physical space available for the user interface. In this context, usability issues become more cognitive than physical, and how a product’s functions are defined becomes more important than how the user interface is designed. Using a product becomes more a matter of learning and remembering sequences of steps and multifunction control logic than simply recognizing the operation of dedicated controls.

Avionics’ increasing complexity, resulting from the availability of new features, represents perhaps the biggest usability challenge for avionics manufacturers, in part because cognitive issues often are more difficult to analyze and resolve than issues involving the physical design of controls and displays. Without a systematic and objective basis for addressing issues of cognitive usability, manufacturers and regulators may find agreement on some critical design questions difficult.

Rapid Prototyping

Many manufacturers are using rapid prototyping during the conceptual design phase in order to get early feedback from regulators and potential users. Rapid prototyping is most useful for part-task simulations that focus on specific design elements. Common measures used in part-task simulations include the following:

  • Reaction time to signals such as alerts or new information,

  • Time required to start tasks,

  • Time required to complete tasks, and

  • The number and types of errors.

As the design matures and a more complete simulation environment becomes available, such as fixed-base or full-motion simulators, a greater range of objective and subjective measures becomes available for evaluating a design. It is helpful to build good data capture capabilities into such simulators to provide the measurements that may be needed for human factors certification.
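A minimal sketch of the kind of data capture such a simulator might build in follows; the event names and logging scheme are assumptions for illustration, not any vendor's implementation.

    import time

    # Minimal event logger for a part-task simulation trial. It timestamps
    # stimulus onset, first pilot input, task completion and errors so that
    # reaction time, time-to-complete and error counts can be computed later.
    class TrialLogger:
        def __init__(self):
            self.events = []  # (timestamp, kind, label) tuples

        def log(self, kind, label=""):
            self.events.append((time.monotonic(), kind, label))

        def summary(self):
            t = {kind: ts for ts, kind, _ in self.events}  # last time per kind
            errors = [lbl for _, kind, lbl in self.events if kind == "error"]
            return {
                "reaction_time_s": t["task_start"] - t["stimulus"],
                "completion_time_s": t["task_done"] - t["stimulus"],
                "error_count": len(errors),
            }

    log = TrialLogger()
    log.log("stimulus", "altitude alert presented")  # signal onset
    log.log("task_start")                            # first pilot input
    log.log("error", "selected wrong display page")
    log.log("task_done")                             # task completed
    print(log.summary())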

In some cases formal usability testing of the mature product may be needed to demonstrate acceptability. This is not a focus group activity run by marketing but rather a scientific test with rigorous measures. Where new design features may affect task performance, objective measures may be required. For example, the FAA policy memorandum on "Guidance for the Certification of Honeywell Primus Epic Systems" notes that the use of graphic icons for multifunction display item selection, instead of conventional text labels, departs from the traditional means of complying with 14 CFR 25.1555(a), which requires that cockpit controls be plainly marked as to function and method of operation.

To address this concern, the policy recommends that applicants demonstrate that the use of graphic icons instead of text labels does not increase task performance time in certain worst-case scenarios; a sketch of such a comparison appears after the list below. As graphic displays and other new technologies become more common in the flight deck, more and more manufacturers may have to address such issues through appropriate testing. Technology trends that may lead to such testing requirements include:

  • Direct graphical manipulation of onscreen objects,

  • Menu-driven interfaces,

  • Multifunction controls,

  • Electronic flight bags (EFBs), which replace paper documents with multiapplication electronic devices,

  • Graphical data products such as weather, and

  • Small-panel integrated avionics, in which critical functions, such as traffic alert and collision avoidance, may have to be accessed through multiple pilot actions.
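A minimal sketch of how the icon-vs.-text-label comparison recommended above might be analyzed follows, using a paired t-test. The task times are invented for illustration and are not certification data.

    from scipy import stats

    # Hypothetical completion times (seconds) for the same selection tasks
    # performed with conventional text labels and with graphic icons.
    text_label_times = [4.1, 3.8, 5.0, 4.4, 4.9, 3.7, 4.6, 4.2]
    icon_times       = [4.3, 4.0, 5.2, 4.5, 5.1, 3.9, 4.8, 4.4]

    # Paired t-test: did icons increase task time for the same tasks?
    t_stat, p_value = stats.ttest_rel(icon_times, text_label_times)

    diffs = [i - t for i, t in zip(icon_times, text_label_times)]
    mean_diff = sum(diffs) / len(diffs)
    print(f"mean increase with icons: {mean_diff:.2f} s "
          f"(t={t_stat:.2f}, p={p_value:.3f})")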

Just as important as these objective measures is systematically derived subjective data. Ultimately, FAA test pilots will make subjective, qualitative judgments about the design in order to ensure compliance with many of the subjective regulations. The use of structured, unambiguous and easy-to-use surveys, designed to elicit unbiased responses and facilitate valid statistical analysis, can help provide convincing evidence for subjective criteria.
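As a small illustration of structured subjective data collection, the sketch below summarizes hypothetical rating-scale responses; the statements and ratings are invented and are not drawn from any actual evaluation.

    import statistics

    # Hypothetical post-flight questionnaire responses on a 1-7 scale
    # (1 = strongly disagree, 7 = strongly agree).
    responses = {
        "The mode annunciations were easy to interpret.": [6, 5, 7, 6, 4, 6],
        "I always knew which vertical mode was active.": [5, 4, 6, 5, 3, 5],
    }

    for statement, ratings in responses.items():
        print(statement)
        print(f"  median {statistics.median(ratings)}, "
              f"range {min(ratings)}-{max(ratings)}, n={len(ratings)}")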

Not All Can Be Tested

JAA also is concerned with how an applicant will address human factors. For new designs, the agency requires the same process that FAA does and, in addition, requires a human error analysis. Like FAA, JAA allows the applicant to propose the method used to test for human error and to verify the results during flight test.

Regulators recognize that not all issues and design decisions can be tested and that attempting to do so can be prohibitively expensive. Testing is most likely to be required when the design is particularly novel or complex. For the majority of design decisions, regulators probably will want only a good explanation of why the decision was made the way it was. Ideally, the rationale should be based on accepted human factors principles, such as cultural conventions, user expectations and the application of accepted guidelines. This is particularly true for ergonomic considerations, such as text size, key spacing, use of color and other characteristics for which accepted standards already exist.

Victor Riley can be reached at [email protected].

Analytic Tools

Because human factors is a broad and complex field, there is a large body of related standards and guidelines. The Federal Aviation Administration (FAA) sponsors the development of several analytic tools to help regulators and manufacturers access the most useful and relevant of them. The tools are intended to make the design and certification processes faster, more comprehensive and more predictable for applicants, significantly reducing certification cost and risk.

The largest tool under development is FAA’s Aircraft Certification Job Aid for Flight Deck Human Factors, which is in beta test. This computer-based decision support tool contains a database of human factors considerations, cross-referenced with databases of regulatory material and other relevant documents. It will allow certification team members, for example, to access the database through an equipment designation (such as "airspeed indicator") and see the human factors considerations that may be associated with it. It also allows them to begin with human factors considerations, such as the availability and readability of information, and see all the regulatory material related to that consideration.

The human factors considerations database contains libraries of human factors guidelines and research results. The user can associate human factors considerations with equipment and material, quickly identifying high-level human factors issues that relate to a particular avionics system. Certification team members also can quickly identify the regulations and advisory material that may be relevant for a given project. And they can quickly find answers to specific human factors questions, such as the appropriate font size or contrast ratio for a display, or the acceptable response time of a control.
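A toy sketch of the kind of cross-referencing described above follows; the equipment names, considerations and citations are illustrative stand-ins, not the Job Aid's actual database contents.

    # Walk from an equipment designation to human factors considerations,
    # and from each consideration to related regulatory material.
    equipment_to_considerations = {
        "airspeed indicator": ["information readability", "alerting salience"],
        "multifunction display": ["menu depth", "information readability"],
    }

    consideration_to_material = {
        "information readability": ["14 CFR 25.1321", "AC 25-11"],
        "alerting salience": ["14 CFR 25.1322"],
        "menu depth": ["guidelines library entry (hypothetical)"],
    }

    def lookup(equipment):
        for consideration in equipment_to_considerations.get(equipment, []):
            refs = consideration_to_material.get(consideration, [])
            print(f"{equipment} -> {consideration}: {', '.join(refs)}")

    lookup("airspeed indicator")

The same two tables support the reverse query, starting from a consideration and listing the regulatory material related to it.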

The Certification Job Aid is intended to help FAA human factors specialists better meet the needs of certification projects, a large number of which are always ongoing. It also will help other certification team members better request the help of the human factors specialists. Research Integrations Inc., in Tempe, Ariz., is developing this tool with assistance from this author. The Certification Job Aid is a research program, but if it is successful, FAA may make it available to manufacturers so they can assess designs early on for potential human factors issues and better anticipate how their products will be evaluated by regulators. Because early problem identification reduces costs and risk, this should be an attractive proposition for many aircraft manufacturers and avionics vendors.

In addition to this general-purpose tool, FAA is sponsoring the development of more focused tools for specific classes of avionics products. For example, the Kansas State University Aviation Research Laboratory is developing a tool to evaluate aviation weather products for display formatting. Another tool for the evaluation of electronic flight bags (EFBs) is being developed by a team led by Divya Chandra (and including the author) at the John A. Volpe National Transportation Systems Center.

This tool has its roots in an EFB human factors guidelines document, "Human Factors Considerations in the Design and Evaluation of Electronic Flight Bags," prepared by the team. The EFB analysis tool combines a set of guidelines, recommendations and requirements for EFB design with a taxonomy of human factors issues to provide a high-level checklist of human factors issues and a detailed checklist of design considerations.
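In the same hedged spirit, the brief sketch below shows how a checklist might be derived by filtering guideline entries through an issue taxonomy; the categories and guideline texts are invented placeholders, not the tool's actual content.

    # Filter guideline entries by taxonomy category to emit a checklist.
    guidelines = [
        {"issue": "legibility", "text": "Text remains readable in direct sunlight."},
        {"issue": "workload", "text": "Chart panning requires few pilot actions."},
        {"issue": "legibility", "text": "Minimum font size is met at the design eye point."},
    ]

    def checklist(issue):
        return [g["text"] for g in guidelines if g["issue"] == issue]

    for item in checklist("legibility"):
        print(f"[ ] {item}")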

The EFB analysis tool is being designed to support teams of regulators as they review proposed EFB concepts. However, many of the items contained in the tool are generic and could apply beyond EFBs. A version of the tool is available at http://www.volpe.dot.gov/opsad/efb.

Additional information on FAA policies, certification processes, and addressing human factors during certification can be found at: www.airweb.faa.gov and www.generalaviation.org.
