By Bob Etris | February 5, 2018
Advanced technologies within the FAA’s NextGen program will provide enhanced tools to air traffic control specialists. Increased automation and decision support tools are intended to provide air traffic controllers with more accurate information earlier, so that they can manage more traffic proactively and make optimal decisions based on accurate projections.
Over time, this transition will subtly alter the air traffic controller’s role within the “system of systems” that constitutes the air traffic network. Controllers may find that they are no longer necessarily tactically involved in controlling every single flight, but are instead managing traffic “by exception,” intervening when a plan no longer holds or when something unexpected happens.
The challenge associated with introducing increased automation in any system is incorporating ways to keep the human actively aware of system performance. The operator must have “right-time” access to appropriate system information in a format that supports an accurate assessment of system performance. Only then can the human step in effectively when intervention is required.
In most cases, an automated system operates perfectly well without a human operator being actively involved — after all, that is the purpose of automation. However, there are circumstances when automation does not function as intended, or when an emerging situation necessitates a manual change to the automation parameters. Paradoxically, it is precisely because the automation executes many routine system functions that the human operator is not always primed and prepared to step up when needed.
For example, many drivers have followed a GPS instruction even when it was not the most appropriate action — turning right because the satellite navigation system “suggested” it, even though the maneuver was prohibited by a road sign. In these cases, reliance on the automation might be a case of “no harm, no foul,” provided that a police officer did not observe the behavior.
Sometimes this happens even when operators are advised not to delegate their decisions to the system. In the example above, the GPS instructions probably included a caveat that the human operator remained responsible for all actions, and the road sign prohibiting the right-hand turn may have been clearly visible. So why do people trust automation when, in some situations, they would be better off relying on their own judgment?
That is the million-dollar question, since trust in automation is a complex issue. There is sometimes a mismatch between the actual capabilities of a system and what the user believes those capabilities to be. This perception of capability is shaped by the user’s previous experience with the system. Users tend to trust automation that helps them achieve their goals — but in uncertain, ambiguous or non-routine situations, the user and the automation may be working toward different goals.
“Trust” in automation can also mean different things to different people. For example, having confidence that a system alarm is not a false alarm is different from having blind faith that an automation decision is correct, even in the presence of contradictory information. There are also cultural and generational differences in the use of, and trust in, technology and automation.
Understanding how new automation will interact with the human processes of acquiring and analyzing information, and making and executing decisions, is critical in designing tools that support the human user.
Introducing automation can sometimes seem like a Catch-22 situation. Controllers will not use automation that they do not trust, and yet interacting with the automation is the only way a controller can develop confidence in the system. Introducing the right version of the technology to the right group of people at the right time is critical to achieving a successful operational implementation.
Bob Etris is the aviation director at Evans Incorporated.