The fact that a flight management computer (FMC) would accept decision, rotate and climb speeds based on a takeoff weight that was too low by 100 metric tons raises an obvious question: what can be done to prevent a recurrence?
The case discussed in this space last month involved a distracted and hurried Singapore Airlines (SIA) crew, who tried to lift off below stall speed, deeply gouging the tail of their B747-400 (March 2004, page 53). As recounted in New Zealand's Transport Accident Investigation Commission (TAIC) report of this March 2003 incident, the first officer wrote down on the bug card a takeoff weight beginning with a "2" rather than a "3." The bug card is an interim step between the load sheet data and the FMC data. The first officer then calculated the decision speeds for 247 tons instead of 347 tons. These were duly entered into the flight management computer, which accepted them. Result: premature rotation, bent aluminum (what was left of it), battered reputations, and embarrassment all around.
This case is hardly unique. In 2002 an Air Canada A330 suffered a tailscrape on takeoff, again the result of bum data entered into a blithely cooperative FMC. More recently, a crew rotated at a speed some 30 knots too low, in that case avoiding a tailscrape. Although the incident is still under investigation, it appears that the FMC was fed the wrong zero fuel weight, say, 78 tons instead of 178 tons. Even though this value was less than the empty weight of the aircraft, the computer swallowed it.
Clearly, defenses against bad calculations are being breached. The problem is twofold: humans make errors, and the accommodating machines have no error-checking algorithms to catch them. The FMC is like a handheld calculator: any value entered supersedes what was there and triggers an immediate recomputation.
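The zero-fuel-weight case above shows how little checking would be needed to catch the grossest of these errors. As a minimal sketch, with illustrative (not type-certified) weight limits and hypothetical function names, an entry below the aircraft's operating empty weight could simply be refused:

```python
# Hypothetical sketch: refuse a zero fuel weight (ZFW) entry that is
# physically impossible. The weight figures below are illustrative
# ballparks, not certified values for any aircraft type.

OPERATING_EMPTY_WEIGHT_TONS = 129.0  # illustrative empty weight
MAX_ZERO_FUEL_WEIGHT_TONS = 246.0    # illustrative certified limit

def validate_zero_fuel_weight(zfw_tons: float) -> None:
    """Raise if the entered ZFW falls outside the physically possible band."""
    if zfw_tons < OPERATING_EMPTY_WEIGHT_TONS:
        raise ValueError(
            f"ZFW {zfw_tons} t is below operating empty weight "
            f"{OPERATING_EMPTY_WEIGHT_TONS} t: entry rejected"
        )
    if zfw_tons > MAX_ZERO_FUEL_WEIGHT_TONS:
        raise ValueError(
            f"ZFW {zfw_tons} t exceeds certified maximum "
            f"{MAX_ZERO_FUEL_WEIGHT_TONS} t: entry rejected"
        )

validate_zero_fuel_weight(178.0)  # a plausible value passes silently
```

A real FMC would draw these bounds from the aircraft's certified weight envelope; the point is only that a physically impossible entry, such as 78 tons, need never reach the V-speed computation at all.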
Timothy Crowch, an international airline captain with 16,000 hours, argues that a "radical overhaul" of man-machine integration is in order. He argues that the FMC represents Stone Age technology, compared with the advances made in other aircraft systems.
"Look at its hardware," he says. "The keyboard design is unique and has ignored the many years of experience with typewriters and the QWERTY layout. With early FMCs, the most one did was enter a few waypoints and coordinates, but the keypad design simply has not kept pace with the FMC's subsequent expansion to other uses."
Crowch also calls for more standardization as a key waypoint on the path to improved integration. "[T]here are a thousand ways for an airline to establish its takeoff calculation procedures: computers, thick manuals containing graphs and/or tables, data direct from dispatch, and so forth. It is time for the manufacturer to establish one process for its aircraft and that every customer accepts this process," he states.
One aircraft type. One takeoff calculation system. This standardization would enable the FMC to become a more fully integrated part of the aircraft, in which certain default values can be programmed. It also would incorporate tolerances that either will activate prompts to warn against data insertion errors or reject the erroneous input outright, Crowch says.
In the meantime, Crowch sees one threat and one questionable return on investment. First is what he calls the "bug card" threat.
"This card clearly is a relic from the old analog/three-crew [member] days," Crowch says. "The bug card serves absolutely no useful purpose, other than to heighten the system's risk exposure to human error. Data transfer from the load sheet to the bug card is irrelevant. Why should a hand-written document be checked when the load sheet and the FMC will contain all relevant data?"
Second, Crowch asks, "Is crew resource management (CRM) training, which these Singapore Airlines pilots had completed, bearing fruit in practice?"
"What we do see is that the big picture of this mission was lost, namely that this was a nine- to 10-hour flight with a very heavy takeoff. None of these idiotic figures (weights and speeds) rang any bells in any one of three heads. So much for the team effects of CRM."
A few suggestions may be worth considering. Simulator training could incorporate first and second officers who are briefed to be covertly unsupportive, to pay only lip service to CRM, and to introduce the odd gross error into the proceedings. It is up to the captain to detect and deter major fiascoes. To ensure that the load sheet figures are faithfully entered into the FMC, perhaps this document should be signed only after the data is entered into the FMC and confirmed. This extra step would thereby become a valid crosscheck.
Alternatively, the load controller could input the figures into the FMC directly via an ACARS (aircraft communications addressing and reporting system) interface. Such a capability definitely would get the cockpit crew's attention and would have them religiously checking the figures in longhand.
Short of that option, the FMC might feature a modest dollop of gross error-detection software for those cases where decision speeds don't seem to square with the aircraft weight. It is not impossible for even highly trained professionals to make seemingly blatant mistakes.
John Lauber, a senior safety official with Airbus and one of the recognized "fathers" of CRM, recalls a case in the early 1970s of a crew in a "classic" B747-100 (i.e., nil automation, at least regarding aircraft weights, V-speeds and such). "It was nearly identical to the SIA case, on takeoff from Frankfurt, as I recall," Lauber says. "The crew used V-speeds for a weight that was 200,000 pounds less than the aircraft's actual weight."
"This forever changed our view that the errors we had observed were mere artifacts of the simulator environment. This was real!" he exclaims. The outcome was the same as at Auckland: a tailstrike.
In these cases, gross error-detection software built into the flight management computer could cause the apparently odd output to blink repetitively, prompting the crew to look twice before blundering on. The critical trick is not to make no mistakes at all; rather, it is to ensure that every mistake is spotted before it reaches a commit point on the flight deck.
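One way such software might work, sketched here with reference figures and a tolerance that are illustrative assumptions rather than certified data, is to exploit the fact that takeoff speeds scale roughly with the square root of weight (lift varies with the square of airspeed), so an entered rotation speed can be tested against the speed the entered weight implies:

```python
import math

# Hypothetical gross-error check: VR rises roughly with the square root
# of takeoff weight, so a single reference point lets the FMC test whether
# an entered rotation speed is even plausible for the entered weight.
# The reference figures and 10-knot tolerance are illustrative assumptions.

REF_WEIGHT_TONS = 347.0   # illustrative heavy-weight reference point
REF_VR_KNOTS = 168.0      # illustrative VR at that reference weight
TOLERANCE_KNOTS = 10.0

def vr_plausible(weight_tons: float, entered_vr: float) -> bool:
    """Return True if the entered VR sits near the value the weight implies."""
    expected_vr = REF_VR_KNOTS * math.sqrt(weight_tons / REF_WEIGHT_TONS)
    return abs(entered_vr - expected_vr) <= TOLERANCE_KNOTS

# A VR computed for 247 tons, flown at an actual weight of 347 tons,
# sits some 26 knots below the implied speed and stands out:
vr_for_247 = REF_VR_KNOTS * math.sqrt(247.0 / REF_WEIGHT_TONS)
print(vr_plausible(347.0, vr_for_247))  # prints False: implausibly slow
```

The check need not be precise to be useful: a 100-ton weight error shifts the implied V-speeds by far more than any sensible tolerance band, which is exactly the class of "idiotic figures" that rang no bells at Auckland.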
David Evans can be reached by e-mail at [email protected].
Air Canada Tailstrike on Takeoff
An Air Canada A330 suffered a tailstrike on takeoff, June 14, 2002, for a flight from Frankfurt to Montreal. The following are extracts from the Transportation Safety Board of Canada accident report:
At 0752, the flight crew received the initial load figures from the aircraft communication addressing and reporting system (ACARS), indicating an estimated takeoff weight of 222.7 metric tons. A reduced takeoff thrust setting was planned.
The takeoff speeds, provided to the crew by the ACARS, were inserted into the multipurpose control display unit (MCDU) by the pilot not flying (PNF), seated in the right seat. The following speeds were inserted: decision speed (V1) 156 knots, rotation speed (VR) 157 knots, and takeoff safety speed (V2) 162 knots.
Either during the push back from the gate or during taxiing, the PNF reinserted the final load figures and takeoff speeds into the MCDU. By mistake, the PNF typed a V1 speed of 126 knots instead of 156 knots. Just prior to taking off, the pilot flying (PF) read the speeds off the MCDU as 126, 157 and 162. Neither pilot noted the incorrect V1 speed.
The MCDU is designed to display an error message if the data [is] out of range or not formatted correctly. In the case of takeoff speeds insertion, the message will appear only if the speed inserted is below 100 knots.
The aircraft tail struck the runway surface at a pitch attitude of about 10.4 degrees nose up. This class of error is known as a substitution error, where a character that was entered is substituted with erroneous information. … It is possible that the number "2," which is located directly above the number "5" on the keypad of the MCDU, was accidentally hit.
(For the full TSB report, see http://www.tsb.gc.ca/en/reports/air/2002/A02F0069/A02F0069.asp)
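A tighter check than the MCDU's 100-knot floor is easy to imagine. The following sketch, whose V1-to-VR margin is purely an illustrative assumption, tests the three entered speeds against one another; the mistyped V1 of 126 knots, some 31 knots below VR, would have been flagged:

```python
# Hypothetical consistency check, tighter than a flat 100-knot floor:
# require V1 <= VR <= V2, and require V1 to sit within a bounded margin
# of VR. The 15-knot margin is an illustrative assumption, not a
# certified figure for any aircraft type.

MAX_V1_VR_SPREAD_KNOTS = 15.0

def takeoff_speeds_consistent(v1: float, vr: float, v2: float) -> bool:
    """Return True only if the three speeds form a plausible ordered set."""
    if not (v1 <= vr <= v2):
        return False
    return (vr - v1) <= MAX_V1_VR_SPREAD_KNOTS

print(takeoff_speeds_consistent(156, 157, 162))  # prints True: intended speeds
print(takeoff_speeds_consistent(126, 157, 162))  # prints False: mistyped V1
```

Because the check compares the entries against each other rather than against a fixed floor, a single-keystroke substitution error of this kind is caught regardless of the aircraft's weight.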