OPINION: Why Mission Critical Systems are Needed to Achieve Safety in Urban Air Mobility

By Will Keegan | July 1, 2022

Will Keegan is the CTO of Lynx Software Technologies.

Artificial Intelligence (AI) is a hot term and interest in it is strong, with a recent Gartner study finding that 48% of enterprise CIOs have already deployed, or plan to deploy, AI and machine learning technologies this year. Interest in AI is at odds with AI maturity, however. For some industries (e.g. customer experience with chatbots), the payoff of being right is enough to justify AI experimentation and deployment. But when organizations are managing mission-critical AI applications, where the "cost of being wrong" on an outcome could be loss of life, AI maturity is a must-have, and accuracy and security are key differentiators in achieving safety.

Rushing safety engineering processes, building with new technology that regulators are still grappling with, and trying to generate an ROI on aircraft with historically 30-year production lifecycles is not a model for success. For industries like automotive and aerospace, consumer confidence that systems are safe is a must before this market can progress.

My company has partnered on several level 4 autonomy platforms, and we see a common design roadblock when organizations build safety nets to mitigate single points of failure for critical functions. The preferred approach to achieving redundancy is to replicate functions on independent sets of hardware (usually three sets, to implement triple modular redundancy).

Putting aside size, weight, power, and budget issues, replicating functions on identical hardware components can lead to common-mode failures, whereby redundant hardware components fail together due to a shared internal design flaw. Safety authorities therefore expect to see redundancy implemented with dissimilar hardware.
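To make the voting mechanism concrete, here is a minimal sketch in C of the 2-out-of-3 majority vote that triple modular redundancy relies on. The function names, the altitude example, and the agreement threshold are all hypothetical; the dissimilarity argument above is that the three channel values should be produced by independently designed hardware and software, so that a single design flaw cannot corrupt all three inputs to the voter.

    #include <stdbool.h>
    #include <math.h>

    /* Hypothetical 2-out-of-3 voter over altitude readings from three
     * dissimilar channels. Names and the agreement threshold are
     * illustrative, not taken from any particular avionics stack. */
    #define AGREE_EPS 1.0  /* readings within 1 m are treated as agreeing */

    static bool agree(double a, double b) { return fabs(a - b) < AGREE_EPS; }

    /* Writes a voted value and returns true when at least two of the three
     * channels agree; returns false (caller enters a fail-safe mode) when
     * all three disagree. */
    bool vote_altitude(double ch_a, double ch_b, double ch_c, double *out)
    {
        if (agree(ch_a, ch_b)) { *out = (ch_a + ch_b) / 2.0; return true; }
        if (agree(ch_a, ch_c)) { *out = (ch_a + ch_c) / 2.0; return true; }
        if (agree(ch_b, ch_c)) { *out = (ch_b + ch_c) / 2.0; return true; }
        return false; /* no majority: a detected, uncorrectable fault */
    }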

The adoption of dynamic architectures is hotly debated in the mission-critical community. Safety systems have typically been built around static methods: the goal of safety analysis is to examine a system's behavior and ensure all of it is predictable and will operate safely in its environment.

Static systems lend themselves to this analysis, because the system's functionality and parameters are fixed up front and available for human and automated static analysis. Letting fundamental system properties change dynamically creates prominent analysis obstacles.

The debate around adoption of dynamic capabilities centers on the notion that a system can modify its behavior to adapt to unpredictable scenarios during flight. "Limp home mode" is a capability that gains much from a dynamic architecture: when a major system failure happens (e.g. a bird is caught in a propeller), other parts of the system intelligently redistribute required functions across the remaining resources so that enough functionality survives to protect human life.
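As a rough illustration of what redistributing functions could look like at the software level, the sketch below (in C, with entirely hypothetical function and node names) remaps flight-critical functions off a failed compute node onto surviving nodes and sheds non-critical ones. A real dynamic architecture would drive this from a certified health monitor and pre-analyzed configuration tables rather than a hard-coded list.

    #include <stdbool.h>

    /* Hypothetical reallocation table for a "limp home" mode: when a compute
     * node is lost, flight-critical functions are remapped onto the surviving
     * nodes and non-critical functions are shed. All names are illustrative. */
    enum { NODE_COUNT = 3, FUNC_COUNT = 4 };

    typedef struct {
        const char *name;
        bool        flight_critical;  /* must keep running to protect life   */
        int         node;             /* node currently hosting the function */
    } function_t;

    static bool node_alive[NODE_COUNT] = { true, true, true };

    static function_t functions[FUNC_COUNT] = {
        { "attitude-control",   true,  0 },
        { "emergency-landing",  true,  1 },
        { "route-optimization", false, 1 },
        { "cabin-telemetry",    false, 2 },
    };

    /* Called when the health monitor declares a node failed. */
    void reallocate_after_failure(int failed_node)
    {
        node_alive[failed_node] = false;
        for (int i = 0; i < FUNC_COUNT; i++) {
            if (functions[i].node != failed_node)
                continue;
            if (!functions[i].flight_critical) {
                functions[i].node = -1;   /* shed: not restarted in limp mode */
                continue;
            }
            for (int n = 0; n < NODE_COUNT; n++) {
                if (node_alive[n]) {      /* move to the first surviving node */
                    functions[i].node = n;
                    break;
                }
            }
        }
    }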

AI is necessary because, without human oversight, computers must decide how to control machines at multiple levels, including mission-critical ones. The permutations of variables that can impact the state of the system are plentiful; model-driven system control and hazard analysis are essential to achieving level 5 autonomy safely. However, there are hundreds of nuanced artificial neural network architectures, all with tradeoffs. Over three decades, safety standards have come to support only a handful of programming languages (C, C++, Ada) that are understood well enough to give clear usage guidance and are backed by a mature ecosystem of tool suppliers.

Clearly the wide world of neural networks should be pared down, unpacked, and guided according to the objectives and principles cast in DO-178C DAL A and ISO 26262 ASIL D. The FAA publication TC-16/4, "Verification of Adaptive Systems," discusses the challenges particularly well. However, we still don't have strong usage guidelines or development process standards for artificial neural networks.

The foundation of advanced safety system analysis in the automotive industry is a massive model that maps passengers' relationships to vehicle interfaces and traces vehicle features into functions, which in turn resolve to software distributed across compute components. In the future, these models will become significantly more complex as they take on the dynamics of autonomous platforms. The big questions to start asking of these models now are: a) what is sufficient, and b) what is accurate?
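For illustration only, a toy fragment of such a traceability model might look like the following (in C, with hypothetical names): each passenger-facing feature is traced to the software functions that realize it and to the compute element each function is bound to. Hazard analysis then asks whether that mapping is sufficient and accurate; with autonomous platforms, the bindings themselves can change at runtime, which is exactly what makes the analysis harder.

    /* A toy fragment of the kind of traceability model described above:
     * passenger-facing features trace to software functions, which trace to
     * the compute elements they are deployed on. Names are illustrative. */
    typedef struct {
        const char *feature;    /* what the passenger experiences            */
        const char *function;   /* software function realizing the feature   */
        const char *compute;    /* hardware element the function is bound to */
    } trace_row_t;

    static const trace_row_t traceability[] = {
        { "automatic-braking", "object-detection",   "vision-soc-a" },
        { "automatic-braking", "brake-actuation",    "brake-ecu"    },
        { "lane-keeping",      "lane-detection",     "vision-soc-a" },
        { "lane-keeping",      "steering-actuation", "steering-ecu" },
    };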

Clearly, we need more certification. How can system validation happen for complex systems when those responsible lack knowledge of technical complexities, like kernel design and memory controllers, that are crucial to enforcing architectural properties? Component-level suppliers are generally not involved in system validation; rather, they are asked to develop products in line with strict documentation, coding, and testing processes, and to show evidence that they did.

However, a valid concern is whether such evidence can meaningfully demonstrate that the behavior of those components is consistent with system integrators' intentions.

In the automotive industry, aggressive claims were made about the timeline for level 5 autonomous platforms (no driver, no steering wheel, no environmental limitations) to become available. The reality was very different. The avionics industry is, rightly, being more conservative. I like the framework that the European Aviation Safety Agency published last year, which focused on AI applications that provide “assistance to humans.”

Key elements of this relate to building up a “trustworthiness analysis” of the artificial intelligence block based on:

  • Learning assurance: covering the shift from programming to learning, since existing development assurance methods are not adapted to cover AI/ML learning processes
  • Explainability: providing understandable information on how an AI/ML application arrives at its results
  • Safety risk mitigation: since it is not possible to open the "AI black box" to the extent needed, this provides guidance on how safety risk can be addressed to deal with the inherent uncertainty

From this, and from conversations we have held with customers, pragmatism seems to be the word that describes the industry's approach. Just as lane departure detection has become relatively commonplace in new vehicles, we will first see AI used in applications where the human remains in charge. An example would be a vision-based system that aids in-flight refueling procedures. These use cases, important but peripheral to the main system functionality, are great places to build trust in the technology.

From here, we will see the deployment of AI in increasingly challenging systems with "switch to human operation" overrides. Some analysts have suggested we may never reach the point of fully autonomous vehicles on our streets. I do believe, though, that we will reach the milestone of fully autonomous vehicles in the sky. The "crawl, walk, run" path the industry is currently on is exactly the right one to make that a reality.
