Autonomy & AI

EASA Awaits Industry Feedback on Initial Regulatory Guidance for Machine Learning Algorithms in Aircraft Systems

EASA has published initial guidance for regulating future machine learning applications and algorithms in aircraft systems. (Photo by Ali Shah Lakhani on Unsplash)

The European Union Aviation Safety Agency (EASA) has published a preview of the standards and guidelines aerospace engineers can expect to face when submitting future aircraft systems and solutions that use artificial intelligence (AI) and machine learning (ML) algorithms, in its new concept paper, “First Usable Guidance for Level 1 Machine Learning Applications.”

In the paper, EASA officials establish a baseline of development requirements for Level 1 AI applications, or those that provide human augmentation and cognitive assistance in human decision-making and action selection. The agency sees the new concept paper as the first step in the AI development roadmap that it published last year.

While the new paper focuses on Level 1 AI applications, it also helps outline the criteria an AI system or application would need to meet to be considered a Level 2 or Level 3 application. Level 2 applications feature human and AI-based collaboration, while Level 3 applications are AI-based systems that make decisions and take actions that humans can override.

“The current breakthrough is linked with [machine learning] ML, which is the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions. Deep learning (DL) is a subset of ML that emerged with deeper neural networks (NNs), leading to large improvements in performance in the recent years. DL produced significant improvements for many problems in computer vision and natural language processing (NLP), enabling use cases which were not possible before,” EASA writes in the concept paper.
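To make the quoted definitions concrete, here is a minimal Python sketch, not drawn from the EASA paper and using invented toy data, of a model that “improves its performance by exposure to data” rather than by following explicitly programmed instructions: a line is fit to noisy samples with plain gradient descent.

```python
# Minimal sketch (not from the EASA paper): a model "improves its performance
# by exposure to data" rather than by following explicitly programmed rules.
# Here, plain gradient descent learns y = 2x + 1 from noisy samples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=200)  # noisy observations

w, b = 0.0, 0.0  # untrained parameters
lr = 0.1         # learning rate
for epoch in range(100):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}, mse={np.mean((w * x + b - y) ** 2):.4f}")
```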

According to the paper, the initial guidelines are intended to apply to any system in development that uses ML techniques or incorporates ML algorithms and could be used in a safety-related aircraft system application, such as a flight management computer, engine control, or other navigation or communication system. The specific domains mentioned by EASA are those already covered in its “Basic Regulation,” including initial and continuing airworthiness equipment whose improper functioning could contribute to catastrophic or hazardous failure conditions.

Other domains in which EASA anticipates applications for AI-based system development projects include systems that support or replace pilot tasks, air traffic services, environmental protection, and maintenance systems, among others.

The paper’s authors also make an effort to give future applicants an understanding of why the regulatory guidance and mechanisms are being developed the way they are. For example, the classification of Level 1 AI applications is based on the four-stage model of human information processing and its equivalent in system functions that can be automated: information acquisition, information analysis, decision-making, and action implementation.
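As an illustration of how that classification scheme could be applied (the stage names come from the article above, but the assistance functions and the mapping below are hypothetical, not taken from the paper), a simple Python encoding might look like this:

```python
# Illustrative sketch only: the four information-processing stages named in the
# article, encoded as an enum. The function-to-stage mapping below is
# hypothetical and not taken from the EASA concept paper.
from enum import Enum


class ProcessingStage(Enum):
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_MAKING = 3
    ACTION_IMPLEMENTATION = 4


# Hypothetical Level 1 (human augmentation) assistance functions and the stage
# of human information processing each one supports.
example_functions = {
    "terrain_feature_detection": ProcessingStage.INFORMATION_ACQUISITION,
    "traffic_conflict_analysis": ProcessingStage.INFORMATION_ANALYSIS,
    "diversion_airport_ranking": ProcessingStage.DECISION_MAKING,
}

for name, stage in example_functions.items():
    print(f"{name}: supports {stage.name.lower()}")
```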

The agency also explains how the current regulatory framework, and its associated risk-based approach for certifying safety-critical aircraft software, equipment, and parts, is driven by a requirements-based “development assurance” methodology. That framework has to be adapted because AI-based systems differ fundamentally from conventional ones: rather than being programmed to perform a specific range of functions, they interpret data or situations and make new decisions or suggest new actions.

“Intuitively, the assurance process should be shifted on the correctness and completeness/representativeness of the data (training/validation/verification data sets) and on the learning and its verification. Most importantly, the main challenge lies in providing guarantees that the training performed on sampled data sets can generalize with an adequate performance on unseen operational data,” EASA writes in the new paper. “To this purpose, a new concept of ‘learning assurance’ is proposed to provide novel means of compliance. The objective is to gain confidence at an appropriate level that a ML application supports the intended functionality, thus opening the ‘AI black box’ as much as practicable.”
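The generalization concern in that quote can be illustrated with a deliberately simplified Python sketch; it is not EASA’s learning assurance process, just a comparison of a model’s performance on its training data against a held-out set standing in for unseen operational data, using synthetic data and scikit-learn.

```python
# Simplified illustration of the generalization concern quoted above; this is
# not EASA's "learning assurance" process, just a crude comparison of training
# performance against a held-out set on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
# Hold out a set standing in for "unseen operational data"
X_train, X_oper, y_train, y_oper = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
train_acc = accuracy_score(y_train, model.predict(X_train))
oper_acc = accuracy_score(y_oper, model.predict(X_oper))

# A large gap between training and held-out performance is one warning sign
# that the model may not generalize to operational data.
print(f"training accuracy: {train_acc:.3f}")
print(f"held-out ('operational') accuracy: {oper_acc:.3f}")
print(f"generalization gap: {train_acc - oper_acc:.3f}")
```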

A step for “learning assurance” is added to the proposed W-shaped model of AI-based system development and verification, which takes the place of the traditional V-shaped model engineers use today. The paper also outlines initial methods for data lifecycle management, data collection, and data preparation, underscoring for future applicants the importance of establishing trustworthiness in the models and data sets that will be central to the success of their future applications to certify AI-based aircraft systems.
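As one invented example of what a data lifecycle check might look like in practice (this specific test is an assumption, not a method defined in the paper), the sketch below compares the distribution of a single feature in the training set against newly collected operational data using a two-sample Kolmogorov-Smirnov test from SciPy.

```python
# Invented illustration of one kind of data-lifecycle check, in the spirit of
# the paper's emphasis on data representativeness; it is not a method defined
# by EASA. A two-sample Kolmogorov-Smirnov test flags a feature whose
# distribution in newly collected data has drifted away from the training set.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
# Simulated operational data with a slight distribution shift
operational_feature = rng.normal(loc=0.3, scale=1.0, size=5000)

result = ks_2samp(training_feature, operational_feature)
if result.pvalue < 0.01:
    print(f"distribution shift detected "
          f"(KS={result.statistic:.3f}, p={result.pvalue:.2e})")
else:
    print("no significant shift between training and operational samples")
```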

EASA’s new concept paper comes as a number of companies continue to give research and development program updates on some of the next-generation machine learning applications they are developing for aircraft systems. Daedalean, the Swiss startup developing an autopilot system that uses a neural network for image recognition and for aiding pilots in cognitive decision-making tasks, is referenced several times in the new paper as a source of some of the insights EASA features in it.

During a Feb. 24 “Fly AI” webinar hosted by Eurocontrol, Baptiste Lefevre, advanced technologies regulation manager for Thales, said that the French avionics supplier first approached EASA about a version of its FlytX cockpit system that will feature an AI assistant as far back as November 2018.

EASA is taking comments on the new paper during a 10-week consultation period through June 30.

“These guidelines will evolve over the next 3 years through publication of documents respectively for Level 2 and Level 3 AI applications, while being updated based on its application to Level 1 AI applications,” EASA writes in the new paper. “They may evolve as well depending on the research and technology development in the dynamic field of AI research.”

 
