The United States Federal Aviation Administration (FAA) has released its initial Roadmap for Artificial Intelligence Safety Assurance, which recognises the growing use of AI technologies in the aviation sector, including autonomous systems.
The FAA says the aviation industry currently lacks a method for the safety assurance of AI. The roadmap's objectives are therefore to establish guiding principles for assuring the safety of AI in aviation and to set priorities and plans for its safe introduction. To help create the roadmap, the FAA held a series of technical interchange meetings throughout 2023 and 2024 to hear a range of industry opinions and priorities.
The Safety Continuum
One of the guiding principles of the roadmap is to leverage the Safety Continuum, which refers to the spectrum of risk levels acceptable to society. The highest level of safety is expected for scheduled passenger service, with a lower safety threshold accepted for experimental research flights and drone operations. “We can gain experience with AI in experimental aircraft, without trying to provide the assurance that would be required for that AI to be used in scheduled passenger air carrier operations,” the FAA says. “Small uncrewed aircraft also provide ideal vehicles and operations to gain early experience which can be used to further inform future versions of this roadmap. The experience that is gained can inform safety assurance methods relevant to other applications and safety objectives.”
Distinguishing between learned and learning AI
The roadmap underlines the importance of differentiating between learned (or static) AI and learning (or dynamic) AI.
“The safety assurance for a learned AI implementation can be performed as part of the system design and validation,” the FAA says. “Once completed, the AI implementation is accepted, and the in-service monitoring of the AI implementation is part of the continuous operational safety (COS) programme for the aircraft. A system designer may record in-service operational data to further train the AI implementation and deploy an updated version, but each new version is subjected to safety assurance. The developer of a learned AI implementation must consider the range of operating conditions that will be encountered throughout the product lifecycle as part of the initial safety assurance. Changes in that environment that depart from the range of operating conditions initially considered may impact the performance or functionality of the AI and would be addressed under the COS programme in the same way they would be for traditionally designed systems.”
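To make the learned-AI lifecycle concrete, the sketch below shows, in Python, how a release pipeline might gate each retrained version behind a safety-assurance check before it replaces the accepted version in service. The names here (ModelVersion, OPERATING_ENVELOPE, the validation-suite interface) are hypothetical illustrations, not artifacts defined by the roadmap.

```python
from dataclasses import dataclass

# Hypothetical declared range of operating conditions covered by the
# initial safety assurance (illustrative values only).
OPERATING_ENVELOPE = {"altitude_ft": (0, 45_000), "temp_c": (-60, 50)}

@dataclass(frozen=True)  # frozen: a learned implementation does not change in service
class ModelVersion:
    version: str
    weights_hash: str    # identifies the exact trained artifact

def passes_safety_assurance(model: ModelVersion, validation_suite) -> bool:
    """Run the version through a validation suite covering the full
    declared operating envelope; every case must pass before release."""
    return all(case.evaluate(model) for case in validation_suite)

def release(candidate: ModelVersion, validation_suite, deployed: list) -> bool:
    """Each new version is subjected to safety assurance before deployment;
    an accepted version then enters in-service (COS) monitoring."""
    if passes_safety_assurance(candidate, validation_suite):
        deployed.append(candidate)  # accepted: now monitored under COS
        return True
    return False  # rejected: the previously assured version stays in service
```

The frozen dataclass captures the key property of a learned implementation: it does not change after acceptance, and only a new, separately assured version can replace it.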
Meanwhile, the FAA says learned AI implementations with frequent offline updates present both an opportunity and a challenge. “The collection of in-service data provides a means to monitor how well the deployed product is working to quickly detect in-service deficiencies and inform how urgently they should be corrected,” the roadmap states. “The frequency of updating the version is constrained by the need to provide safety assurance of the update. This is an important consideration in the development of safety assurance methods, which should describe sufficient and proportional strategies to assure the safety of a revision.”
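One minimal reading of that monitoring loop, with entirely illustrative metric names and thresholds: compare the in-service failure rate against the rate accepted at assurance time, and classify how urgently a deficiency should be corrected.

```python
def classify_deficiency(in_service_rate: float, assured_rate: float) -> str:
    """Compare an observed in-service failure rate against the rate accepted
    during safety assurance. Thresholds are purely illustrative."""
    if assured_rate <= 0:
        raise ValueError("assured_rate must be positive")
    ratio = in_service_rate / assured_rate
    if ratio <= 1.0:
        return "none"       # performing within the assured level
    if ratio <= 2.0:
        return "monitor"    # mild degradation: analyse the trend
    if ratio <= 10.0:
        return "urgent"     # correct in the next assured update
    return "immediate"      # consider withdrawing the version from service

# Example: a deployed version showing 3x its assured failure rate.
print(classify_deficiency(in_service_rate=3e-6, assured_rate=1e-6))  # urgent
```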
The FAA says a system that continues to learn in the operating environment must build its safety assurance into the operating environment or include safety assurance as part of the process of learning. It adds that learning systems may necessitate new regulations to assure the continued safety of the evolving system.
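A minimal sketch of the second option, building safety assurance into the learning process itself: each online update is adopted only if the updated model still passes an automated battery of safety tests; otherwise the previously assured model is kept. The train_step and passes interfaces here are hypothetical.

```python
import copy

def safe_online_update(model, batch, safety_tests):
    """Perform one online learning step, adopting the result only if the
    updated model still passes the built-in safety tests.

    `model.train_step(batch)` and `test.passes(model)` are hypothetical
    interfaces used purely for illustration.
    """
    candidate = copy.deepcopy(model)  # learn on a copy, never in place
    candidate.train_step(batch)       # the system learns in operation
    if all(test.passes(candidate) for test in safety_tests):
        return candidate              # assurance passed: adopt the update
    return model                      # assurance failed: keep the assured model
```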
There is, of course, an opportunity for AI to provide an additional perspective when analysing system risks and mitigations. For example, “continued operational safety programmes rely on safety metrics and key performance indicators that may benefit from an AI application to identify trends that are unrecognisable by a human observer without additional analysis. Such safety monitoring systems would serve in real time, as appropriate, to provide a safety status that can be used within, or for oversight over, an operation.” The use of AI to improve the efficient use of airspace is outside the scope of the roadmap.
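As a toy illustration of the kind of trend a human reading raw logs might miss, the sketch below applies an exponentially weighted moving average with a z-score alarm to a stream of safety metric values. This is one possible technique assumed for illustration, not a method named in the roadmap.

```python
def ewma_trend_alarm(values, alpha=0.1, threshold=3.0):
    """Flag indices where a safety metric departs from its exponentially
    weighted history by more than `threshold` standard deviations."""
    mean, var, alarms = values[0], 0.0, []
    for i, x in enumerate(values[1:], start=1):
        std = var ** 0.5
        if std > 0 and abs(x - mean) / std > threshold:
            alarms.append(i)  # a gradual drift easy to miss in raw logs
        diff = x - mean       # update the weighted mean and variance
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return alarms

# Example: a metric that is flat for 50 samples, then drifts slowly upward.
series = [1.0] * 50 + [1.0 + 0.02 * k for k in range(50)]
print(ewma_trend_alarm(series))  # indices where the drift becomes significant
```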
Collaboration plans
The FAA says ongoing collaboration is essential. In developing and applying safety assurance methods to AI, it will participate in developing industry consensus standards and adopt them, as appropriate, once they are completed and applicable to aviation safety assurance. The agency is also pursuing global harmonisation with other civil aviation authorities. As its AI strategy progresses, the FAA will host technical exchanges for the aviation community to share experiences and lessons learned regarding assurance concepts and methods in a non-binding and open forum.
Towards the end of 2024, the FAA expects to issue a policy statement informing applicants who plan to use AI in their systems, or in the development of documents, that they should disclose that use and discuss a certification path with the FAA early in the development phase. The FAA says it will be directly involved in AI projects “due to [their] novel or unusual nature”.