AI ACT IN AVIATION: EASA GUIDANCE

Introduction

Article 108 of the EU Artificial Intelligence Act (“AI Act”) introduces a significant amendment to one of the cornerstone instruments of European aviation law: Regulation (EU) 2018/1139 (the “Basic Regulation”). Because this Regulation forms the legal basis for the EU’s entire civil aviation framework, including the unmanned aviation regime, the amendment has a direct impact on drone regulation. The core principle is the following: when delegated acts are adopted under Regulation (EU) 2018/1139 and relate to AI systems that qualify as “safety components” under the AI Act, those acts must incorporate the high-risk AI requirements laid down in Chapter III, Section 2 of the AI Act.

Amendments Introduced by the AI Act to the Basic Regulation

Under Article 3(14) AI Act, a safety component is defined as “a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.”

In order to operationalise this new legal interface, EASA has published a two-part Notice of Proposed Amendment (“NPA 2025-07”), currently open for consultation.

The proposal expressly applies, among other stakeholders, to manufacturers of unmanned aircraft. Stakeholders may submit comments through the EASA Comment-Response Tool (“CRT”) at http://hub.easa.europa.eu/crt/. The deadline for comments is 10 February 2026.

Rationale Behind the Proposal

EASA’s proposal stems from several structural gaps in current aviation safety and human-factors methodologies:

  • Existing development assurance methods do not sufficiently address the stochastic and non-deterministic nature of machine-learning models.
  • Current human-factors assessment techniques do not capture the new types of interaction enabled by AI-driven interfaces. When developing human-AI teaming concepts, issues such as shared situational awareness and the realistic allocation of responsibility must be addressed, without anthropomorphising AI systems.
  • AI and machine-learning technologies raise additional ethical considerations that traditional aviation certification frameworks were not designed to address.

To respond to these challenges, EASA proposes a new regulatory instrument: AI-specific Detailed Specifications (DSs), complemented by AMC (Acceptable Means of Compliance) and GM (Guidance Material).

The proposed framework excludes:

  • AI systems whose failure could directly cause fatalities or multiple life-threatening injuries, typically involving loss of the aircraft or major uncontained environmental effects;
  • AI systems with online learning capabilities, where any failure contribution would be more severe than “no safety effect”;
  • Logic-based, knowledge-based or hybrid AI systems when their failure contribution would be more severe than “no safety effect”;
  • AI-based verification tools used to assess AI-generated artefacts, unless every output of the verification tool is independently verified by a human.

New Requirements for AI Systems in Aviation

To comply with the new AI framework, drone manufacturers and operators will need to implement a multi-step process. At a high level, stakeholders must:

  1. Conduct an AI mapping and automation-level classification: all AI systems must be identified and classified according to EASA’s automation levels (Level 1, Level 2, or Level 3), distinguishing between human augmentation, human assistance, cooperation, collaboration, and advanced automation (a minimal inventory record illustrating this mapping is sketched after this list).
  2. Update and integrate the Concept of Operations (“ConOps”): the ConOps must be revised to incorporate the intended AI functionalities and constraints as provided in NPA 2025-07.
  3. Define and/or adapt the Operational Domain (OD): the OD is defined as: “the set of operating conditions under which a given AI-based system is specifically designed to function as intended, in line with the defined ConOps”.
  4. Perform AI risk and ethics assessments: two assessments are required: (i) an AI risk assessment, based on the criteria and tabular model set out in DS.AI.130; (ii) an AI ethics assessment, specifically mandated by DS.AI.140. Additionally, compliance with DS.AI.160 requires that the AI system be designed with monitoring and recording capabilities that enable continuous risk assessment throughout the system’s lifecycle.
  5. Implement an AI development and integration phase aligned with the OD: the AI system must be built and validated to ensure it performs as intended within the defined Operational Domain and at its specified performance level.
  6. Ensure proper human-AI coordination: design must account for the interaction between the human operator and the AI system, ensuring that responsibilities, control capabilities, and supervisory functions are properly aligned, consistent with the assigned automation level.
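
To make the mapping exercise in step 1 more concrete, the following minimal sketch shows how an AI-system inventory record might be structured in code. It is purely illustrative: the automation-level labels follow the NPA’s classification scheme, but every field name, the record layout and the example values are our own assumptions, not terminology or formats mandated by EASA.

    # Purely illustrative sketch of an AI-system inventory record for step 1
    # (AI mapping and automation-level classification). The level labels follow
    # NPA 2025-07; every field name and the record layout are assumptions made
    # for this example, not requirements taken from the DS.
    from dataclasses import dataclass
    from enum import Enum


    class AutomationLevel(Enum):
        LEVEL_1A = "Human augmentation"
        LEVEL_1B = "Human assistance"
        LEVEL_2A = "Human-AI cooperation"
        LEVEL_2B = "Human-AI collaboration"
        LEVEL_3A = "Safeguarded advanced automation"


    @dataclass
    class AiSystemRecord:
        name: str                             # e.g. a detect-and-avoid perception module
        automation_level: AutomationLevel     # step 1: classification per DS.AI.110
        conops_reference: str                 # step 2: where the function appears in the ConOps
        operational_domain: str               # step 3: operating conditions the system is designed for
        risk_assessment_done: bool = False    # step 4(i): DS.AI.130 risk assessment
        ethics_assessment_done: bool = False  # step 4(ii): DS.AI.140 ethics assessment
        continuous_monitoring: bool = False   # DS.AI.160: monitoring and recording capability
        rationale: str = ""                   # defensible narrative for the chosen level


    # Hypothetical entry for an obstacle-detection function on a drone.
    inventory = [
        AiSystemRecord(
            name="Obstacle detection and display",
            automation_level=AutomationLevel.LEVEL_1A,
            conops_reference="ConOps section 4.2 (hypothetical)",
            operational_domain="VLOS, daylight, below 120 m AGL (hypothetical)",
            rationale="Provides perception only; the remote pilot takes every decision.",
        )
    ]

    for record in inventory:
        print(record.name, "->", record.automation_level.value)

In practice, such a record would then feed the risk and ethics assessments in step 4 and the monitoring obligations under DS.AI.160.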

Scope of NPA 2025-07 and focus on software qualification for drones

For clarity, it is important to recall that NPA 2025-07 is structured into three layers:

  • Detailed Specifications (DS): the primary technical standards and reference requirements;
  • Acceptable Means of Compliance (AMC): non-binding but authoritative ways to demonstrate compliance with the DS;
  • Guidance Material (GM): interpretative and explanatory material intended to assist operators and manufacturers in applying the rules in practice.

The analysis provided in this article is not exhaustive and cannot replace a full reading of the NPA and all other applicable EU and national sources. It must always be applied in light of the specific operational scenario, the system architecture and the applicable regulatory framework.

Given the length and complexity of NPA 2025-07, this section focuses exclusively on software and AI-system qualification for the aviation domain (thus including drones), which in practice will often be the first and most critical step. UAS manufacturers and operators subject to these rules will need a defensible narrative explaining why a particular AI system has been assigned a given level of automation and risk classification.

When do the NPA 2025-07 rules apply to AI on drones?

The proposed DS on AI apply, in essence, to AI-based systems that:

  1. Have been classified as high-risk AI systems under the AI Act as a result of the risk assessment required by the DS on AI (in particular, the process outlined in DS.AI.130); and
  2. Are classified as Level 1 or Level 2 AI-based systems under DS.AI.110, i.e.:
    • Level 1: human augmentation or assistance;
    • Level 2: human–AI cooperation or collaboration.

For further detail on the level definitions and their practical boundaries, reference should also be made to EASA’s concept paper on Level 1 and Level 2 AI systems.

Two practical principles are worth emphasising:

  • Highest level prevails: where more than one classification could plausibly apply to the same AI system, the operator must adopt the highest level of automation applicable.
  • Alignment of responsibility and control: the allocation of responsibility to the end user must be aligned with their actual capacity to control and interact with the AI system. A purely formal allocation of responsibility is not sufficient if, in practice, the user cannot effectively supervise or override the system.
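
The two applicability conditions and the highest-level-prevails principle can be combined into a short sketch. The function names and the boolean input below are assumptions for illustration only; the actual determinations must come from the DS.AI.130 risk assessment and the DS.AI.110 classification exercise.

    # Illustrative sketch only: applicability gates and "highest level prevails".
    from enum import IntEnum


    class Level(IntEnum):
        # IntEnum so that "higher authority" compares as a larger value.
        L1A = 1   # human augmentation
        L1B = 2   # human assistance
        L2A = 3   # human-AI cooperation
        L2B = 4   # human-AI collaboration
        L3A = 5   # safeguarded advanced automation


    def assigned_level(candidate_levels: list[Level]) -> Level:
        """Where several classifications plausibly apply, adopt the highest one."""
        return max(candidate_levels)


    def ds_ai_applies(is_high_risk_under_ai_act: bool, level: Level) -> bool:
        """The proposed DS on AI apply to high-risk systems classified Level 1 or 2."""
        return is_high_risk_under_ai_act and level <= Level.L2B


    # Example: a function that could be read as either Level 1B or Level 2A is treated as 2A.
    level = assigned_level([Level.L1B, Level.L2A])
    print(level.name, ds_ai_applies(True, level))   # -> L2A True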

The six levels of automation for AI on drones

The NPA 2025-07 distinguishes six levels of automation, ordered by increasing AI authority. Below is a drone-oriented reading of the first five of those levels.

Level 1A: Human augmentation

At Level 1A, the tasks assigned to the AI system are limited to automating information acquisition and perception. The system may, for example, detect objects, classify obstacles, or provide enriched situational awareness to the remote pilot.

  • The AI does not constrain or shape the user’s authority.
  • The end user retains full decision-making power and, as a result, full responsibility for actions and omissions.

A typical drone example would be a perception module that detects non-cooperative traffic and highlights it on the operator’s display, without suggesting or enacting any avoidance manoeuvre.

Level 1B: Human assistance

In Level 1B, the AI system still leaves all final decisions to the human but actively supports the decision-making process. It processes data, evaluates options and may propose specific courses of action, while the drone operator remains entirely in charge.

The boundary between Level 1A and Level 1B turns on how the end user decides:

  • At Level 1A, the operator simply uses AI-generated information (e.g. predictions, alerts) to increase situational awareness and cognitive capacity.
  • At Level 1B, the AI system suggests a course of action based on its outputs, presenting the human with one or more options.

In both cases, the human operator always takes the final decision.

Example (detect and avoid non-cooperative traffic):

  • A Level 1A system would stop at intruder detection and display the intruder to the operator.
  • A Level 1B system, in the same scenario, could propose several alternative avoidance manoeuvres (e.g. climb, turn left, hold position), leaving it to the pilot to select and command the manoeuvre.
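
The sketch below shows, under purely hypothetical names and data types, how this Level 1A / Level 1B boundary could surface in a detect-and-avoid software interface: the 1A function stops at perception, the 1B function additionally proposes manoeuvre options, and in both cases the remote pilot selects and commands the manoeuvre.

    # Hypothetical detect-and-avoid interface illustrating the 1A / 1B boundary.
    from dataclasses import dataclass


    @dataclass
    class Intruder:
        bearing_deg: float
        distance_m: float


    def detect_intruders(sensor_frame: bytes) -> list[Intruder]:
        """Level 1A: information acquisition only -- detections shown to the pilot."""
        # A real system would run a perception model here; this stub returns a fixed value.
        return [Intruder(bearing_deg=45.0, distance_m=350.0)]


    def propose_avoidance_options(intruders: list[Intruder]) -> list[str]:
        """Level 1B: decision support -- candidate manoeuvres, never an executed action."""
        if not intruders:
            return []
        return ["climb 30 m", "turn left 20 deg", "hold position"]


    # In both cases the pilot decides; the AI never commands the autopilot at Level 1.
    intruders = detect_intruders(b"")
    options = propose_avoidance_options(intruders)
    chosen = options[0]  # stands in for the remote pilot's selection
    print(f"{len(intruders)} intruder(s) detected; pilot selected: {chosen}")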

Level 2A: Human & AI cooperation

At Level 2A, the AI system goes a step further and is allowed to make automatic decisions and implement actions (for example, issuing autopilot commands), while the end user still retains full control and responsibility.

The distinction between Level 1B and Level 2A is again the shift from:

  • Decision support only (Level 1B), to
  • Automatic decision-making and action implementation by the AI system (Level 2A).

For drones, a Level 2A example in a detect-and-avoid context would be an AI module that, once an intruder is detected and classified, automatically commands the autopilot to execute a predefined avoidance manoeuvre.

Two conditions are essential for Level 2A:

  • The AI’s decisions and actions are fully monitored by the end user; and
  • The end user can override the AI at any time, for instance by disengaging the autopilot and manually commanding a different manoeuvre.
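
A minimal sketch of these two conditions follows. The autopilot interface and function names are invented for illustration and do not correspond to any real autopilot API; the point is simply that the AI may act automatically while every action remains visible to the operator, who can override it at any time.

    # Hypothetical Level 2A pattern: automatic action, full monitoring, permanent override.
    from typing import Optional


    class Autopilot:
        def __init__(self) -> None:
            self.engaged_manoeuvre: Optional[str] = None  # currently commanded manoeuvre, if any

        def command(self, manoeuvre: str) -> None:
            self.engaged_manoeuvre = manoeuvre

        def disengage(self) -> None:
            self.engaged_manoeuvre = None


    def level_2a_avoidance(autopilot: Autopilot, intruder_detected: bool, log: list[str]) -> None:
        """The AI decides and acts automatically, and every action is logged for the operator."""
        if intruder_detected:
            manoeuvre = "climb 30 m"                  # predefined avoidance manoeuvre
            autopilot.command(manoeuvre)              # the AI acts without waiting for approval
            log.append(f"AI commanded: {manoeuvre}")  # condition 1: fully monitored by the end user


    def operator_override(autopilot: Autopilot, log: list[str], manual_manoeuvre: str) -> None:
        """Condition 2: the end user can disengage and command something else at any time."""
        autopilot.disengage()
        autopilot.command(manual_manoeuvre)
        log.append(f"Operator override: {manual_manoeuvre}")


    monitoring_log: list[str] = []
    ap = Autopilot()
    level_2a_avoidance(ap, intruder_detected=True, log=monitoring_log)
    operator_override(ap, monitoring_log, "turn right 45 deg")
    print(monitoring_log)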

Level 2B: Human & AI collaboration

Level 2B represents a higher degree of automation. Here, the AI system undertakes both decision-making and action implementation, and although responsibility still formally rests with the end user, the user’s practical authority is only partial.

Both Level 2A and 2B involve automatic decision-making and execution. The key boundary is:

  • At Level 2A, the end user is expected to cross-check decisions and retains a strong, immediate override capability for each action.
  • At Level 2B, the AI can take charge of some decisions without prior cross-checking; the end user instead monitors the effects of those decisions and their implementation, and intervenes only if necessary.

A typical example from air traffic control is an executive ATCO delegating certain conflict detection and resolution tasks to a Level 2B system, which resolves some conflicts independently, while the controller monitors and can stop the automation.

For drones, a similar pattern could appear where an AI-based UTM/U-space or fleet-management system independently manages trajectory adjustments for certain UAS within predefined parameters, with the operator supervising overall outcomes and intervening only in certain cases.
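
The following sketch, built entirely on hypothetical thresholds and names, illustrates the Level 2A / Level 2B boundary in such a fleet-management setting: adjustments within a predefined envelope are applied without prior cross-checking, while the operator monitors the resulting effects and handles anything that is escalated.

    # Hypothetical Level 2B pattern: autonomous action within predefined parameters,
    # with the operator monitoring effects rather than cross-checking each decision.
    def within_predefined_parameters(adjustment_m: float, limit_m: float = 50.0) -> bool:
        """Adjustments inside the agreed envelope may be applied autonomously."""
        return abs(adjustment_m) <= limit_m


    def apply_adjustment(uas_id: str, adjustment_m: float, effects_log: list[str]) -> bool:
        """Return True if applied autonomously (2B behaviour), False if escalated to the operator."""
        if within_predefined_parameters(adjustment_m):
            effects_log.append(f"{uas_id}: altitude adjusted by {adjustment_m} m")  # operator monitors effects
            return True
        effects_log.append(f"{uas_id}: adjustment of {adjustment_m} m escalated to operator")
        return False


    effects: list[str] = []
    apply_adjustment("UAS-01", 20.0, effects)    # handled autonomously
    apply_adjustment("UAS-02", 120.0, effects)   # outside the envelope -> operator decides
    print("\n".join(effects))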

Level 3A: Safeguarded advanced automation

Level 3A marks the threshold of advanced automation. The AI system has a high level of authority over both decisions and actions, while the end user’s direct oversight is significantly reduced.

The line between Level 2B and Level 3A is drawn by the extent of the AI system’s authority and the limited ability of the end user to override:

  • In Level 2 (A and B), a prerequisite is that the end user can intervene on every decision and action of the AI system.
  • In Level 3A, the end user may intervene only in specific and extreme cases, typically when an intervention is necessary to ensure safety.

In a UAS context, one could imagine an operator supervising a fleet of unmanned aircraft (swarm), where Level 3A AI systems plan and execute most mission steps autonomously. The human operator might only be able to terminate a specific UAS operation upon receiving an alert that it has exceeded its operating conditions, without being involved in each intermediate decision.
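
As a purely illustrative sketch of this Level 3A pattern, the snippet below models an operator whose only lever is terminating an operation after a safety-critical alert; the alert structure and the intervention rule are assumptions made for this example, not requirements drawn from the NPA.

    # Hypothetical Level 3A pattern: intervention limited to specific, safety-driven cases.
    from dataclasses import dataclass


    @dataclass
    class Alert:
        uas_id: str
        reason: str          # e.g. "operating conditions exceeded"
        safety_critical: bool


    def operator_may_intervene(alert: Alert) -> bool:
        """At Level 3A the end user intervenes only in specific and extreme cases."""
        return alert.safety_critical


    def handle_alert(alert: Alert) -> str:
        if operator_may_intervene(alert):
            return f"Terminate operation of {alert.uas_id} ({alert.reason})"
        return f"No intervention path for {alert.uas_id}; the automation continues"


    print(handle_alert(Alert("UAS-07", "operating conditions exceeded", safety_critical=True)))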

Conclusion

NPA 2025-07 offers drone manufacturers and operators a structured path to align AI-based UAS systems with the AI Act through a progressive, risk-based framework. This section has focused on one critical aspect: software and AI-system qualification, and in particular the classification of automation levels and the alignment of responsibility with real control capabilities.

It does not cover all sections of the NPA, nor does it replace a thorough legal and technical assessment of the applicable rules. However, it should help UAS stakeholders frame the right questions:

  • What level of automation does each AI function truly have?
  • Can the end user realistically supervise and override it?
  • Does the chosen classification match the risk profile identified under the AI Act?

From here, the next step is to embed these classifications into your ConOps, Operational Domain definition and assurance strategy, and, where needed, to seek specialist support in designing and documenting AI-enabled UAS that are both compliant and defensible.

Connect with us

Thank you for taking the time to read our article. We hope you found it informative and engaging. If you have any questions, feedback, or would like to explore our services further, we’re here to assist you.

Follow Us

Stay updated and connected with us on social media for the latest news, insights, and updates:

LinkedIn: Lexify