AI ACT IN DEFENCE: A TRUE EXEMPTION?

Introduction  

In recent years, the use of artificial intelligence (AI) in the field of defence has attracted increasing strategic and regulatory attention. The entry into force of Regulation (EU) 2024/1689 (the “AI Act”), the European regulation governing the development and deployment of artificial intelligence systems, has raised significant questions for the defence and national security sector, which has traditionally been subject to special exemptions. However, the distinction between civil and military applications, especially in the case of dual-use technologies, proves to be far from straightforward.

Application of AI Act in the defence sector 

Is it really true that the AI Act does not apply to AI systems put into service and/or placed on the market for defence or national security purposes? The question addressed in this article arises from the provisions of Article 2 of the AI Act. On its face, the Regulation does not apply where AI systems are placed on the market, put into service or used, with or without modification, exclusively for military, defence or national security purposes, regardless of the entity carrying out these activities.

Further implications according to Recital 24 of the AI Act

This exclusion might seem trivial, but in reality it is not, since Recital 24 of the AI Act itself states that:

(a) “Nonetheless, if an AI system developed, placed on the market, put into service or used for military, defence or national security purposes is used outside those temporarily or permanently for other purposes, for example, civilian or humanitarian purposes, law enforcement or public security purposes, such a system would fall within the scope of this Regulation”;

(b) “AI systems placed on the market or put into service for an excluded purpose, namely military, defence or national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation”; and

(c) “An AI system placed on the market for civilian or law enforcement purposes which is used with or without modification for military, defence or national security purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out those activities.”

In summary, see the table below:

Product | Purpose | Effect
--- | --- | ---
Military, defence or national security | Temporarily used for civil, humanitarian, law enforcement or public safety purposes | Subject to the AI Act if developed, placed on the market, put into service or used for such a purpose
Potentially dual-use products (e.g. drones, communication software) | “Dual-use items” means items, including software and technology, which can be used for both civil and military purposes | Subject to the AI Act and possibly to dual-use product rules if placed on the market or put into service
Dual-use items designed as civilian but sold as military | “Dual-use items” means items, including software and technology, which can be used for both civil and military purposes | Not subject to the AI Act if placed on the market
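Purely as an illustration, and emphatically not as legal advice, the Recital 24 logic summarised above can be sketched as two simple checks: one tracking the actual purpose of use, one tracking the purposes for which the system was placed on the market. The function names and purpose labels below are our own hypothetical simplification:

```python
# Illustrative only: a crude encoding of the Recital 24 scope logic.
# Purpose labels and function names are hypothetical, not drawn from the Act.
EXCLUDED = {"military", "defence", "national_security"}

def use_in_scope(used_for: str) -> bool:
    """Points (a) and (c) above: scope follows the actual purpose of use.

    A non-excluded use, even a temporary one, brings the system within
    the AI Act; an excluded use keeps it outside, regardless of how the
    system was originally marketed or which entity operates it.
    """
    return used_for not in EXCLUDED

def provider_must_comply(marketed_for: set[str]) -> bool:
    """Point (b) above: a system placed on the market for at least one
    non-excluded purpose (e.g. civilian alongside military) falls within
    scope, and its provider should ensure compliance with the Regulation.
    """
    return bool(marketed_for - EXCLUDED)

# Example: a drone marketed for both military and civilian surveying.
print(provider_must_comply({"military", "civilian"}))  # True  -> in scope
print(use_in_scope("military"))                        # False -> excluded use
print(use_in_scope("humanitarian"))                    # True  -> in scope
```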

The White Paper of the European Defence Agency

In May 2025, the European Defence Agency published a White Paper titled “Trustworthiness for AI in the Defence Sector” (the “White Paper”), emphasizing, among other things, its intersections with the AI Act. The White Paper introduces distinct taxonomies compared to those established by the AI Act, specifically regarding the roles defined for AI providers. According to the White Paper, AI providers are classified as either: (i) entities that deliver AI services or products directly usable by an AI customer or user, or designed for integration into AI-driven systems alongside non-AI components; or (ii) platform providers and so-called “AI Producers,” defined as entities responsible for designing, developing, testing, and deploying products or services incorporating one or more AI systems. This distinction notably differs from the AI Act, where one of the key requirements is that AI systems be “placed on the market”.

Furthermore, the White Paper outlines several substantial requirements, including Requirement Identification (par. 3.2 of the White Paper) and Mandatory Impact Analysis (par. 8.2 of the White Paper), which significantly align with the principal requirements stipulated for high-risk AI systems under Chapter III, Section 2 of the AI Act. However, specific additional factors must be considered in the impact analysis of such AI systems within the defence sector, including but not limited to System Performance, Human-Centric Values, and Advanced System Design Characteristics. The following key considerations are examples of factors to be assessed thoroughly: HUV-02 (the usage of AI technology keeps the risk of human-AI race conditions acceptable); MOP-05 (reducing the financial cost of conflicts through the usage of AI); and ASDC-01 (smarter and smoother interactions between human and AI produce suitable human-machine synergy, a win-win strategy).
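By way of illustration only, factors such as those quoted above could be tracked in a simple machine-readable checklist as part of an internal impact analysis. The structure and field names below are our own hypothetical sketch, not a format prescribed by the EDA or the White Paper:

```python
from dataclasses import dataclass

@dataclass
class ImpactFactor:
    # Identifiers and statements quoted from the White Paper; the remaining
    # fields are hypothetical bookkeeping for an internal impact analysis.
    factor_id: str
    statement: str
    category: str
    assessed: bool = False
    findings: str = ""

checklist = [
    ImpactFactor("HUV-02", "The usage of AI technology keeps the risk of "
                 "human-AI race conditions acceptable", "Human-Centric Values"),
    ImpactFactor("MOP-05", "Reducing financial cost of conflicts through "
                 "the usage of AI", "System Performance"),
    ImpactFactor("ASDC-01", "Smarter and smoother interactions between human "
                 "and AI produce suitable human-machine synergy",
                 "Advanced System Design Characteristics"),
]

# A mandatory impact analysis would iterate over the checklist and record
# findings for each factor before the system is fielded.
open_items = [f.factor_id for f in checklist if not f.assessed]
print(open_items)  # ['HUV-02', 'MOP-05', 'ASDC-01']
```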

Relevant Aviation Rules for AI systems

The European Union Aviation Safety Agency (“EASA”) has recently developed a Concept Paper, “Guidance for Level 1 & 2 Machine Learning Applications”, which contains specific measures and guidelines on AI within the aviation sector that operators should carefully consider when developing such systems, including Unmanned Aircraft Systems (“UAS”). The relevant applicable parts would be Part-AI (TR), Part-AI (OR) and Part-AI (AR), along with technical standards, some already developed and some still under development, such as “AIR6987/ER-027” on taxonomy for AI and aircraft vehicles, or “AIR6988/ER-022” on statements of concerns regarding autonomous aircraft. To date, EASA Opinion 05/2019 states that there is currently no experience with autonomous UAS operations (without remote pilot intervention); thus, such UAS operations are not allowed under Standard Scenario STS-01. On the other hand, the Special Condition Light-UAS could apply, since a potential use case would fall within the category of aircraft “not intended to transport humans operated with intervention of the remote pilot or autonomous”.

Conclusion

In conclusion, AI systems utilized in the defence sector, even when falling within the exceptions provided by the AI Act, are not entirely exempt from regulatory and legal requirements that align closely with the high standards of transparency, governance, and oversight established by the European legislative framework. These systems must still adhere to rigorous criteria that, although not explicitly detailed in the Act, maintain a comparable level of scrutiny and accountability.

Lexify as Your Consultant

Lexify continuously monitors regulatory developments and assists European and international providers in launching their AI Systems. For further information or support, our legal team is at your disposal.

Connect with us

Thank you for taking the time to read our article. We hope you found it informative and engaging. If you have any questions, feedback, or would like to explore our services further, we’re here to assist you.

Follow Us

Stay updated and connected with us on social media for the latest news, insights, and updates:

LinkedIn: Lexify