Introduction
Article 4 of Regulation (EU) 2024/1689 (the “AI Act”) imposes a new Artificial Intelligence (“AI”) literacy obligation on all providers and deployers of AI systems. In essence, these organizations “shall take measures to ensure, to their best extent, a sufficient level of AI literacy” among their staff and other persons dealing with AI systems on their behalf. The AI Act defines “AI literacy” (Article 3(56)) as the “skills, knowledge and understanding” that enable providers, deployers and affected persons “to make an informed deployment of AI systems” and to be aware of AI’s opportunities, risks, and potential harms. In practice, this means organizations must equip all relevant personnel with the competence to use or oversee AI responsibly. Article 4 entered into application on 2 February 2025, so providers and deployers should already have AI literacy measures in place. There is no grace period for the obligation itself: if you have not started, you should do so immediately.
Who is involved?
The scope of Article 4 extends beyond employees. It covers “staff and other persons” under the provider’s or deployer’s operational remit. “Persons dealing with the operation and use of AI systems on behalf of providers/deployers” can include contractors, service providers, or even clients acting under the organization’s supervision. In other words, if individuals (whether internal or external) operate or use an AI system on the company’s behalf, the company must ensure those individuals have sufficient AI literacy. This broad scope reinforces related AI Act duties on transparency (Article 13) and human oversight (Article 14): knowledgeable personnel are better able to implement transparency measures and exercise proper oversight over high-risk AI systems. The Article 4 literacy requirement thus underpins the AI Act’s wider goal of safe and informed use of AI.
Key obligations: take measures
Article 4 requires organizations to take appropriate measures to foster AI literacy. Notably, as clarified by the European Commission’s Q&As, there is no obligation to formally test or “measure” employees’ AI knowledge, nor does the AI Act mandate any specific certification or exam. Instead, companies are expected to make good-faith efforts (to their “best extent”) to educate and inform. This will typically involve providing training or guidance material, updating internal policies, and otherwise ensuring that anyone who designs, deploys, or interacts with AI systems on the organization’s behalf understands how those systems function, their intended use, and their potential risks. The obligation extends to contractual or third-party personnel as well – for example, if critical AI-driven functions are outsourced, the service provider’s staff should be sufficiently AI-literate for the task at hand. Compliance officers should therefore consider extending AI training requirements to contractors via contractual clauses or joint training sessions, where relevant.
Building an AI Literacy Program
Because AI systems, organizational contexts, and employee backgrounds differ widely, Article 4 adopts a flexible, risk-based approach to AI literacy. The European Commission (via its AI Office Q&A) has outlined minimum content and factors that every organization should consider when developing an AI literacy initiative:
- ensure a baseline understanding of artificial intelligence across the organization. Staff should learn what AI is, how it works, and which AI systems are in use internally. Training should cover AI fundamentals and the specific AI tools or models the organization provides or uses. This includes discussing the opportunities AI offers as well as the dangers and limitations (e.g. bias, errors). Even employees using common generative AI tools (for tasks like drafting text or translation) need to be made aware of risks – for example, users of a tool like ChatGPT should be informed about issues such as AI “hallucinations” (confident but false outputs);
- tailor the literacy program to whether the organization is an AI provider, deployer, or both. A company developing AI products will need deep understanding among its engineers and product teams about model design, data training, and compliance by design. A company primarily using third-party AI systems will focus on knowing how to select, integrate, and monitor those tools responsibly. Ask: “Are we developing AI systems, or just using AI from others?” The answer will guide the focus of training content;
- take into account the risk level of the AI systems provided or deployed. Higher-risk AI applications (e.g. those possibly impacting health, safety or fundamental rights) demand more intensive literacy efforts. Identify what employees need to know about the particular AI system’s risks and appropriate risk mitigation. If your organization deploys an AI system classified as “high-risk” under the AI Act, additional specialized training and strict procedures are expected to ensure staff can manage those risks. Conversely, for lower-risk uses (say, an AI tool for basic content generation), the training can be proportionate and focused on basic safe-use guidelines. The principle is to match the scope of training to the severity of potential impact;
- identify the different groups of people who need AI literacy and tailor the program to their existing knowledge and the context of AI use. Technical staff (such as data scientists or IT developers) may already have strong AI expertise, while operational or business teams might be starting from basics. Article 4 explicitly encourages taking into account each person’s technical knowledge, experience, and training when designing literacy measures. This means having different levels or modules of training as appropriate; and
- ensure that staff understand not only how to operate AI, but also the ethical issues (bias, fairness, transparency, etc.) and the legal obligations associated with AI use. This is also aligned with the European Securities and Markets Authority (“ESMA”) public statement on the use of artificial intelligence in the provision of retail investment services.
Implementing AI Literacy: Practical Guidance
Below are key steps and best practices for implementation, drawn from Article 4 guidance and industry commentary:
- begin with a thorough risk assessment of the AI systems your organization provides or uses. Identify all AI applications in operation and classify their risk level (e.g. high-risk under the AI Act or lower risk). Consider the use cases and the potential impact on individuals if something goes wrong. This assessment should then inform the scope and priority of your AI literacy efforts;
- map out who in your organization (and supply chain) interacts with AI systems. Categorize personnel by role and skill level – for example, AI developers, IT administrators, business users, customer service staff using AI tools, management overseeing AI projects, contractors operating AI on your behalf, etc. For each group, determine the key knowledge gaps and relevant topics. Then design training modules appropriate to each group’s “proximity” to AI and baseline knowledge;
- at minimum, cover the foundational elements of AI literacy identified by the European Commission: a general understanding of AI (concepts and the AI systems used in the organization), the organization’s role in the AI ecosystem (provider vs deployer), and the specific risks and responsibilities associated with the AI systems in use;
- the European Commission’s AI Office is curating a Living Repository of AI Literacy Practices, which collects examples of literacy initiatives from various organizations. This online repository can inspire your program design with real-world practices; and
- beyond training sessions, embed AI literacy into your organization’s processes. This could include updating onboarding programs to cover AI tools, establishing internal AI guidelines or AI ethics codes that employees must follow, and setting up a helpdesk or community of practice for AI questions.
Demonstrating Compliance
While Article 4 does not require any formal certification or external audit of AI literacy, organizations should maintain clear internal documentation to prove they have taken the necessary measures. The European Commission explicitly states that “There is no need for a certificate. Organizations can keep an internal record of trainings and/or other guiding initiatives.” In practice, compliance officers should ensure that records are kept of training materials or curricula used, dates of training sessions, attendance logs of participants (staff, contractors, etc.), and any other AI literacy activities (e.g. distribution of guidelines, simulation exercises, etc.). These records will be invaluable if you ever need to demonstrate compliance to regulators or defend your organization in the event of an incident.
Conclusion
At Lexify, we are one of the first law firms specializing in compliance reviews associated with AI technology. We are uniquely positioned to guide businesses and individuals through these legal frameworks, and to provide training programs and tailored support in building the AI literacy strategy that best suits your organization.
Connect with us
Thank you for taking the time to read our article. We hope you found it informative and engaging. If you have any questions, feedback, or would like to explore our services further, we’re here to assist you.


Follow Us
Stay updated and connected with us on social media for the latest news, insights, and updates:
LinkedIn: Lexify