The EU AI Act is a European regulation laying down harmonised rules on AI. In particular, the AI Act sets out rules on the following matters:
The EU AI Act’s main goal is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy AI, while ensuring the protection of health, safety and fundamental rights. The EU AI Act is key to ensuring AI is ethical and non-discriminatory, balancing the growth of AI in the EU with the mitigation of its risks.
The EU AI Act sets out risk-based requirements for AI systems and, therefore, applicable compliance requirements depend on the risk level of the AI system.
Specifically, high-risk AI systems must comply with the following requirements:
Additionally, all AI systems intended to interact directly with natural persons shall be designed in such a way that the natural persons concerned are informed that they are interacting with an AI system.
The EU AI Act is a general framework on AI matters, which sets out mandatory compliance requirements for all companies developing, deploying or using AI systems in the EU, regardless of the sector they operate in. The UK and the US do not have a similar general AI framework and take a more fragmented approach. In the UK, AI matters are mostly regulated through sectoral legislation and regulations. In the US, AI regulation is largely a state-level issue.
AI solutions may be used in a wide range of services related to these sectors, such as AML and fraud detection, credit risk assessment, trading, customer service, regulatory compliance and risk management. AI regulations set out the criteria under which AI systems may be used in this context, ensuring that regardless of the use of AI, these services remain transparent, unbiased, and secure.
Article 4 of the EU AI Act sets out AI literacy as a requirement for providers and deployers of AI systems, setting out that these persons shall take measures to ensure a sufficient level of AI literacy of their staff and other persons involved with the AI systems that they provide or deploy. Adequate AI literacy depends on multiple factors: technical knowledge, experience, education, training and purpose of the AI system.
Exact requirements for AI literacy are context-specific, and covered entities need to ensure implementation of such requirements “to their best extent”. This means that such measures will depend on several factors, such as:
AI literacy obligations apply to providers and deployers of AI systems.
An AI provider is a person or entity that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
An AI deployer is a person or entity that uses an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Yes, the AI Act applies to companies outside the EU in the following cases:
As a general rule, regardless of whether companies providing or deploying AI systems are based in the EU or not, the AI Act shall apply when persons or entities located in the EU are affected by the AI systems provided or deployed.
Organisations shall ensure that all their staff, and other persons dealing with the operation and use of AI systems on their behalf, have a sufficient level of AI literacy.
Exact requirements for AI literacy programs are context-specific. This means that the specific content of such programs will depend on several factors, such as: (a) the relevant AI system, since AI literacy standards should be higher for high-risk AI systems (i.e., systems used in sectors such as biometrics, education, employment, law enforcement and the administration of justice); (b) the size of the organisation, as AI literacy standards should increase as the size and resources of the relevant entity increase; and (c) the workforce, as the level of AI literacy of each employee or person related to the AI system must be aligned with the context in which that person uses the AI system and tailor-made to their role. Therefore, there is no one-size-fits-all approach for AI literacy programs. Organisations should consider starting by assessing the current AI knowledge of their staff, particularly those working directly with AI systems. After identifying any gaps, role-specific training programs need to be implemented to ensure that employees understand the technical, ethical, and risk-related aspects of AI. This training should be regularly updated to reflect new developments and regulatory changes.
The EU AI Act entered into force on 2 August 2024.
Organisations have been required to comply with AI literacy requirements since 2 February 2025 – from this date, all employees and other persons operating and using AI systems on their behalf should possess the necessary knowledge to use AI systems responsibly.
The penalties applicable to non-compliance with AI literacy requirements will vary from Member State to Member State, since the AI Act does not set out a specific penalty for non-compliance with these requirements. In fact, the AI Act generally provides that Member States shall lay down the rules on penalties and other enforcement measures applicable to non-compliance with the AI Act, which shall be effective, proportionate and dissuasive. In this sense, penalties for non-compliance with AI literacy requirements are not yet set in stone, but it is reasonable to anticipate that these may range from warnings to administrative fines, along with typical ancillary penalties such as the suspension of activity for a certain period of time or the revocation of a business licence.
Additionally, providing incorrect, incomplete or misleading information to competent authorities in reply to a request shall be subject to administrative fines of up to €7,500,000 or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher (or whichever is lower, in the case of SMEs).
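The “whichever is higher / lower” cap logic above can be sketched as follows (the function name and inputs are illustrative, not from the Act itself):

```python
def fine_cap_eur(turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative cap for the 'EUR 7.5M or 1% of total worldwide
    annual turnover' fine rule described above.

    Non-SMEs: whichever amount is HIGHER applies as the cap.
    SMEs: whichever amount is LOWER applies as the cap.
    """
    fixed_cap = 7_500_000.0
    turnover_cap = 0.01 * turnover_eur  # 1% of worldwide annual turnover
    if is_sme:
        return min(fixed_cap, turnover_cap)
    return max(fixed_cap, turnover_cap)

# Large company with EUR 2bn turnover: 1% = EUR 20M, which exceeds EUR 7.5M
print(fine_cap_eur(2_000_000_000))             # 20000000.0
# SME with EUR 50M turnover: 1% = EUR 0.5M, the lower amount applies
print(fine_cap_eur(50_000_000, is_sme=True))   # 500000.0
```

Note that these figures are maximum caps: the actual fine imposed in a given case may be lower, at the discretion of the competent authority.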
AI literacy is one of the key steps to ensure adequate use of AI-based services and solutions. It will help financial institutions better identify the areas of their activities in which they may incorporate such services, ultimately increasing the effectiveness and quality of the services provided and products offered, without jeopardising the quality or security of the organisation.
Ensuring employees have adequate AI literacy allows companies to innovate faster, whether by incorporating AI-based solutions in their internal procedures or by launching AI-driven products and services, gaining a competitive advantage over traditional competitors.
There are multiple AI-driven systems that may be implemented by companies in their risk management systems as well as their compliance functions. AI literacy allows companies to implement AI-driven solutions adequately, reducing the risk of AI-driven errors. AI-literate risk managers and compliance officers will help companies:
Investment in staff AI literacy is a mandatory requirement applicable to all companies using AI-driven solutions. Therefore, non-compliance with AI literacy requirements may entail fines and other ancillary regulatory penalties. Additionally, lack of investment in AI literacy increases the probability of AI-driven errors arising from the inadequate use of AI solutions or misinterpretation of AI-generated content. The consequences of these types of errors can be vast, ranging from financial losses to security issues and reputational damage.
Contact our team and get more information on how we can upskill your organisation
The largest knowledge platform in digital finance and Fintech.
© CFTE – Centre for Finance, Technology and Entrepreneurship 2025