Fluent in AI: How to Build an AI-literate Workforce

Ius Laboris

[author: Jan Heuer]*

The EU’s Artificial Intelligence Act (‘AI Act’) regulates specific risks arising from the development and use of artificial intelligence (‘AI’). So-called ‘AI literacy’ plays a central role in this. We examine exactly what this means, who is affected by the obligation to ensure AI literacy, and how employers can comply.
___

Recap: Application of the AI Act

The AI Act entered into force on 1 August 2024, but its provisions take effect gradually. The provisions on prohibited AI practices have already been applicable since 2 February 2025. Further provisions, primarily affecting public bodies, started to apply on 2 August 2025.

Looking ahead, 2 August 2026 will be particularly important for organisations. From this date, and save for one final provision, the remainder of the AI Act’s provisions will become applicable. These include, among other things, the provisions on high-risk AI systems in employment and human resources management. From this point on, special requirements and obligations must be observed when AI is used, for example, in automated application processes or personnel decisions. Finally, one year later, from 2 August 2027, the provisions will apply for high-risk AI systems that are used as safety components in certain regulated products, or that are themselves considered such products and are therefore subject to third-party conformity assessment.

___

Article 4 and AI literacy

So, what of the obligation regarding AI literacy? It is set out in Article 4 of the AI Act, is risk-independent, and is therefore already binding for all AI systems. The provision requires providers and deployers to ensure that their staff, and other persons dealing with the operation and use of AI systems on their behalf, have a sufficient level of AI literacy.

The AI Act defines “AI literacy” as the skills, knowledge, and understanding needed to make an informed deployment of AI systems as well as to gain awareness about the opportunities and risks of AI and the possible harm it can cause. This encompasses technical know-how as well as legal and ethical aspects, such as data protection and discrimination risks. Persons involved with AI must have a sufficient understanding of the functionality and limitations of the applications used, for example, regarding distortions caused by training data (biases) or erroneous output (hallucinations).

Whether a sufficient level of AI literacy has been achieved depends on the specific case. Relevant factors include technical and legal knowledge, training, experience, and the specific application context, as well as the individuals or groups to whom the AI application relates.

Since employers are typically the deployers of AI systems, they are responsible for ensuring that their employees have the necessary competence in using AI. However, it is not sufficient simply to refer to the operating instructions for an AI system or to fall back on general data protection training. Standardised onboarding or vague information on the intranet will also not meet the requirements of the AI Act. Instead, employers are likely to be required to take concrete and targeted measures. This includes, in particular, specific, targeted training for the relevant staff (e.g. HR departments, managers, IT managers, or other employees entrusted with the use or supervision of AI systems).

___

Risks of AI illiteracy

Although the AI Act does not explicitly provide for fines for violations of Article 4, the provision carries more than just aspirational weight – it imposes a substantive obligation on providers and deployers of AI systems. For example, Article 99(1) of the AI Act requires Member States to provide for effective, proportionate and dissuasive penalties for infringements of the AI Act. If appropriate national rules are enacted, a lack of AI literacy may therefore lead to legal consequences. Furthermore, Article 85(1) of the AI Act provides for the possibility of filing a complaint with the market surveillance authority if a violation of a provision of the AI Act is suspected.

Irrespective of official procedures, insufficient AI literacy can expose organisations to risks of civil liability. A lack of the necessary expertise in handling AI systems may constitute a breach of a duty of care or of a separate protective law in the event of a dispute. Furthermore, uncontrolled data leaks due to improper use of AI systems also pose reputational risks for organisations.

___

How can employers comply?

To comply with the obligation to ensure AI literacy, the following measures are recommended:

  • Tailored, practical training: A key component is practical training that is tailored to the respective target group and conveys the essential obligations under the AI Act and GDPR. For example, HR teams require different training content than managers or IT professionals. While HR training may focus on the use of AI in the recruitment process, management training should address questions of responsibility when using AI within a team.
  • Organisation-specific guidelines: Developing organisation-specific guidelines for the use of AI is highly recommended. Clear, written rules help ensure transparency and provide certainty in day-to-day operations. Such policies should set out who is permitted to use which AI systems for what purposes, regulate the permissible scope of use – particularly in relation to confidential company data – and specify when human oversight is mandatory. Internal reporting or complaint mechanisms can also be established to give employees the opportunity to address AI-related concerns at an early stage.
  • Interdisciplinary AI committee: To strategically support the use of AI within the organisation, it can be helpful to establish an interdisciplinary AI committee. Such a committee – composed of representatives from HR, IT, compliance, and data protection – can, among other things, oversee the rollout of new systems and initiate and coordinate appropriate training measures. Organisations that do not want to form an AI committee directly could alternatively appoint an AI officer with the relevant responsibilities.

___

Takeaway for employers

AI literacy is now a binding obligation under the EU’s AI Act and employers must therefore ensure that staff are equipped to use AI systems responsibly. Targeted training, clear internal guidelines, and strategic oversight are essential to mitigate legal, operational, and reputational risks. Organisations that act early will be better positioned to meet compliance deadlines and avoid future liability.

*Kliemt.HR

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Ius Laboris
