AI in Cyber Security · Jan Kahmen · 2 min read

How Can Companies Demonstrate AI Competence Under the EU AI Act?

AI systems with unacceptable risks are now banned in the EU, and companies must ensure that they have adequate AI expertise.

The first phase of the AI Act has been in force since February 2, 2025, and entails two main obligations: first, AI systems posing unacceptable risks are now prohibited in the EU; second, companies must ensure that they have appropriate AI expertise. This obligation to develop AI competence is set out in Article 4 of the AI Act. The majority of companies are required to build and demonstrate AI competence because their employees use AI systems in a professional context and are therefore considered deployers within the meaning of the Act.

A variety of expensive training programs are currently on the market, covering topics such as AI competence and roles such as AI officer or AI coordinator. However, the exact scope of the required AI competence remains unclear, as Article 4 is worded quite generally. Nevertheless, it is crucial that companies take this topic seriously and start building AI competence promptly. This is not only a legal requirement but also offers significant opportunities for more efficient work processes.

ePrivacy will soon provide its customers with a dedicated training platform that can be used to train all employees in the company. The platform also offers an AI certificate of competence. As always, our approach remains pragmatic and solution-oriented. The content is context-specific, adaptable to individual needs, and will be expanded successively. An excerpt of topics includes:

  • Fundamentals of AI
  • AI models and algorithms
  • Model training and AI ethics
  • Objectives and structure of the AI Act
  • Risk classes of the AI Act
  • Interaction of GDPR & AI Act
  • Processing phases in AI
  • AI guidelines and much more