Reinier Russell

managing partner

Reinier advises national and international companies.

reinier.russell@russell.nl
+31 20 301 55 55

Vitória Alves

Vitória assists as a paralegal in drafting opinions and procedural documents.

+31 20 301 55 55

Prevent the AI Act from taking you by surprise: how to limit the risks

Publication date: 19 November 2025

Almost all companies now use some form of AI. This means that they may be subject to the prohibitions and regulations set out in the European AI Act. How can you ensure that you comply with these rules?


AI is becoming increasingly prevalent in society. The healthcare sector, education, the legal sector and the business community all use it. The applications of AI vary from sector to sector, but they have in common that they bring both advantages and disadvantages. AI contributes to innovation and efficiency, but at the same time it poses potential risks to the protection of fundamental rights, as every application of AI raises ethical, privacy and security issues.

Companies need to be aware of this. For this reason, the EU has required organisations that use AI to ensure sufficient AI literacy since 2 February 2025: they must provide their staff and others who operate AI systems on their behalf with sufficient knowledge of AI. This obligation is part of the European AI Act, which was adopted in 2024 and entered into force on 1 August 2024; most of its provisions will apply from 2 August 2026. Some important parts already apply, while other rules will only apply at a later date. Where relevant, the applicable dates are indicated below.

The AI Act

The European AI Act is the first comprehensive piece of legislation specifically regulating artificial intelligence. The regulation aims to safeguard two interests: the protection of fundamental rights on the one hand and the stimulation of innovation on the other. It contains rules on the design, placing on the market and use of AI systems within the EU.

The AI Act affects the use of AI both within the European Union and worldwide: it applies to providers of AI systems and their representatives, importers and distributors, and companies and organisations that use AI (the deployers), even if they are established outside the EU, as long as the AI system is placed on the EU market or its output is used within the EU. Different rules apply to each of these roles. What does this regulation entail?

Risk categories

The European Commission has opted for a risk-based approach, whereby AI systems are divided according to risk levels. These risk levels determine the requirements that the AI system must meet. The system distinguishes between four main groups:

1. Unacceptable risk

AI systems that pose an unacceptable risk to safety, fundamental rights or EU values have been strictly prohibited since 2 February 2025. Examples include social credit scoring and systems that manipulate behaviour. These systems may not be placed on the market. Remote biometric identification systems also fall under this category in principle. However, the AI Act makes a narrow exception for law enforcement purposes, for example the use of such systems to identify suspects of serious criminal offences.

2. High risk

AI systems that, due to their intended purpose and context, may pose significant risks to health, safety or fundamental rights are subject to strict regulation. Examples include systems used in education, for personnel selection or by public services. These systems must meet strict requirements in terms of transparency, accountability and data management. The regulation for this will come into force on 2 August 2027.

3. Limited risk

AI systems that pose limited risks to users and the public are mainly subject to transparency obligations, such as informing users that they are interacting with an AI system or that content is AI-generated. This is mandatory for chatbots and deepfakes, for example.

4. Minimal risk

AI systems that pose minimal or no risk to safety or fundamental rights do not have to comply with specific regulations under the AI Act. These include spam filters, for example.
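As a purely illustrative aid, the four tiers above can be captured in a simple lookup. The mapping below contains only the example applications mentioned in this article; in practice, classifying a system requires a legal analysis of its intended purpose and context, not a table lookup.

```python
# Illustrative only: the four risk tiers of the AI Act with the
# example applications this article names for each tier.
RISK_TIERS = {
    "unacceptable": ["social credit scoring", "behaviour manipulation"],
    "high":         ["education", "personnel selection", "public services"],
    "limited":      ["chatbots", "deepfakes"],
    "minimal":      ["spam filters"],
}

def risk_tier(application: str) -> str:
    """Look up which tier an example application falls under."""
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return tier
    raise ValueError(f"Not covered by this illustrative mapping: {application}")

print(risk_tier("chatbots"))  # limited
```

The tier then determines the obligations: a prohibition, strict requirements, transparency duties or no specific requirements at all.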

GPAI models

Depending on the risk assessment, different rules apply, but the AI Act also contains specific rules for general-purpose AI models (General Purpose AI, GPAI), such as the well-known ChatGPT. Providers of such models must draw up technical documentation, comply with EU copyright law and publish summaries of the data used for training. Models that pose a systemic risk are subject to additional obligations, such as reporting incidents and ensuring cybersecurity. These rules have applied since 2 August 2025.

Legal pitfalls

In addition to the specific obligations that apply to each category, the European Commission also recommends general safeguards for the use of AI. Companies must be aware of the risk group to which their AI system belongs and the obligations that this entails, so that they can avoid legal and financial risks. We will briefly discuss the most important points.

  • Compliance and liability: Companies must ensure compliance with the AI Act. This means, among other things, keeping documentation on how their AI works, performing risk analyses and identifying possible biases.
  • Transparency and control: The AI Act emphasises transparency and traceability of AI decisions, especially in high-risk applications. This requires companies to be able to clearly explain how their AI systems function and make decisions.
  • Human intervention: Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing, including AI, that produce legal or similarly significant effects for them; in such cases, meaningful human intervention must be possible.
  • AI literacy: Organisations using AI must provide their staff and others who manage AI systems on their behalf with sufficient AI knowledge.

Penalties for non-compliance with the AI Act

Penalties may be imposed for non-compliance with the AI Act. The amount of the penalty depends on both the risk category of the AI system and the severity of the violation. Engaging in AI practices that are explicitly prohibited due to unacceptable risks is subject to a fine of up to EUR 35 million or, for companies, up to 7% of global annual turnover, whichever is higher. Violations of the rules for high-risk AI systems and for general-purpose AI models are punishable by a fine of up to EUR 15 million or, for companies, 3% of global annual turnover, whichever is higher. Providing incorrect or misleading information to the authorities is punishable by a fine of up to EUR 7.5 million or, for companies, 1% of global annual turnover, whichever is higher.
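To illustrate how these ceilings combine, the sketch below computes the maximum possible fine per tier using the figures above (fixed amount or percentage of global annual turnover, whichever is higher). The tier names and the function are our own illustration, not part of the regulation, and this is of course no substitute for legal advice.

```python
# Illustrative only: maximum fine ceilings under the AI Act as
# described above (fixed amount or % of turnover, whichever is higher).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # EUR 35 million or 7%
    "high_risk_or_gpai":   (15_000_000, 0.03),  # EUR 15 million or 3%
    "misleading_info":     (7_500_000, 0.01),   # EUR 7.5 million or 1%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine ceiling for a company in a given tier."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% (EUR 140 million) exceeds
# the fixed EUR 35 million ceiling, so the percentage applies.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```

For smaller companies the fixed amount is usually the binding ceiling; for large companies the turnover percentage dominates.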

Avoiding legal pitfalls

To ensure compliance with the AI Act and avoid penalties, companies must ensure:

  • Risk management: Companies must evaluate the risks of AI systems and keep track of which compliance requirements apply.
  • Transparency and accountability: Companies must maintain detailed documentation on the operation and use of AI systems within the company.
  • Training and awareness: Companies must train their staff in the use of AI applications, particularly with regard to ethics, regulations and the protection of personal data. This is essential in order to meet the requirement of AI literacy.
  • Compliance with the GDPR: Companies must comply with the requirements set out in the GDPR at all times.

Practical tip

AI is not only the responsibility of the IT department; clear rules must also be established from an HR perspective. Russell Advocaten drafts AI policies for its clients that can be added to the employee handbook, code of conduct or other instructional tools.

IT/ICT lawyer

Do you have questions about the new AI rules, or would you like to avoid legal pitfalls? We will be happy to assist you, also with other questions about IT/ICT and the law. Please contact us:

