Reinier Russell

managing partner

Reinier advises national and international companies.

reinier.russell@russell.nl
+31 20 301 55 55

Vitória Alves

Vitória assists as a paralegal in drafting opinions and procedural documents.

+31 20 301 55 55

Prevent the AI Act from taking you by surprise: how to limit the risks

Publication date: 19 November 2025

Almost all companies now use some form of AI. This means that they may be subject to the prohibitions and regulations set out in the European AI Act. How can you ensure that you comply with these rules?


AI is becoming increasingly prevalent in society. The healthcare sector, education, the legal sector and the business community all use it. The applications of AI vary across these sectors, but they all bring both advantages and disadvantages. AI contributes to innovation and efficiency, yet at the same time it poses potential risks to the protection of fundamental rights, as every application of AI raises ethical, privacy and security issues.

Companies need to be aware of this. For this reason, since 2 February 2025, the EU has required organisations that use AI to ensure sufficient AI literacy: they must provide their staff, and others who operate AI systems on their behalf, with adequate knowledge of AI. This obligation is part of the European AI Act, which was adopted in 2024 and will become fully applicable on 2 August 2026. Some important parts already apply, while other provisions will only take effect at a later date; the relevant dates are indicated below.

The AI Act

The European AI Act is the first comprehensive piece of legislation specifically regulating artificial intelligence. The regulation aims to safeguard two interests: on the one hand, the protection of fundamental rights and, on the other, the stimulation of innovation. The Act contains rules on the design, deployment and use of AI systems within the EU.

The AI Act affects the use of AI within the European Union, including by companies established outside it. The regulation applies to providers of AI systems and their representatives, to importers and distributors, and to companies and organisations that use AI (the deployers). Different rules apply to each of these roles. What does the regulation entail?

Risk categories

The European Commission has opted for a risk-based approach, dividing AI systems into risk levels that determine the requirements each system must meet. The Act distinguishes four main categories:

1. Unacceptable risk

AI systems that pose an unacceptable risk to safety, fundamental rights or EU values have been strictly prohibited since 2 February 2025. Examples include social scoring and systems that manipulate behaviour. These systems may not be placed on the market. Biometric identification systems also fall under this category in principle. However, the AI Act makes an exception for cases where their use is deemed necessary for the functioning of the democratic rule of law, for example to identify perpetrators of criminal offences.

2. High risk

AI systems that, due to their intended purpose and context, may pose significant risks to health, safety or fundamental rights are subject to strict regulation. Examples include systems used in education, for personnel selection or by public services. These systems must meet strict requirements in terms of transparency, accountability and data management. The regulation for this will come into force on 2 August 2027.

3. Limited risk

AI systems that pose limited risks to users and the public are mainly subject to transparency obligations, such as informing the public that they are interacting with AI-generated content. This is mandatory for chatbots and deepfakes, for example.

4. Minimal risk

AI systems that pose minimal or no risk to safety or fundamental rights do not have to comply with specific regulations under the AI Act. These include spam filters, for example.

GPAI models

Depending on the risk assessment, different rules apply, but the AI Act also contains specific regulations for general-purpose AI models, known as General Purpose AI (GPAI). These include the models underlying the well-known ChatGPT. Providers of such AI models must supply technical documentation, comply with copyright law and publish summaries of the training data. Models that pose a systemic risk have additional obligations, such as reporting incidents and ensuring cybersecurity. These rules have been in force since 2 August 2025.

Legal pitfalls

In addition to the specific obligations that apply to each category, the European Commission also recommends general safeguards for the use of AI. Companies must be aware of the risk group to which their AI system belongs and the obligations that this entails, so that they can avoid legal and financial risks. We will briefly discuss the most important points.

  • Compliance and liability: Companies must ensure compliance with the AI Act. This means, among other things, keeping documentation on how their AI works, performing risk analyses and identifying possible biases.
  • Transparency and control: The AI Act emphasises transparency and traceability of AI decisions, especially in high-risk applications. This requires companies to be able to clearly explain how their AI systems function and make decisions.
  • Human intervention: Article 22 of the GDPR gives natural persons the right not to be subject to decisions based solely on automated processing. AI-driven decisions that significantly affect individuals therefore generally require meaningful human intervention.
  • AI literacy: Organisations using AI must provide their staff and others who manage AI systems on their behalf with sufficient AI knowledge.

Penalties for non-compliance with the AI Act

Penalties may be imposed for non-compliance with the AI Act. The amount of the penalty depends on both the risk category of the AI system and the severity of the violation. AI practices that are explicitly prohibited due to unacceptable risks are subject to a fine of up to 35 million EUR or, for companies, up to 7% of global annual turnover, whichever is higher. Violations of the rules for high-risk AI systems and for general-purpose AI models are punishable by a fine of up to 15 million EUR or, for companies, 3% of global annual turnover. Providing incorrect or misleading information to the authorities is punishable by a fine of up to 7.5 million EUR or, for companies, 1% of global annual turnover.
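The tiered ceilings above can be sketched as a small calculation. The fixed amounts and turnover percentages are the figures cited in this article; the tier names and the function itself are illustrative only, and this is of course not legal advice:

```python
# Illustrative sketch of the AI Act's maximum fine ceilings per violation tier.
# For companies, the cap is the higher of the fixed amount and the
# percentage of global annual turnover.

# tier -> (fixed cap in EUR, share of global annual turnover)
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # unacceptable-risk violations
    "high_risk_or_gpai": (15_000_000, 0.03),      # high-risk / GPAI obligations
    "misleading_information": (7_500_000, 0.01),  # incorrect info to authorities
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a company in the given tier."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# Example: for a company with EUR 1 billion in global turnover,
# 7% of turnover (70 million) exceeds the fixed 35 million cap.
print(max_fine("prohibited_practice", 1_000_000_000))
```

For smaller companies the fixed amount dominates: at 100 million EUR turnover, 3% is only 3 million, so the 15 million cap applies to the high-risk tier.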

Avoiding legal pitfalls

To ensure compliance with the AI Act and avoid penalties, companies must ensure:

  • Risk management: Companies must evaluate the risks of AI systems and keep track of which compliance requirements apply.
  • Transparency and accountability: Companies must maintain detailed documentation on the operation and use of AI systems within the company.
  • Training and awareness: Companies must train their staff in the use of AI applications, particularly with regard to ethics, regulations and the protection of personal data. This is essential in order to meet the requirement of AI literacy.
  • Compliance with the GDPR: Companies must comply with the requirements set out in the GDPR at all times.

Practical tip

AI is not only the responsibility of the IT department; clear rules must also be established from an HR perspective. Russell Advocaten drafts AI policies for its clients that can be added to the employee handbook, code of conduct or other instructional tools.

IT/ICT lawyer

Do you have questions about the new AI regulations, or would you like to avoid legal pitfalls? We are happy to assist you, as well as with other questions about IT/ICT and the law. Please contact us:

