Reinier advises national and international companies
reinier.russell@russell.nl | +31 20 301 55 55

Almost all companies now use some form of AI. This means that they may be subject to the prohibitions and regulations set out in the European AI Act. How can you ensure that you comply with these rules?

AI is becoming increasingly prevalent in society. The healthcare sector, education, the legal sector and the business community all use it. The applications of AI within these sectors vary, but they all have in common that they bring both advantages and disadvantages. AI will contribute to innovation and efficiency within various sectors. At the same time, it poses potential risks to the protection of fundamental human rights, as every application of AI raises ethical, privacy and security issues.
Companies need to be aware of this. For this reason, since 2 February 2025, the EU has required organisations that use AI to ensure sufficient AI literacy: they must provide their staff and others who operate AI systems on their behalf with sufficient knowledge of AI. This obligation is part of the European AI Act, which entered into force on 1 August 2024 and will apply in full from 2 August 2026. Some important parts already apply, while other provisions will only apply at a later date; the relevant dates are indicated below.
The European AI Act is the first comprehensive piece of legislation specifically regulating artificial intelligence. The regulation aims to safeguard two interests: the protection of fundamental rights on the one hand and the stimulation of innovation on the other. It contains rules on the design, implementation and use of AI systems within the EU.
The AI Act affects the use of AI both within the European Union and beyond, as it also applies to providers outside the EU whose AI systems are placed on the market or used within the EU. The regulation applies to providers of AI systems and their representatives, to importers and distributors, and to companies and organisations that use AI (the deployers). Different rules apply to each of them. What does this regulation entail?
The European Commission has opted for a risk-based approach, whereby AI systems are classified by risk level. The risk level determines the requirements the AI system must meet. The classification distinguishes four main groups:

Unacceptable risk: AI systems that pose an unacceptable risk to safety, fundamental rights or EU values have been strictly prohibited since 2 February 2025. Examples include social credit scoring and behaviour-manipulation systems. These systems may not be placed on the market. Biometric identification systems also fall under this category in principle. However, the AI Act makes an exception for cases where these are deemed necessary for the functioning of the democratic rule of law, such as their use to identify perpetrators of serious criminal offences.

High risk: AI systems that, due to their intended purpose and context, may pose significant risks to health, safety or fundamental rights are subject to strict regulation. Examples include systems used in education, for personnel selection or by public services. These systems must meet strict requirements in terms of transparency, accountability and data management. These rules apply from 2 August 2026, or from 2 August 2027 for high-risk systems embedded in products already covered by EU safety legislation.

Limited risk: AI systems that pose limited risks to users and the public are mainly subject to transparency obligations, such as informing the public that they are interacting with AI or viewing AI-generated content. This is mandatory for chatbots and deepfakes, for example.

Minimal risk: AI systems that pose minimal or no risk to safety or fundamental rights do not have to comply with specific regulations under the AI Act. These include spam filters, for example.
Depending on the risk assessment, different rules apply, but the AI Act also contains specific rules for general-purpose AI models (General Purpose AI, GPAI), such as the models underlying the well-known ChatGPT. Providers of such models must supply technical documentation, comply with EU copyright law and publish summaries of the training data used. Models that pose a systemic risk are subject to additional obligations, such as reporting serious incidents and ensuring cybersecurity. These rules have applied since 2 August 2025.
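For readers who want a structured overview, the tiered scheme described above can be sketched as a simple lookup table. This is an illustrative simplification, not legal advice: the category names, examples and consequences come from the summary above, while the dictionary structure and the function are our own shorthand.

```python
# Illustrative sketch of the AI Act's risk tiers (simplified, not legal advice).
# Real classification depends on the system's intended purpose and context.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social credit scoring", "behaviour manipulation"],
        "consequence": "prohibited since 2 February 2025",
    },
    "high": {
        "examples": ["education", "personnel selection", "public services"],
        "consequence": "strict transparency, accountability and data-management requirements",
    },
    "limited": {
        "examples": ["chatbots", "deepfakes"],
        "consequence": "transparency obligations (disclose AI interaction or AI-generated content)",
    },
    "minimal": {
        "examples": ["spam filters"],
        "consequence": "no specific obligations under the AI Act",
    },
}

def obligations(tier: str) -> str:
    """Return the headline consequence for a given risk tier."""
    return RISK_TIERS[tier]["consequence"]

# Prints the transparency obligation that applies to limited-risk systems.
print(obligations("limited"))
```

Note that the GPAI rules sit alongside this scheme: a general-purpose model carries its own documentation and copyright obligations regardless of the tier of the system it is built into.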
In addition to the specific obligations that apply to each category, the European Commission also recommends general safeguards for the use of AI. Companies must be aware of the risk group to which their AI system belongs and the obligations that this entails, so that they can avoid legal and financial risks. We will briefly discuss the most important points.
Penalties may be imposed for non-compliance with the AI Act. The amount of the penalty depends on both the risk category of the AI system and the severity of the violation. Engaging in AI practices that are explicitly prohibited due to unacceptable risks carries a fine of up to EUR 35 million or, for companies, up to 7% of global annual turnover, whichever is higher. Violations of the rules for high-risk AI systems and for general-purpose AI models carry a fine of up to EUR 15 million or, for companies, 3% of global annual turnover. Providing incorrect or misleading information to the authorities carries a fine of up to EUR 7.5 million or, for companies, 1% of global annual turnover.
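The fine ceilings follow a simple pattern: for a company, the applicable maximum is the higher of a fixed amount and a percentage of global annual turnover. A brief sketch of that arithmetic, using the figures listed above (illustrative only, not legal advice):

```python
# Fine ceilings under the AI Act, per violation category, as summarised above:
# (fixed amount in EUR, percentage of global annual turnover).
FINE_CEILINGS = {
    "prohibited_practice": (35_000_000, 7),
    "high_risk_or_gpai": (15_000_000, 3),
    "misleading_information": (7_500_000, 1),
}

def max_fine(violation: str, global_annual_turnover: int) -> int:
    """Maximum fine for a company: the higher of the fixed amount
    and the turnover percentage (integer EUR amounts)."""
    fixed, pct = FINE_CEILINGS[violation]
    return max(fixed, global_annual_turnover * pct // 100)

# A company with EUR 1 billion global turnover facing a prohibited-practice fine:
# 7% of turnover (EUR 70 million) exceeds the EUR 35 million floor.
print(max_fine("prohibited_practice", 1_000_000_000))  # prints 70000000
```

For smaller companies the fixed amount is typically the binding ceiling; the turnover percentage only takes over once 7%, 3% or 1% of global turnover exceeds it.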
To ensure compliance with the AI Act and avoid penalties, companies must ensure:
AI is not only the responsibility of the IT department; clear rules must also be established from an HR perspective. Russell Advocaten drafts AI policies for its clients that can be added to the employee handbook, code of conduct or other instructional tools.
Do you have questions about the new AI rules, or would you like to avoid legal pitfalls? We are happy to assist you with these and any other questions about IT/ICT and the law. Please contact us.