Jan Dop

partner

Jan is a specialist in employment law and corporate law

jan.dop@russell.nl
+31 20 301 55 55

Using algorithms in the employment relationship

Publication date: 11 October 2019

The use of algorithms carries the promise of objectivity. People assume that algorithm outcomes are “neutral.” This neutrality is, however, an illusion. Algorithms are not as unbiased as we think, and the risk of discrimination looms. Employers should be aware of the limitations of algorithms and have a plan for dealing with them.

Employers increasingly use algorithms to make decisions, such as which resume to select during an application procedure or which employee should receive a promotion. These algorithms are also used more and more by companies that operate via an online platform, such as Uber, where decisions about who receives which job, at which location and for which payment are all made by an algorithm.


Machine Learning Algorithms

Simply put, an algorithm is a set of instructions that allows a computer to turn input variables into an output. A large variety of algorithms can be distinguished, including machine learning algorithms. These algorithms are able to learn from previous experiences and results: a machine learning algorithm does not simply rely on a predetermined equation as a model, but adaptively improves its operation as it is exposed to more data and to the knowledge it generates itself. Machine learning algorithms are also called smart algorithms. In this article we mostly refer to these smart, machine learning algorithms.
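
To make the difference concrete, here is a minimal sketch in Python, using the scikit-learn library and made-up example data (the features and numbers are purely illustrative). It contrasts a fixed, human-written rule with a model that infers its own rule from past outcomes.

```python
# Minimal sketch with invented data: a fixed rule versus a learned model.
from sklearn.tree import DecisionTreeClassifier

# A predetermined rule, written down by a human:
# invite anyone with at least 5 years of experience.
def fixed_rule(years_experience: int) -> bool:
    return years_experience >= 5

# A machine learning model instead infers its own rule from past outcomes.
# Each row: [years of experience, number of certificates]; label: 1 = hired.
past_candidates = [[2, 0], [3, 1], [6, 0], [8, 2], [1, 0], [7, 1]]
past_outcomes   = [0,      0,      1,      1,      0,      1]

model = DecisionTreeClassifier().fit(past_candidates, past_outcomes)

# The model scores a new candidate based on patterns it found in the data,
# not on a rule someone wrote down explicitly.
print(fixed_rule(5))              # True
print(model.predict([[5, 1]]))    # prediction learned from the data
```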

Using Algorithms for Employment Decisions

Using algorithms, employers can process large amounts of data in order to obtain relevant information, which can be used for (automated) decision-making. For example, algorithms can speed up the application process by weeding out large numbers of resumes or by analyzing video interviews and selecting the most suitable applicants. Employers can also use algorithms to assess the performance of employees or to determine which employee is eligible for a promotion or bonus. Furthermore, companies such as Uber use algorithms to distribute work and to determine pay.

The use of algorithms can streamline these processes and may cut costs, since fewer people are needed for the recruitment and assessment of potential employees. However, the use of these algorithms is not without risk. They might (unintentionally) discriminate against employees, as illustrated by the following examples.

Amazon

Amazon’s recruiting tool was created to automate the search for top talent by reviewing job applicants’ resumes and selecting the most talented applicants. The tool was trained to observe patterns in resumes submitted over the previous 10-year period, most of which came from men. In order to prevent this from affecting the outcome, Amazon made the historical data gender-blind. Yet despite this, the recruiting tool taught itself to prefer male applicants over female ones: it learned to favor language predominantly used by men, such as “executed” or “captured,” and to penalize resumes that included words such as “women’s.” Amazon eventually shut the recruiting tool down.

Uber

Another example is Uber’s algorithm that connects drivers and passengers and determines the pay per fare. Even though the work assignments were made by a gender-blind algorithm and the pay per fare was based on a transparent formula, men turned out to make roughly 7 percent more per hour than women. The algorithm favored men because, on average, they had worked for Uber longer, tended to drive faster and for more hours, drove in higher-paying locations at more lucrative times and chose longer fares.

Algorithmic Discrimination

The use of smart algorithms to assess (potential) employees is supposed to make the decision-making process more objective. However, as the examples above show, algorithms designed to eliminate biases may also introduce or amplify them. Algorithms may thus lead to unjustifiable discriminatory decision-making. How can algorithms lead to employment discrimination?

Human Biases

It should not be forgotten that algorithms are, in the end, human constructs: algorithms are invented, programmed and trained by humans. The choices made by humans while programming and training an algorithm affect its operation and outcomes. Thus, algorithms are not free of human influence.

Furthermore, algorithms are trained on historical data. If this training data is biased against certain individuals or groups, the algorithm will replicate the human bias and learn to discriminate against them. The selection of the training data is also important: data that is outdated, incorrect, incomplete or unrepresentative may lead to machine learning mistakes and misinterpretations. Ultimately, algorithms are only as good as the data they are trained on. This is also referred to as “garbage in, garbage out” or “discrimination in, discrimination out.”

Employers usually do not intend to discriminate against (potential) employees. However, due to the choices made during the development process and the training data used, they may (unintentionally) create a discriminatory algorithm.
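
As a rough illustration of “discrimination in, discrimination out,” the sketch below (Python with scikit-learn; all data, groups and thresholds are invented for the example) trains a model on historical decisions in which one group was held to a stricter standard, and then checks what the model predicts for candidates from each group with identical skill scores.

```python
# Invented data: past hiring favored group A (lower bar) over group B.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def past_decision():
    group = random.choice(["A", "B"])          # stands in for a protected attribute
    score = random.uniform(0, 10)              # an otherwise neutral skill score
    hired = int(score > (4 if group == "A" else 7))   # biased historical decision
    return group, score, hired

history = [past_decision() for _ in range(1000)]
X = [[1 if group == "A" else 0, score] for group, score, _ in history]
y = [hired for _, _, hired in history]

# The model is trained on the biased decisions and replicates the bias.
model = LogisticRegression(max_iter=1000).fit(X, y)

for flag, name in [(1, "group A"), (0, "group B")]:
    predictions = model.predict([[flag, score] for score in range(11)])
    print(name, "selected at", sum(predictions), "of 11 identical score levels")
```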

Protected Attributes

Discrimination may occur when the training data explicitly includes information regarding protected attributes, such as gender, race or ethnic or social origin. Based on the data, the algorithm can learn that a certain gender, race or other attribute is preferable.

In order to prevent this, some employers remove all protected attributes from the training data, in the belief that an algorithm that is ignorant of variables such as gender or race cannot discriminate on these grounds. However, as the examples of Amazon and Uber illustrate, excluding specific attributes such as gender or race as input variables does not prevent the algorithm from producing biased output. In such a case, so-called “proxy information” may still cause an algorithm to become biased. As the example of Amazon’s algorithm shows, the language in which someone expresses themselves may indirectly indicate their gender, and a zip code may indirectly indicate someone’s race or ethnic or social origin. Excluding protected attributes is therefore not, by itself, a solution for preventing algorithmic discrimination.
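
The following sketch (Python with scikit-learn; the zip-code proxy and all figures are invented) illustrates the proxy problem: the protected attribute is removed from the training data, but a correlated feature remains, and the model’s predicted selection rates still differ between the two (hidden) groups.

```python
# Invented data: the "group" column is dropped before training, but zip area
# is strongly correlated with group, so it acts as a proxy.
import random
from sklearn.linear_model import LogisticRegression

random.seed(1)

rows = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    if group == "A":
        zip_area = 1 if random.random() < 0.9 else 2   # group A mostly lives in area 1
    else:
        zip_area = 2 if random.random() < 0.9 else 1   # group B mostly lives in area 2
    score = random.uniform(0, 10)
    hired = int(score > (4 if group == "A" else 7))    # biased historical decision
    rows.append((group, zip_area, score, hired))

# Train WITHOUT the protected attribute: only zip area and score remain.
X = [[zip_area, score] for _, zip_area, score, _ in rows]
y = [hired for _, _, _, hired in rows]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Audit the output per (hidden) group: the disparity survives the removal.
for name in ["A", "B"]:
    members = [[zip_area, score] for group, zip_area, score, _ in rows if group == name]
    rate = sum(model.predict(members)) / len(members)
    print(f"group {name}: predicted selection rate {rate:.2f}")
```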

Black Box

Detecting algorithmic discrimination is not easy, especially since smart algorithms are increasingly complex. Algorithms are often described as a “black box”: the input – for instance, applicants’ resumes – and the output of the algorithm – for instance, which applicant will be invited for a job interview – are clear. How the algorithm came to that conclusion, however, is highly opaque.

Due to the complexity and opacity of the algorithm, it is difficult for employers to assess the algorithm’s decision-making process and its results. As a consequence, automated employment-related decisions based on these algorithms are often subject to very little human oversight. However, under Article 22 of the General Data Protection Regulation (GDPR), employers are, as a rule, not allowed to subject (potential) employees to a decision based solely on automated processing. Human decision-making therefore cannot be fully replaced by algorithms. Furthermore, it must always be explainable how and why a certain decision was made.
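
One possible way to organize such human oversight is sketched below (Python; the structure and field names are hypothetical and only illustrate the idea): the algorithm produces no more than a recommendation together with the main factors behind it, a named human reviewer takes the final decision, and that decision is recorded with its reasons so it can be explained afterwards.

```python
# Hypothetical sketch of a human-in-the-loop decision record; not legal advice.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    candidate_id: str
    suggested_outcome: str       # e.g. "invite" or "reject"
    main_factors: list           # inputs that drove the score, kept for explainability

def record_final_decision(rec: Recommendation, reviewer: str,
                          final_outcome: str, reason: str) -> dict:
    # The final outcome comes from the human reviewer, not from the algorithm,
    # and both are stored so the decision remains explainable afterwards.
    return {
        "candidate": rec.candidate_id,
        "algorithm_suggested": rec.suggested_outcome,
        "final_outcome": final_outcome,
        "decided_by": reviewer,
        "reason": reason,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }

# Example with made-up data: the reviewer overrules the algorithm.
recommendation = Recommendation("candidate-042", "reject", ["gap in employment history"])
print(record_final_decision(recommendation, "HR officer", "invite",
                            "gap explained during the interview"))
```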

Conclusion

The use of algorithms can be very useful for employers. However, although algorithms have the potential to make employment-related decisions more objective, they are also prone to amplifying bias. The risk that these algorithms could (unintentionally) lead to discriminatory results should not be overlooked.

Employers will have to adapt the working relationship with their employees to the use of algorithms. When developing and using machine learning algorithms, employers have to be aware of privacy laws. For this reason, employers should introduce a system of human control and should remain capable of explaining how a decision was made. Furthermore, care should be taken to ensure that the use of algorithms does not come at the expense of equal treatment rights; after all, the use of algorithms in decision-making poses a risk to an employee’s right to equality. In this context, consideration should be given to involving an employee representative body, such as a works council (especially when an algorithm is used in the context of a remuneration or bonus system), and to laying down rules on the use of algorithms in a code of conduct or employee handbook.
