Jan Dop

partner

Jan is a specialist in employment law and corporate law

jan.dop@russell.nl
+31 20 301 55 55

Using algorithms in the employment relationship

Publication date: 11 October 2019

The use of algorithms carries the promise of objectivity. People assume that algorithm outcomes are “neutral.” This neutrality is, however, an illusion. Algorithms are not as unbiased as we think, and the risk of discrimination looms. Employers should be aware of the limitations of algorithms and have a plan for dealing with them.

Employers increasingly use algorithms to make decisions such as which resumes to select during an application procedure or which employee should receive a promotion. These algorithms are also used more and more by companies that operate via an online platform, such as Uber. Decisions regarding who will receive which job, at which location, for which payment are all made by an algorithm.

Machine Learning Algorithms

Simply put, an algorithm is a set of instructions that allows a computer to turn input variables into an output variable. A large variety of algorithms can be distinguished, including machine learning algorithms. These algorithms are able to learn from previous experiences and results. A machine learning algorithm does not simply rely on a predetermined equation as a model, but adaptively improves its operations as it is exposed to more data, based on the knowledge it generates itself. Machine learning algorithms are also called smart algorithms. In this article we mostly refer to these smart, machine learning algorithms.
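To make this distinction concrete, here is a minimal Python sketch with invented hiring data. The functions, the five-year threshold and the experience figures are all hypothetical; the point is only that a fixed rule never changes, while a "learned" rule is derived from historical examples and therefore shifts with the data it is fed:

```python
# A fixed-rule algorithm applies the same predetermined test to every input.
def fixed_rule_screen(years_experience):
    return years_experience >= 5  # threshold chosen by a human, never changes

# A (greatly simplified) "learning" algorithm instead derives its rule from
# historical examples: here it sets the threshold to the average experience
# of previously hired candidates, so different data produces a different rule.
def learn_threshold(past_hires_experience):
    return sum(past_hires_experience) / len(past_hires_experience)

threshold = learn_threshold([3, 6, 8, 7])  # learned threshold: 6.0

def learned_screen(years_experience):
    return years_experience >= threshold

print(fixed_rule_screen(5))  # True: passes the fixed, human-chosen rule
print(learned_screen(5))     # False: fails the rule inferred from past data
```

Real machine learning models are of course far more complex, but the dependence on historical data is the same.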

Using Algorithms for Employment Decisions

Using algorithms, employers can process large amounts of data in order to obtain relevant information, which can be used for (automatic) decision-making. For example, algorithms can speed up the application process by weeding out large numbers of resumes or analyzing video interviews and selecting the most suitable applicants. Employers also can use algorithms to assess the performance of employees or to determine which employee is eligible for a promotion or bonus. Furthermore, algorithms are used by companies such as Uber for the distribution of work and the awarding of rewards.

The use of algorithms can streamline these processes and may cut costs, since fewer people are needed for the recruitment and assessment of potential employees. However, the use of these algorithms is not without risk. These algorithms might (unintentionally) discriminate against employees, as illustrated by the following examples.

Amazon

Amazon’s recruiting tool was created to automate the search for top talent by reviewing job applicants’ resumes and selecting the most talented applicants. The tool was trained to observe patterns in resumes submitted over the preceding 10-year period, most of which came from men. To prevent this imbalance from affecting the outcome, Amazon made the historical data gender-blind. Despite this, the recruiting tool taught itself to prefer male applicants over female ones. It learned to prefer language predominantly used by men, such as “executed” or “captured,” and to penalize resumes that included words such as “women’s”. The recruiting tool was eventually shut down by Amazon.

Uber

Another example is Uber’s algorithm that connects drivers and passengers and determines the pay per fare. Even though the work assignments were made by a gender-blind algorithm and the pay per fare was based on a transparent formula, it was found that men made roughly 7 percent more per hour than women. The algorithm favored men since they on average work for Uber for a longer period, tend to drive faster and more hours, drive in higher-paying locations at more lucrative times and choose to drive longer fares.

Algorithmic Discrimination

The use of smart algorithms to assess (potential) employees is supposed to make the decision-making process more objective. However, as the examples above show, algorithms designed to eliminate biases may also introduce or amplify them. Algorithms may thus lead to unjustifiable discriminatory decision-making. How can algorithms lead to employment discrimination?

Human Biases

It should not be forgotten that algorithms are, in the end, human constructs: algorithms are invented, programmed and trained by humans. The choices made by humans while programming and training an algorithm affect its operation and outcomes. Thus, algorithms are not free of human influence.

Furthermore, algorithms are trained on historical data. If this training data is biased against certain individuals or groups, the algorithm will replicate the human bias and learn to discriminate against them. The selection of the training data is also important: data that is outdated, incorrect, incomplete or unrepresentative may lead to machine learning mistakes and misinterpretations. Ultimately, algorithms are only as good as the data they are trained on. This is also referred to as “garbage in, garbage out” or “discrimination in, discrimination out.”
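The "garbage in, garbage out" mechanism can be sketched in a few lines of Python. The hiring history below is entirely fabricated and deliberately skewed; a model that simply learns each group's historical hire rate reproduces that bias in its scores:

```python
from collections import defaultdict

# Fabricated, deliberately biased hiring history: (gender, hired).
history = [("m", 1), ("m", 1), ("m", 1), ("m", 0),
           ("f", 0), ("f", 0), ("f", 0), ("f", 1)]

# "Training": count hires and applications per group.
totals = defaultdict(lambda: [0, 0])  # gender -> [hired, applied]
for gender, hired in history:
    totals[gender][0] += hired
    totals[gender][1] += 1

def predicted_score(gender):
    hired, applied = totals[gender]
    return hired / applied

print(predicted_score("m"))  # 0.75 - the bias in the data...
print(predicted_score("f"))  # 0.25 - ...becomes bias in the output
```

Nothing in the code itself is malicious; the discriminatory output comes entirely from the training data.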

Employers often do not aim to discriminate against (potential) employees. However, due to the choices made during the development process and the training data used, they may (unintentionally) create a discriminatory algorithm.

Protected Attributes

Discrimination may occur when the training data explicitly includes information regarding protected attributes, such as gender, race or ethnic or social origin. Based on the data, the algorithm can learn that a certain gender, race or other attribute is preferable.

In order to prevent this, some employers remove all protected attributes from the training data. Employers often believe that when the algorithm is ignorant of variables such as gender or race, it cannot discriminate on these grounds. However, as the examples of Amazon and Uber illustrate, even excluding specific attributes such as gender or race as input variables does not prevent the algorithm from producing biased output. In such a case, so-called “proxy information” may cause an algorithm to become biased. As the example of Amazon’s algorithm shows, the language in which someone expresses themselves may indirectly indicate their gender. A zip code may indirectly indicate someone’s race or ethnic or social origin. Excluding protected attributes therefore does not appear to be a sufficient safeguard against algorithmic discrimination.
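A short Python sketch, again with invented data, shows why removing the protected attribute is not enough. Gender does not appear anywhere in the dataset below, but a language feature that (in this toy example) happens to correlate with gender carries the same information:

```python
# Invented training data with gender removed. "uses_word_executed" is a
# language feature that, in this toy dataset, correlates with gender,
# echoing the Amazon example; "hired" reflects past biased decisions.
candidates = [
    {"uses_word_executed": 1, "hired": 1},  # resumes that were mostly male
    {"uses_word_executed": 1, "hired": 1},
    {"uses_word_executed": 1, "hired": 0},
    {"uses_word_executed": 0, "hired": 0},  # resumes that were mostly female
    {"uses_word_executed": 0, "hired": 0},
    {"uses_word_executed": 0, "hired": 1},
]

def hire_rate(feature_value):
    group = [c for c in candidates if c["uses_word_executed"] == feature_value]
    return sum(c["hired"] for c in group) / len(group)

# Scoring on the proxy feature reproduces the gendered pattern,
# even though gender itself was never an input variable.
print(round(hire_rate(1), 2))  # 0.67
print(round(hire_rate(0), 2))  # 0.33
```

This is why simply deleting a column from the data does not make an algorithm blind to the attribute that column recorded.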

Black Box

Detecting algorithmic discrimination is not easy, especially since smart algorithms are increasingly complex. Algorithms are often described as a “black box”: the input – for instance, applicants’ resumes – and the output of the algorithm – for instance, which applicant will be invited for a job interview – are clear. However, how the algorithm came to this conclusion is highly opaque.

Due to the complexity and opacity of these algorithms, it is difficult for employers to assess an algorithm’s decision-making process and its results. Automated employment-related decisions based on these algorithms are therefore often subject to very little human oversight. However, under Article 22 of the General Data Protection Regulation (GDPR), employers are prohibited from subjecting (potential) employees to a decision based solely on automated processing. Thus, human decision-making cannot be fully replaced by algorithms. Furthermore, it must always be explainable how and why a certain decision was made.

Conclusion

The use of algorithms can be very useful for employers. However, although algorithms have the potential of objectifying employment-related decisions, they are also prone to amplify bias. The risk that these algorithms could (unintentionally) lead to discriminatory results should not be overlooked.

Employers will have to adapt the working relationship with their employees to the use of algorithms. When developing and using machine learning algorithms, employers have to be aware of privacy laws. For this reason, employers should introduce a system of human control and remain capable of explaining how a decision was made. Furthermore, care should be taken to ensure that the use of algorithms does not come at the expense of equal treatment rights. After all, the use of algorithms in decision-making poses a risk to an employee’s right to equal treatment. In this context, consideration should be given to involving an employee representative body, such as a works council (especially when an algorithm is used in the context of a reward or bonus system), and to laying down rules on the use of algorithms in a Code of Conduct or an employee handbook.
