AI at Work: Tips on Battling Bias

Ius Laboris

[author: Christine Norkus]*

The use of workplace artificial intelligence (‘AI’) is becoming increasingly commonplace for employers in Germany. It can bring significant benefits to HR by increasing efficiency and saving costs. However, it is essential that any AI system does not discriminate against applicants or employees, otherwise employers could be liable. We explore the issues below.
An AI system is defined under the EU’s AI Act as a “[…] machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Put very simply, AI uses algorithms to make predictions based on probabilities: from the dataset used, it selects the most likely solution. However, this does not necessarily mean the solution is correct. The proposed solution depends on the system's programming, the data used for training and, in the case of self-learning AI, the learned rules. An AI's output must therefore always be verified, otherwise serious risks can arise. In this article, we explore what those risks look like for HR professionals, particularly regarding discrimination and biased decision-making, before setting out some key tips for employers.
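To make this concrete, here is a minimal, purely illustrative sketch in plain Python (all names and figures are invented) of how a probabilistic model picks the most likely outcome, and why "most likely" is not the same as "correct":

```python
# Minimal sketch: a "model" that holds estimated probabilities over
# outcomes and always picks the most likely one. All figures invented.
def predict(probabilities: dict) -> str:
    """Return the outcome with the highest estimated probability."""
    return max(probabilities, key=probabilities.get)

# Hypothetical screening estimates for one applicant.
estimates = {"invite": 0.55, "reject": 0.45}

choice = predict(estimates)
print(choice)  # "invite" -- the most likely outcome...
# ...yet the model itself estimates a 45% chance that "reject" was
# right, which is why human verification of AI output is essential.
```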

___

Discrimination and AI

In principle, computer systems, including AI, do not discriminate. They are initially neutral and impartial. However, if the AI is trained with biased data, this can lead to future results reflecting those biases. Issues of discrimination can also arise as early as in the development phase of the AI system. If certain biases or discriminatory views are overemphasised during programming, even unconsciously, they can distort the AI's outputs.

This challenge is only made harder by the complexity of AI algorithms, sometimes operating as ‘black boxes’, meaning their decisions can be opaque and difficult to understand. As a result, identifying and correcting the source of discriminatory patterns in AI decision-making and outputs can be particularly difficult.

___

Risks in the HR context

Discrimination in AI systems often arises from the use of biased training data. For example, if an AI system is trained with applicant data in which applicants with German or European-sounding names were consistently preferred over applicants with Arabic-sounding names, it is likely that the AI will continue this pattern. The same applies to every protected characteristic.
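A hypothetical illustration of this mechanism, in plain Python with invented data: a naive "model" that simply learns the historical invitation rate per group will carry any past preference forward into its future recommendations.

```python
# Hypothetical demo: a naive screening "model" that learns the
# historical invitation rate per applicant group and continues it.
# All data is invented for illustration only.
from collections import defaultdict

# Biased historical decisions: group A was consistently preferred.
history = [
    ("group_a", "invited"), ("group_a", "invited"), ("group_a", "invited"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "invited"),
]

# "Training": count invitations and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [invited, total]
for group, outcome in history:
    counts[group][1] += 1
    if outcome == "invited":
        counts[group][0] += 1

def screen(group: str) -> str:
    invited, total = counts[group]
    # The model simply perpetuates the historical pattern.
    return "invited" if invited / total >= 0.5 else "rejected"

print(screen("group_a"))  # invited  -- the past preference persists
print(screen("group_b"))  # rejected -- the past disadvantage persists
```

The point of the sketch is that nothing in the code mentions any protected characteristic; the bias enters entirely through the training data, which is why it can be hard to spot.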

In the HR context, these sources of error can lead to employees or applicants being discriminated against based on various criteria when using AI. Discrimination based on race, ethnic origin, gender, age, religion, ideology, sexual identity, or disability is prohibited in Germany and can result in claims for damages.

___

Legal protection mechanisms

Depending on their use, the application of AI systems can have far-reaching consequences and so some are classified as being 'high-risk' in the AI Act. These include AI systems that make or directly influence the (pre-) selection of applicants for hiring, promotion, or termination.

If an AI system is classified as high-risk, the employer must fulfil various obligations. Employers, as 'deployers' (the AI Act's term for those operating a system under their own authority), are generally subject to transparency obligations. If an AI system is developed or adapted to operational requirements, the employer may also qualify as a 'provider' under the AI Act. Some additional obligations apply during development, while others are intended to enable the deployer to use the AI correctly.

By fulfilling the obligations under the AI Act, the risk of discrimination can at least be reduced, and employers can better assess the potential issues.

___

AI as a solution?

Although the use of AI systems can create risks, AI can also be used to prevent or reduce discrimination. AI systems are particularly good at recognising and applying patterns, and this can be used to review previous selection processes (i.e. hiring, promotions, and other decisions) for unconscious patterns that indicate discrimination. For example, job descriptions could be analysed to see whether they contain wording that favours certain groups of people, or decisions about promotions or pay increases could be examined to see whether women who work part-time are regularly disadvantaged. Once such patterns have been identified, the employer can act on them. The results could, for example, be used to design fairer hiring policies or wage systems that promote equal pay and thus reduce the gender pay gap, which, according to the Federal Statistical Office of Germany, was still 16% in 2024 (6% when adjusted).
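As a sketch of what such an audit could look like, here is a minimal example in plain Python. All figures are invented, and the 80% threshold is borrowed from the US 'four-fifths rule' purely as an illustrative benchmark, not a standard under German law:

```python
# Hypothetical audit: compare promotion rates between two groups and
# flag a disparity when the lower rate falls below 80% of the higher.
# All figures are invented for illustration only.
def selection_rate(selected: int, total: int) -> float:
    """Share of the group that received the favourable decision."""
    return selected / total

def disparity_flag(rate_a: float, rate_b: float, threshold: float = 0.8) -> bool:
    """True if the lower rate is less than `threshold` of the higher rate."""
    lower, higher = sorted((rate_a, rate_b))
    return lower / higher < threshold

full_time = selection_rate(30, 100)  # 30% of full-time staff promoted
part_time = selection_rate(12, 100)  # 12% of part-time staff promoted

print(disparity_flag(full_time, part_time))  # True -> pattern worth reviewing
```

A flag here does not prove discrimination; it only identifies a pattern that the employer should examine and, where necessary, correct.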

___

Takeaways for employers

In addition to selecting the right AI solution for the company and observing the legal framework in Germany, employers are well advised to plan carefully how any new AI solution is incorporated into their workplace.

To minimise potential risks, it is important to get it right from the outset. For example, employers should examine whether their IT infrastructure provides an appropriate basis for the use of AI. Employees must also be onboarded in a timely manner to allay their fears and train them in the use of AI. If employees such as managers or HR personnel can correctly interpret the AI output, this human supervision further reduces the risk of errors, including potential discrimination. It is also very useful to integrate new AI systems into an overall digitalisation strategy.

*Kliemt.HR Lawyers

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Ius Laboris
