Bias and discrimination through AI and algorithms

Objective:

Learners critically examine the supposed neutrality of algorithms and understand how AI systems can reproduce and cement social inequalities.

Content and methods:

The worksheet provides a theoretical introduction to the problems of machine learning and illustrates these using a case study. By analyzing factual texts and working through reflection questions, learners examine mechanisms such as the “black box problem,” proxy variables, and the ethical consequences of algorithmic decisions.

Skills:

  • Recognizing bias in technical systems and questioning the objectivity of data.
  • Discussing responsibility and accountability in the context of automated processes.

Target group:

Grade 11 and above.

Subjects

Non-subject-specific content, Ethics, Philosophy

Bias and discrimination through AI and algorithms

Introduction

This worksheet takes a critical look at the supposed neutrality of artificial intelligence (AI). We examine how human biases are incorporated into mathematical models and what social consequences this “digital discrimination” can have.

Between code and prejudice: The ethical crux of algorithmic decision-making

The idea that mathematical formulas represent the ultimate authority for objective justice is deeply rooted in our technocratic worldview. Where humans are fallible due to emotions, fatigue, or unconscious biases, algorithms promise cool, data-based neutrality. But upon closer analysis, this belief in the incorruptibility of machines proves to be a dangerous fallacy. In fact, the widespread implementation of artificial intelligence (AI) systems threatens not to eliminate existing social inequalities, but rather to cement them in technical form and remove them from human scrutiny. The core problem lies in the fundamental architecture of machine learning itself.

Unlike traditional software, these systems are not defined by rigid rules programmed by humans. Instead, an AI system independently extracts its decision-making logic from huge amounts of historical training data. This is where a fundamental principle of computer science comes into play: “garbage in, garbage out.” If past data sets are characterized by systematic discrimination, social stereotypes, or one-sided power structures, the AI does not recognize these patterns as wrong, but as desirable statistical regularities. For example, an algorithm trained to predict which borrowers are likely to repay will inevitably favor those groups that have historically been privileged. The machine thus does not learn fairness; it optimizes the status quo.
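To make this mechanism tangible, the following minimal sketch (in Python, using entirely synthetic, hypothetical data and made-up variable names) trains a simple model on “historical” lending decisions that penalized one group. The model is never told to discriminate; it simply learns the past approval gap as a useful statistical regularity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = historically favored group, 1 = disadvantaged group
income_k = rng.normal(50, 12, n)     # income in thousands (synthetic)

# Historical approvals: driven by income, plus a direct penalty for group 1 --
# this stands in for past discriminatory lending practice.
logit = 0.1 * (income_k - 45) - 1.2 * group
approved = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Train a model on the biased historical decisions.
X = np.column_stack([income_k, group])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {approved[group == g].mean():.2f}, "
          f"model approval {pred[group == g].mean():.2f}")
# The model's approval rates mirror the historical gap: garbage in, garbage out.
```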

Identifying discrimination mechanisms poses a particular challenge. Even when sensitive characteristics such as gender, religious affiliation, or ethnic origin are explicitly removed from the data sets, modern systems develop a remarkable ability to circumvent these filters. They use so-called proxy variables—seemingly neutral substitute data such as postal codes, internet usage behavior, or even linguistic nuances in a job application—that correlate highly with protected identity characteristics. This form of indirect discrimination is particularly insidious because it appears to be mathematically justified, while in essence it reproduces racist or classist structures.
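The next sketch (again with synthetic, hypothetical data) illustrates proxy discrimination: the protected attribute is removed from the training features, but a correlated stand-in, here a fictitious “postal region,” lets the model reproduce the gap anyway:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                           # protected attribute -- NOT used as a feature
postal_region = (group + rng.binomial(1, 0.1, n)) % 2   # proxy that matches the group ~90% of the time
income_k = rng.normal(50, 12, n)                        # income in thousands (synthetic)

# Historical decisions penalized group 1 directly.
logit = 0.1 * (income_k - 45) - 1.2 * group
approved = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Only "neutral" features are used for training -- the protected attribute is excluded.
X = np.column_stack([income_k, postal_region])
pred = LogisticRegression().fit(X, approved).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The gap persists: the proxy variable carries the protected information.
```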

Ethical oversight is further complicated by the lack of transparency in these processes, often referred to as the “black box problem.” Highly complex neural networks base their decisions on millions of parameters whose causal relationships are difficult to trace, even for the developers themselves. When an AI denies a person access to insurance or classifies their medical treatment as less urgent, there is often no comprehensible justification of the kind that a legal review or an ethical challenge would require. Algorithmic power thus escapes democratic accountability.

Ultimately, the debate about bias in AI shows that technology never exists in a vacuum: it is always a reflection of the society that feeds it. The crucial question for the future will therefore not be how we improve the mathematics, but how we ensure that automation does not become the unquestioned continuation of our own prejudices.

Algorithmic Discrimination in Credit Lending: The Case of the Apple Card

In November 2019, a tweet by tech entrepreneur David Heinemeier Hansson sparked widespread debate about algorithmic bias when he revealed that his Apple Card credit limit was 20 times higher than that of his wife, despite her possessing a superior credit score. This incident highlighted a significant issue: the Apple Card's credit assessment algorithm allegedly exhibited gender bias, offering lower credit limits to women than to similarly qualified men.

The technical root of this bias lies in the algorithm's design, which, while not explicitly encoding gender as a variable, may inadvertently incorporate gender-related proxies. Factors such as shopping patterns or spending habits, which correlate with gender, can influence credit decisions. Such correlations can reproduce historical biases present in the training data, leading to discriminatory outcomes even if gender is not directly considered.

Despite assurances from Goldman Sachs, the issuer of the Apple Card, that no gender bias existed, the opacity of the algorithm—a "black box"—made it impossible to verify these claims conclusively. The algorithm operates without transparency, and its decisions cannot be easily audited or explained. This lack of clarity complicates accountability and regulatory oversight, as traditional anti-discrimination laws struggle to keep pace with the rapid evolution of artificial intelligence and its applications.

AI can exacerbate discrimination in credit lending because it automates biased decision-making processes at scale. Without rigorous examination and adjustment of training data and algorithmic criteria, biases persist and proliferate, limiting socio-economic opportunities and perpetuating gender disparities.
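One simple form such an examination can take is a group-level audit of outcomes. The sketch below applies the “four-fifths rule,” a common heuristic borrowed from US anti-discrimination practice rather than anything specific to the Apple Card case; the approval rates shown are purely illustrative:

```python
def disparate_impact(rate_disadvantaged: float, rate_favored: float) -> float:
    """Ratio of approval rates between groups; values below 0.8 are a common red flag."""
    return rate_disadvantaged / rate_favored

# Hypothetical audit figures: 35% approval for the disadvantaged group, 62% for the favored group.
ratio = disparate_impact(0.35, 0.62)
print(f"disparate impact ratio: {ratio:.2f} -> "
      f"{'review needed' if ratio < 0.8 else 'within threshold'}")
```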

The societal and ethical implications are profound. As AI continues to permeate financial systems, it challenges the principles of fairness and equality. Discriminatory algorithms undermine public trust in these technologies and highlight the need for robust regulatory frameworks and ethical considerations in AI development. Ensuring equitable access to credit is not only a matter of justice but also a prerequisite for fostering inclusive economic growth.

For further reading, consult these sources: BBC News, The New York Times, WIRED.