Ethical Dilemmas - the Trolley Problem with AI

Objective: This worksheet introduces students to the "trolley problem" as a foundational ethical dilemma and encourages them to evaluate moral decision-making through the lens of different philosophical theories and their modern application in artificial intelligence and autonomous driving.


Content and methods: The worksheet uses a multimedia approach, starting with a video-based discussion of the trolley scenario. It then provides informative texts on utilitarianism, deontology, virtue ethics, and moral relativism to facilitate theoretical understanding. Students apply these concepts through comprehension questions, analytical writing tasks, and a creative "pitch" presentation where they role-play as programmers to address the ethical challenges of self-driving cars.


Competencies:

  • Ethical reasoning and moral judgment
  • Application of philosophical theories to modern technology
  • Critical thinking and perspective-taking
  • Argumentation and presentation skills
  • Reading and media literacy


Target group: Grade 10 and above



Subjects

Non-subject specific content, Ethics, Philosophy

The trolley problem

Watch the following video and then share your thoughts on the experiment with the rest of the class:

  • How would you behave?
  • Is there a fundamental difference between the first and the second situation?
  • What other factors could make the situations even more difficult? (For example, imagine the person you would have to sacrifice to save five others is your best friend.)

Further information

Now learn more about this thought experiment in the following text and answer the questions below it.

The Trolley Problem

The trolley problem is a philosophical thought experiment that explores the complexities of ethical decision-making. First introduced by British philosopher Philippa Foot in 1967, it has become a cornerstone in discussions about morality and ethics. The problem presents a moral dilemma involving a choice between two tragic outcomes, forcing individuals to confront the principles underlying their ethical beliefs.

Imagine a scenario: A trolley is hurtling down a track towards five people who are tied up and unable to move. You stand near a lever that can divert the trolley onto another track, where only one person is tied up. The decision you face is whether to pull the lever, sacrificing one life to save five. This seemingly simple choice unveils a multitude of ethical questions and challenges.

The trolley problem is often used to illustrate two predominant ethical theories: utilitarianism and deontology. Utilitarianism, associated with philosophers like Jeremy Bentham and John Stuart Mill, suggests that the correct action is the one that maximizes overall happiness or minimizes suffering. From this perspective, pulling the lever is justified because it saves more lives, thus producing the greatest good for the greatest number.

On the other hand, deontological ethics, championed by Immanuel Kant, focuses on adherence to rules and duties. According to this view, some actions are morally wrong regardless of their consequences. Therefore, actively causing harm by pulling the lever may be seen as inherently unethical, regardless of the outcome.

The trolley problem is not just an academic exercise; it has practical implications, especially in the context of modern technology such as self-driving cars. These vehicles must be programmed to make ethical decisions in potentially life-threatening situations, akin to the trolley scenario. The challenge is designing algorithms that can navigate moral complexity and reflect societal values.

Artificial intelligence, while adept at processing data and following rules, struggles with ethical nuances. It can apply utilitarian principles by calculating the best outcome based on pre-defined variables, but it lacks the ability to understand or prioritize moral values that are not quantifiable. Deontological approaches are similarly problematic for AI, which can follow rules but cannot comprehend the ethical reasoning behind them.
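The point that AI "can apply utilitarian principles by calculating the best outcome based on pre-defined variables" can be made concrete with a toy sketch. All names and numbers below are invented for illustration only; real autonomous-vehicle software is vastly more complex and does not work this way.

```python
# Illustrative sketch only: a toy "utilitarian" decision rule for the
# worksheet's trolley scenario. The action names and harm estimates are
# invented for teaching purposes.

def utilitarian_choice(options):
    """Pick the action with the lowest expected harm (lives lost)."""
    # options: mapping of action name -> expected number of lives lost
    return min(options, key=lambda action: options[action])

# The classic trolley setup, expressed as expected lives lost per action:
outcomes = {"do_nothing": 5, "pull_lever": 1}

print(utilitarian_choice(outcomes))  # -> pull_lever
```

Notice what the sketch also reveals: the rule can only weigh what has first been reduced to a number, which is exactly the limitation the text describes for moral values that are not quantifiable.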

Virtue ethics, a third ethical framework, offers an alternative by emphasizing the development of moral character and virtues such as prudence, justice, temperance, and fortitude. This approach suggests that ethical decisions should be guided by these virtues, rather than rigid rules or calculations. However, the complexity of human virtues is difficult to encode into an AI system, highlighting the limitations of technology in replicating human ethical reasoning.

The trolley problem continues to be a powerful tool for examining ethical principles and their application in real-world scenarios. It prompts us to consider not only the choices we make but the reasons behind them, challenging us to reflect on the values that shape our moral compass. As technology advances, the problem remains relevant, urging us to confront the ethical implications of our decisions and the systems we create.

The exploration of the trolley problem reveals the intricacy of moral philosophy and the challenges posed by emerging technologies. It serves as a reminder of the importance of ethical reflection and the need for a nuanced understanding of the principles that guide our actions.

Select the correct answer from the options provided.


A philosophical reflection

Now read the following text about a philosophical school of thought through which the trolley problem can be considered. Take notes if necessary.

Moral Relativism

Moral relativism is a philosophical viewpoint that suggests the truth or falsity of moral judgments is not universal or objective but instead relative to the traditions, convictions, or practices of an individual or group. This theory posits that different cultures or individuals may have diverse beliefs about what constitutes right or wrong, leading to a variety of moral standards. Consequently, moral relativism asserts that there is no singular moral truth that applies universally to all people at all times.

At its core, moral relativism underscores the idea that moral beliefs and practices are shaped by cultural, societal, and personal factors. It highlights the often-observed phenomenon that people frequently disagree on what actions are deemed moral. This perspective challenges the notion of absolute moral standards and instead advocates for tolerance and understanding of differing moral practices and beliefs. By acknowledging that moral judgments are influenced by context, moral relativism encourages individuals to refrain from imposing their moral standards on others, fostering a more inclusive and pluralistic approach to ethical discussions.

Critics of moral relativism argue that it can lead to moral ambiguity and difficulty in addressing moral wrongdoing. Without a universal standard, determining the morality of actions can become subjective and potentially problematic, especially in cases of human rights violations or ethical dilemmas. Opponents fear that moral relativism might inadvertently justify immoral actions under the guise of cultural differences or personal beliefs.

Despite its criticisms, moral relativism provides a valuable framework for understanding the complexity of moral discourse across different cultures and societies. It emphasizes the importance of context, encouraging individuals to recognize the diversity of moral perspectives and promoting a more empathetic approach to ethical discussions. By accepting the relativity of moral judgments, moral relativism offers an alternative lens through which to view the multifaceted nature of human morality and ethics.

Your notes:


You as a programmer

Now your argumentative skills are needed: Imagine you are a programmer for self-driving cars and a proponent of the school of thought described above. How would you program the car's AI to behave in the event of an unavoidable accident: to protect the greatest number of people, to decide randomly, or to save the vehicle's passengers? Use everything you have learned and gather your arguments into a pitch to your boss, who expects a proposal from you for programming the latest model!

Your notes:


Your pitch presentation

Now give your pitch presentation in front of the rest of the class.


Outlook

How do you envision the future with self-driving cars? Will the philosophical considerations from this worksheet play a role? What dangers might arise from a large number of self-driving cars in traffic? What advantages do you see?

Share your thoughts with the rest of the class.

Sample Solution

Here you can find a sample solution for the programmer's pitch presentation.
