Self-Driving Cars & Consequentialism
By Ariel Sykes, Assistant Director of the Ethics Institute

The following lesson plan comes from the required 9th-grade course Introduction to Ethics. In this course, students learn about the different ethical frameworks and theories that can be used to make ethical decisions. As part of the “consequentialism” unit, students are introduced to the classic philosophical thought experiment of the “Trolley Problem.” To make the scenario more realistic, I have introduced the variation of “self-driving cars” and use the simulation provided by MIT at moralmachine.net. After completing the activity, students respond to some self-reflection questions, and we then use questions they generate to hold a full-class discussion.
Exploring Consequences Lesson Plan
The Value of Thought-Experiments in Ethics Education
As an ethics educator, I often use thought experiments to introduce the idea of ethical decision making. While highly contrived and rarely nuanced enough to be realistic, thought experiments can help us seriously consider the values and ethical frameworks we bring to an ethical dilemma. While we might not, for example, ever be in a position to decide who lives and who dies (as in the Trolley Problem), we can learn something about what we value by engaging in the imaginative “what if.” During a worldwide pandemic, many of my “go-to” thought experiments carry an added weight of relevance for those involved. Students are encouraged to find parallels between the thought experiment and the real world as a way to examine how their decision-making process in the thought experiment may translate to new situations.
Thought experiments are valuable for modeling the ethical skills and competencies of identifying possible options, considering various stakeholders, uncovering hidden assumptions and biases, and communicating one’s reasoning to others. There is also the added value of the fun and silliness of these “extreme scenarios,” which invite students into the activity of ethical thinking and questioning. While it may take some work to tamp down participants’ creative alternatives or modifications aimed at “avoiding the dilemma” or “making the decision easier,” you can eventually get people to seriously consider, “If I had to make this impossible choice, what would I do and why?” Overall, I find thought experiments to be helpful reflective tools that encourage people to examine what and how they think, begin to hear and understand how other people in their lives may think differently, and begin to be open to changing their minds. In the next section, you will hear directly from students about their experience with the self-driving car thought experiment.
Student Reflections on “Self-Driving Car Simulation”
How much does “saving more lives” matter to you, according to the analysis of your selection data from the Moral Machine simulation?
“Saving more lives mattered very much to me. While one life is not more or less important than the other, the majority should be prioritized in the case of life and death. I’m not sure I would say that in other situations, though.”
“Saving more lives matters to me most of the time. I think that it sometimes depends on the person involved though. For instance, if they were strangers that I didn’t know, I would probably save the most people. But if they were someone important to me, such as my family or friends, I would probably choose to save them even if there were fewer people. This makes me think about how AI cars may choose to save passengers always, since they are generally people you care about.”
“I think saving the most lives matters more to me than who the person is. I chose to save other people’s lives before mine. I don’t want to be selfish about it. If it was my fault that this car crash is happening or it is my car, then I think I deserve to not be saved over the other people.”
What is something that surprised you about the data analysis of your choices?

“Something that surprised me was that the analysis recorded how I reacted to saving the passengers or saving the pedestrians. During the activity I only thought about this reason once, and was instead focused on the number of people being killed. I realize now that the person in the car chose to go in that car, but the person walking didn’t choose to become involved in a car accident.”
“One category that surprised me was the social value someone has in society. It said that my social value preference was very high and influenced my decisions. Yet, I believe that all people are equal regardless of their social class, gender, or career. They also all have the same right to want to live and to be protected from harm.”
“Something that surprised me was the gender preference category. I had not realized how much I was swayed towards saving my own gender. I feel like this really highlights a gender bias around who’s saved and who’s killed. I was unaware of this bias, and it helped me think about how I can eliminate this when it comes to my decision-making.”
“It really surprised me that I was more likely to save dogs than humans. I would have thought that I would have saved more humans because they are the same species as me. I may have chosen to save more dogs because I have a pet and am more empathetic towards them. I don’t think I would want a self-driving car prioritizing animals though.”
What is something that you are now wondering about?
How do people subconsciously or consciously determine other people’s value and worth? How should we navigate this to ensure that we can make ethical decisions?
Should there be a preference for saving younger people rather than older people in life-and-death situations?
How should we program self-driving cars and other AI systems that make life-and-death decisions?
Should a self-driving car always protect its passengers?
When someone dies in a self-driving car accident, who is ethically responsible?
Could a self-driving car take into account things other than the number of people saved versus killed? If so, would it be ethical to place different levels of importance on certain people?
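These last questions point toward a design problem: what decision rule would a purely consequentialist car actually follow? Below is a minimal, hypothetical sketch in Python of the “save the most lives” rule many students reported using in the simulation. The Outcome class and choose function are my own illustrative names, not anything from moralmachine.net or real vehicle software, and the sketch deliberately ignores every factor the students wondered about, such as age, gender, or passenger status.

```python
# A minimal, hypothetical sketch of a purely consequentialist decision rule
# for trolley-style dilemmas. Illustrative only; not from moralmachine.net
# or any real self-driving car system.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action the car could take and who would be harmed."""
    label: str        # e.g. "swerve" or "stay in lane"
    lives_lost: int   # how many lives would be lost under this action

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome that minimizes lives lost -- 'saving more lives'
    is the only consideration this rule knows about."""
    return min(outcomes, key=lambda o: o.lives_lost)

# Example dilemma: stay in lane and hit three pedestrians,
# or swerve and sacrifice the single passenger.
dilemma = [Outcome("stay in lane", lives_lost=3), Outcome("swerve", lives_lost=1)]
print(choose(dilemma).label)  # -> "swerve"
```

Seeing how little such a rule captures can itself prompt discussion about whether, and how, any of the other considerations the students raised above should ever be encoded.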