
Moving Beyond the Trolley Problem - Studying Moral Behavior in Drivers

by janrinok from SoylentNews on (#6GZBQ)

In the early days of autonomous driving development, there was some press about researchers using the "trolley problem" (kill one person "on purpose" vs. "do nothing" and kill many) to think about how robot cars should behave. Now researchers at North Carolina State University have broken that big moral question down into smaller, more mundane pieces, in an attempt to see what ordinary human drivers think and do. Press release at: https://www.autonomousvehicleinternational.com/news/adas/ncsu-researchers-ditch-the-trolley-problem-to-help-autonomous-vehicles-make-moral-decisions.html and full paper at https://link.springer.com/article/10.1007/s00146-023-01813-y

... "The typical situation comprises a binary choice for a self-driving car between swerving left, hitting a lethal obstacle, or proceeding forward, hitting a pedestrian crossing the street. However, these trolley-like cases are unrealistic. Drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?"
[...]
"For example, if someone is driving 20mph over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There's currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations."

To address that lack of data, the researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations. The researchers created seven different driving scenarios, such as a parent who has to decide whether to violate a traffic signal while trying to get their child to school on time. Each scenario is programmed into a virtual reality environment, so that study participants have audiovisual information about what drivers are doing when they make decisions, rather than simply reading about the scenario.

For this work, the researchers built on something called the Agent Deed Consequence (ADC) model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that resulted from the deed.
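To make that structure concrete, here is a minimal illustrative sketch in Python (not taken from the paper) of how a single scenario could be encoded along those three dimensions. The scoring function is only a toy placeholder: the study's actual weighting of agent, deed and consequence is not described here.

```python
from dataclasses import dataclass

@dataclass
class ADCScenario:
    """One moral-judgment stimulus in the Agent-Deed-Consequence framing.

    Each field is True when that component is "positive" (e.g. a caring
    parent, braking at a yellow light, arriving safely) and False when it
    is "negative" (an abusive parent, running a red light, causing a crash).
    """
    agent_positive: bool        # character or intent of the driver
    deed_positive: bool         # what the driver actually does
    consequence_positive: bool  # outcome of the deed

def toy_judgment(s: ADCScenario) -> int:
    """Toy score: count how many of the three components are positive.

    This is only a placeholder showing that all three factors feed into
    a single judgment; the ADC model itself does not prescribe this formula.
    """
    return sum([s.agent_positive, s.deed_positive, s.consequence_positive])
```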

Researchers created eight different versions of each traffic scenario, varying the combinations of agent, deed and consequence. For example, in one version of the scenario where a parent is trying to get the child to school, the parent is caring, brakes at a yellow light, and gets the child to school on time. In a second version, the parent is abusive, runs a red light, and causes an accident. The other six versions alter the nature of the parent (the agent), their decision at the traffic signal (the deed), and/or the outcome of their decision (the consequence).
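Because each of the three components can be framed positively or negatively, the eight versions amount to a 2 x 2 x 2 factorial design. A short, hypothetical sketch that enumerates them, using labels borrowed from the school-run example above (illustrative wording, not the paper's exact stimuli):

```python
from itertools import product

# Two levels for each ADC component, framed as in the school-run scenario.
agents = ["caring parent", "abusive parent"]
deeds = ["brakes at the yellow light", "runs the red light"]
consequences = ["gets the child to school on time", "causes an accident"]

# 2 x 2 x 2 = 8 versions of the same traffic scenario.
for i, (agent, deed, consequence) in enumerate(
        product(agents, deeds, consequences), start=1):
    print(f"Version {i}: {agent}, {deed}, {consequence}")
```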

To date, the researchers have conducted small pilot studies; the next step is a much larger study with thousands of human subjects.

Do we want robot cars to make the same routine decisions that some average human makes? I think it's a given that following some traffic rules (e.g., the speed limit) to the letter is likely to foul up traffic in many situations. How about in other situations?


