This MIT professor says there is still one huge challenge with self-driving cars

Danielle Muoio
Tech Reporter, Tech Insider

MIT associate professor Iyad Rahwan has asked 3 million people to consider the “Trolley problem” when it comes to self-driving cars.

The Trolley problem goes like this: a runaway trolley is barreling toward five people on a track who cannot move. But you have the option to pull a lever and send it to a side track where you see one person standing. What would you do?

But as Rahwan puts it, the Trolley problem gets thornier when applied to self-driving cars. The classic scenario puts the ethical burden on a person; if a self-driving car is in a lose-lose situation and must make a choice, we’re asking a robot in our everyday environment to make the call.

“The idea of a robot having an algorithm programmed by some faceless human in a manufacturing plant somewhere making decisions that have life-and-death consequences is very new to us as humans,” Rahwan told Business Insider.

Rahwan’s work highlights the difficulty of assessing what should happen if a self-driving car gets into an accident. Should cars be programmed to act a certain way in dicey scenarios?

The Trolley debate has lingered in the background for quite some time as automakers advance their self-driving car efforts. Rahwan helped bring it to the surface in October 2015, when he co-wrote the paper “Autonomous Vehicles Need Experimental Ethics.”

But the debate arguably moved to the forefront when Rahwan launched MIT’s “Moral Machine,” a website that poses a series of ethical conundrums to crowdsource how people think self-driving cars should react in tough situations. The Moral Machine is an extension of Rahwan’s 2015 study.

Rahwan said that since launching the website in August 2016, MIT has collected 26 million decisions from 3 million people worldwide. He is currently analyzing whether cultural differences play a role in the responses.

Rahwan admits the framing isn’t without its flaws. The Trolley problem is purposefully simple so it’s easy to understand, which lets researchers cleanly assess how people reason about such choices.

“The downside of that is it looks very unrealistic and looks like a situation that would never happen or be very rare,” he said.

"Still, that doesn’t mean these aren’t questions worth asking", Rahwan said.

“They need an answer to this question because, ultimately, it’s not about a specific scenario or accident, it’s about the overall principle that an algorithm has to use to decide relative risk,” he said.
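
To make “deciding relative risk” concrete, here is a minimal sketch, in Python, of the kind of principle Rahwan is describing: a planner that scores each candidate maneuver by expected harm and picks the lowest. Every name, probability, and weight below is an invented assumption for illustration, not any manufacturer’s actual policy.

```python
# Hypothetical sketch only: rank candidate maneuvers by expected harm.
# All fields and numbers are assumptions; no real vehicle is known to
# be programmed this way.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_prob: float       # estimated probability the maneuver ends in a collision
    expected_casualties: float  # expected number of people harmed if it does

def expected_harm(m: Maneuver) -> float:
    """Relative-risk score: chance of collision times expected casualties."""
    return m.collision_prob * m.expected_casualties

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest expected harm."""
    return min(options, key=expected_harm)

options = [
    Maneuver("brake hard in lane", collision_prob=0.3, expected_casualties=1.0),
    Maneuver("swerve onto shoulder", collision_prob=0.1, expected_casualties=2.0),
]
print(choose_maneuver(options).name)  # "swerve onto shoulder", under these made-up numbers
```

The contentious part is not the code but the scoring rule: any choice of expected_harm silently encodes a moral position, which is exactly the question Rahwan says needs an answer.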

Some automakers have publicly addressed this question.

In October, Christoph von Hugo, the manager of driver-assistance systems at Mercedes-Benz, said that future autonomous vehicles would put the driver first in a lose-lose situation.

"If you know you can save at least one person, at least save that one. Save the one in the car," he said in an interview with Car and Driver. "If all you know for sure is that one death can be prevented, then that's your first priority."

Following the story’s publication, Mercedes told several publications that the quote had been taken out of context. A Daimler spokesperson reiterated that stance in an email to Business Insider:

“For Daimler it is clear that neither programmers nor automated systems are entitled to weigh the value of human lives,” the spokesperson wrote. “There is no instance in which we’ve made a decision in favor of vehicle occupants. We continue to adhere to the principle of providing the highest possible level of safety for all road users.”

Rahwan also said it’s unlikely engineers will program a specific decision into their algorithms.

“No one is going to build a car that says the life of one child is worth one-and-a-half adults, or something like that. This is unlikely,” Rahwan said.

But automakers should be transparent with their data so independent researchers can assess whether certain self-driving cars are behaving in a biased fashion, Rahwan said. For example, if the data show a self-driving car disproportionately harming a specific group, such as hitting cyclists more often than pedestrians, programmers should revisit their algorithms to see what’s going wrong.
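
The audit Rahwan describes can be sketched with invented data: normalize incident counts by how often each road-user group is encountered, then flag any group harmed at a disproportionate rate. The record format, the numbers, and the 2x threshold below are all assumptions for illustration.

```python
# Hypothetical audit sketch: compare harm rates across road-user groups
# in published incident data. All data and the 2x threshold are invented.
from collections import Counter

incidents = Counter({"pedestrian": 12, "cyclist": 30, "motorist": 18})
encounters = {"pedestrian": 100_000, "cyclist": 40_000, "motorist": 200_000}  # exposure

rates = {group: incidents[group] / encounters[group] for group in encounters}
baseline = sum(incidents.values()) / sum(encounters.values())

for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    flag = "  <-- disproportionate; revisit the algorithm" if rate > 2 * baseline else ""
    print(f"{group}: {rate:.2e} incidents per encounter{flag}")
```

With these invented numbers the audit flags cyclists, which is exactly the pattern Rahwan says should send programmers back to their algorithms.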

The National Highway Traffic Safety Administration acknowledged in a September report that self-driving cars could favor certain decisions over others even if they aren't programmed explicitly to do so.

Self-driving cars will rely on machine learning, a branch of artificial intelligence that allows computers, or in this case cars, to learn over time. Since cars will learn how to adapt to the driving environment on their own, they could learn to favor certain outcomes.

"Even in instances in which no explicit ethical rule or preference is intended, the programming of an HAV may establish an implicit or inherent decision rule with significant ethical consequences," NHTSA wrote in the report, adding that manufacturers must work with regulators to address these situations.

Rahwan said programming for specific outcomes isn’t the right approach, but he thinks companies should do more to let the public know they are considering the ethics of driverless vehicles.

"In the long run, I think something has to be done. There has to be some sort of guideline that’s a bit more specific, that’s the only way to obtain the trust of the public," he said.
