Driverless Car Morals – Who Lives And Who Dies?

How do driverless cars determine who lives and who dies in a worst-case scenario? To many, it seems a genuine moral dilemma.

Picture the worst case: you’re cruising along when your brakes fail and your vehicle heads toward a crossing full of pedestrians. What do you do? Run over the group ahead, or swerve to the left and hit a woman with her two kids?

And what happens when you’re the passenger, chauffeured by a driverless vehicle? What would the vehicle do? How would it react?

Scientists at the Massachusetts Institute of Technology are asking people around the world how they think an autonomous vehicle should behave in such life-and-death circumstances.

The objective is not only to develop more advanced algorithms and ethical rules to guide robo-cars, but also to learn what it will take for society to accept this form of transport and be comfortable with it.

Their conclusions present a predicament for auto manufacturers and governments keen to introduce autonomous vehicles on the promise that they’ll be safer than human-controlled cars: the public prefers an autonomous vehicle to act for the greater good, sacrificing its passenger if that could save a crowd of bystanders.

"There is a real risk that if we don't understand those psychological barriers and address them through regulation and public outreach, we may undermine the entire enterprise," said Iyad Rahwan, an associate professor at the MIT Media Lab. "People will say they're not comfortable with this. lt would stifle what I think will be a very good thing for humanity."

After publishing research last year based on a survey of U.S. residents, Rahwan and colleagues at the University of Toulouse in France and the University of California, Irvine, are now expanding their surveys and examining how responses vary across countries.

MIT researchers created a website known as the Moral Machine, which the scientists have been using to let people play an active role in deciding who lives or dies: a jaywalking pedestrian, or an autonomous vehicle carrying several dogs? A pregnant woman, or a homeless man?

Initial, unpublished findings drawn from replies from more than 160 countries show broad differences between West and East. Judgments that reflect the utilitarian principle of minimizing total harm above all else are more pronounced in the United States and Europe, Rahwan said.
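To make the utilitarian principle concrete, here is a minimal, hypothetical sketch of a planner that scores each evasive maneuver by its expected total harm and picks the lowest. The Maneuver class, the probabilities and the casualty counts are all invented for illustration; nothing here reflects MIT’s research code or any manufacturer’s software.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical evasive option the planner could take."""
    name: str
    crash_probability: float   # assumed chance this maneuver ends in a collision
    people_at_risk: int        # assumed number of people harmed if it does

def expected_harm(m: Maneuver) -> float:
    # Utilitarian scoring: expected number of people harmed.
    return m.crash_probability * m.people_at_risk

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Pick the option that minimizes total expected harm,
    # regardless of who (passenger or bystander) bears it.
    return min(options, key=expected_harm)

if __name__ == "__main__":
    options = [
        Maneuver("continue straight", crash_probability=0.9, people_at_risk=5),
        Maneuver("swerve left", crash_probability=0.8, people_at_risk=3),
        Maneuver("swerve into barrier", crash_probability=0.7, people_at_risk=1),  # the passenger
    ]
    print(choose_maneuver(options).name)  # -> "swerve into barrier"
```

Under this purely utilitarian rule the car sacrifices its own passenger whenever the arithmetic favors it, which is exactly the outcome survey respondents endorse in the abstract but are uneasy riding with.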

Just 5 miles from MIT's Media Lab in Cambridge, the first self-driving car to roll out on Massachusetts public roads began testing this month in Boston's Seaport District.

"We approach the problem from a bit more of a practical, engineering perspective," said NuTonomy CEO Karl Iagnemma, who owns a Cambridge-based company which has also piloted self-driving taxis in Singapore.

Iagnemma says the study’s moral dilemmas are "vanishingly rare." Designing a safe vehicle, not a "sophisticated ethical creature," is the focus of his engineering team as they tweak the software that guides their electric Renault Zoe past Boston snow banks.

"When a driverless car looks out on the world, it's not able to distinguish the age of a pedestrian or the number of occupants in a car," Iagnemma said. "Even if we wanted to imbue an autonomous vehicle with an ethical engine, we don't have the technical capability today to do so."

Focusing too much on the stark "trolley problem" risks marginalizing the study of how best to address self-driving ethics, said Noah Goodall, a scientist at the Virginia Transportation Research Council. Engineers already program cars to make moral choices, such as when they slow down and leave space after detecting a bicyclist.

"All these cars do risk management. It just doesn't look like a trolley problem," Goodall said.

Rahwan says he agrees with self-driving enthusiasts that freeing vehicles from human error could save many lives. His concern, however, is that progress could stall without a new social compact that addresses moral trade-offs.

Current traffic laws and human behavioural norms have created "trust that this entire system functions in a way that works in our interests, which is why we're willing to fit into large pieces of metal moving at high speeds," Rahwan said.

"The problem with the new system it has a very distinctive feature: algorithms are making decisions that have very important consequences on human life," he said.