An Ethical Dilemma Muddies The Future Of Self-Driving Cars

Cars are dangerous. In 2014 alone, more than 30,000 people died in car accidents in the U.S., roughly three times the number of gun homicides. Self-driving cars will, in theory, bring down the rate of fatal accidents. But if an accident is unavoidable, what should a self-driving car do? What if it has to choose between the safety of one passenger and the safety of many pedestrians? What's the ethical decision there?

It turns out that the humans building and operating these cars aren’t sure, and that could mean self-driving cars never really get on the road.

The Dilemma Of Self-Driving Cars

A team at MIT recently examined the ethics of self-driving cars and found that there's a very human problem with them. In the abstract, most people happily agree that, in a situation where a crash is inevitable, the car should put the lives of the many ahead of the few. If a self-driving car's brakes fail and it has to choose between slamming into a crowd of 10 pedestrians and killing its one passenger, 76 percent of people endorse the car making a simple utilitarian decision.
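
To make that rule concrete, here's a minimal sketch in Python of what a purely utilitarian crash decision could look like. The scenario, names, and casualty estimates are hypothetical illustrations for this article, not code from Google, Tesla, or any real self-driving system.

```python
# Hypothetical illustration only: a purely utilitarian crash-decision rule.
from dataclasses import dataclass

@dataclass
class Maneuver:
    """One possible action once a crash can no longer be avoided."""
    name: str
    expected_deaths: int  # rough casualty estimate for this maneuver

def utilitarian_choice(options):
    """Pick the maneuver with the fewest expected deaths, with no regard
    for whether the victims are passengers or pedestrians."""
    return min(options, key=lambda m: m.expected_deaths)

# The study's scenario: the brakes fail and only two options remain.
choice = utilitarian_choice([
    Maneuver("continue straight into the crowd", expected_deaths=10),
    Maneuver("swerve and sacrifice the passenger", expected_deaths=1),
])
print(choice.name)  # -> swerve and sacrifice the passenger
```

The rule itself is trivial to write down; the study's point is that the hard part is whether anyone would buy a car that runs it.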

All well and good, but when the team asked a similar group of respondents whether they'd buy a car that would put the safety of others over the safety of the people inside it, the answers flipped. Consumers flat-out said they'd never buy a car that might kill them. What this says about humanity is an exercise best left to the reader. If it makes you feel better, most people in the study said they'd gladly take one for the team if they were behind the wheel, but they wanted a car that would protect their families. That cognitive dissonance creates a pretty huge problem.

Who Dies?

Google, Tesla, and the other companies building self-driving cars have so far dodged this problem by pointing out that their cars are safer, smarter drivers than most humans. That’s completely true, as far as it goes, and will be more true when every car on the road is self-driving. But no algorithm can compensate for the unexpected. Mechanical failure, misread data, human error, or even deliberate sabotage means that sooner or later, a self-driving car is going to be stuck with the question of whether to plow through a group of pedestrians or go off a bridge.

Right now, the answer for most companies is to make a human deal with it. Google's self-driving car, for example, requires a human behind the wheel who can immediately take over when the car can't figure out a situation. But that relies on quick human judgment, and the first crash in which Google's self-driving car was found to be at fault happened because neither the computer nor the human realized, until it was too late, that a bus wasn't going to stop.

This is more than just an ethical dilemma. Legally speaking, nobody is entirely sure who'd be at fault if a self-driving car got into an accident. Is it a product liability problem? Is the driver at fault? Is it the fault of whoever was injured? The consequences reach well beyond insurance payouts and court battles: not knowing where you'd stand legally if you got into a crash could make people wary of buying the cars in the first place. In fact, the same study indicates that some degree of buyer wariness already exists.

It seems like self-driving cars are the future. If nothing else, our cars will park for us and use accident avoidance systems to protect us. But a future where we don’t need to touch the wheel might never come unless we make some hard ethical decisions first.

(Via Science)
