The ethics of driverless cars

Studies estimate that at least 380 million semi- or fully autonomous vehicles will occupy roads by 2030. While driverless cars are projected to increase efficiency and reduce traffic, they also raise important ethical questions. Reporter Andrea Vasquez discussed some of these questions in a Google hangout with Shannon Vallor, professor of philosophy at Santa Clara University.



Shannon Vallor, thank you for joining us.

Happy to be here.

So we're looking at self-driving cars.

They seem to be a definite part of our future.

But there are some ethical issues involved.

One way that people look at this is through something called the trolley problem.

Can you explain what that is?


So the trolley problem is an old philosophical thought experiment, long used in courses on ethics and moral philosophy as a test of our moral intuitions.

And the way it usually went is some variation of the following.

You're driving a trolley down a track.

There are a number of people on the track ahead, perhaps trapped in a stalled vehicle, or maybe tied to the tracks by some nefarious person.

And the trolley's going to kill those people.

But the driver of the trolley has the option of diverting the trolley onto another track where, perhaps, there is only one person in the way.

Perhaps there's a worker on the tracks that will be killed.

And the question is, 'What's the right thing to do?

Should you actively cause the death of the one worker by diverting the trolley or allow, let's say, five people to be killed by the trolley on its present track?'

And the idea is it seems like both have some problems.

In the one case, you're actively causing someone to die.

In the other case, you're allowing more people to die, when you could prevent those deaths and cause only one death.

The reason this has captured people's moral imaginations in the case of driverless cars is that similar difficulties seem likely to arise for the programmers of driverless vehicles. Unlike drivers now, who have to make split-second decisions and whom we don't expect to do a lot of careful moral calculation in those scenarios, these cars are going to be able to anticipate those kinds of scenarios.

Programmers are going to have to be able to tell cars, in advance, how to handle difficult situations. For example, let's say that there is a school bus trapped in a tunnel.

And your car is hurtling towards the tunnel.

It detects the fact that there's a school bus, which is presumably full of lots of children, that it is likely to rear-end unless it veers off the road.

But let's say that there's an almost certainty that, if the car veers off the road, it's going to kill you, the occupant.

Let's say that there's a cliff that the car cannot avoid going over if it veers from the lane.

Let's assume that perhaps it'll have to crash into the tunnel wall.

So here's a question.

What should the car do?

Should it rear-end the school bus, which might cause the deaths of a number of innocent children?

Or should it sacrifice you?

Presumably one is better than many.

But on the other hand, it's your car.

You've spent the money.

You want to survive the crash.

And then there are worries about the programmers actively causing the car to put its occupant at risk, versus accepting risks that are already on the road and that the programmers themselves have not chosen.

So you can see how the trolley problem kind of gets re-created here.

Do the programmers have to anticipate every possible scenario?

Or do these cars have the potential to learn?

Many of these cars now have self-learning algorithms that can be trained to generalize from past driving experience to new situations. Just as a driver might encounter a situation on the road that they have never encountered before and, from past experience, make an educated guess about the right thing to do, we now have self-learning artificial neural networks in cars that are being trained to make the same kinds of educated guesses in new driving situations.
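That idea of generalizing from past driving experience can be sketched in a few lines of code. This is a toy illustration, not any automaker's actual system: it reduces a hypothetical scenario to two made-up features (closing speed and obstacle distance) and picks the action taken in the most similar past episode, a simple nearest-neighbor stand-in for the far more complex learned policies Vallor describes.

```python
# Toy sketch: generalizing from past driving episodes to a new situation.
# Features and action labels here are invented for illustration only.

past_scenarios = [
    # ((closing_speed m/s, obstacle_distance m), action chosen in that episode)
    ((2.0, 50.0), "maintain"),
    ((8.0, 40.0), "slow"),
    ((15.0, 20.0), "brake_hard"),
    ((20.0, 10.0), "swerve"),
]

def predict_action(speed, distance):
    """Choose the action from the most similar past episode (1-nearest neighbor)."""
    def squared_distance(episode):
        (s, d), _action = episode
        return (s - speed) ** 2 + (d - distance) ** 2
    _features, action = min(past_scenarios, key=squared_distance)
    return action

# A situation never seen before: the policy makes an "educated guess"
# by analogy to the closest past experience.
print(predict_action(14.0, 22.0))  # closest episode is (15, 20) -> "brake_hard"
```

Real systems use neural networks trained on vast amounts of driving data rather than a lookup over four episodes, but the principle is the same: the behavior in a novel situation is inferred from patterns in previously experienced ones, which is exactly why the premeditated edge-case decisions discussed above matter.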

And in the programming of these cars, prior to that self-learning, do those premeditated decisions in those scenarios become a company-by-company standard?

Does that become an industry regulation?

How do we navigate that?

That's all up in the air right now.

And that's something automakers, legislators, industry, and professional associations for computer scientists and software engineers are all talking about, because we don't know.

Right now, each self-driving technology is being developed more or less independently from the others.

And negotiations with municipalities and regulators are really happening separately.

But the conversations are beginning to come together.

And there have already been a number of efforts to get people who are interested in this technology in the same room to talk about standards because down the road, you're going to need those standards.

You're going to need a common understanding of what driverless cars can and cannot do on the road, a common understanding of what protocols will exist for cars to communicate with one another so that they can identify other driverless cars on the road and perhaps coordinate action with them.

There are going to have to be decisions about how insurance is going to work and how liability for injuries or deaths caused by driverless cars is going to be addressed.

There are going to have to be decisions made about whether people down the road will be sanctioned for choosing to drive their own cars when, at some point, that's going to put other people at greater risk.

The whole reason for this technology is that humans, by and large, are not great drivers.

We drive distracted.

We drive drunk.

We drive tired.

And people die as a result.

And so the whole goal of this technology is to remove that human risk by allowing automated systems to do the work for us.

And this isn't the only place where new tech innovations are creating ethical questions that we didn't know to ask before.

And are we learning anything from these conversations that are starting to happen about how to navigate this intersection of human ethics and artificial intelligence?


I mean, I think we are just at the beginning of those conversations.

But I'm very much encouraged by the enthusiasm, not just among university researchers and research communities but also in industry and government, and by the growing understanding that we need to have conversations about the ethics of emerging technologies.

And AI and automation are among the biggest ones.

Now, what we will learn from them, and how much that will really affect the design and implementation of artificial intelligence and similar technologies, remains to be seen.

But we're heading in a good direction.

Shannon Vallor, thanks again for joining us.

Thank you.