Notes on the ethical dilemma of self-driving cars

I don’t understand what the big deal is about the ethical dilemma of self-driving cars. A Google search turns up over 40,000 results on how the trolley-car thought experiment applies to self-driving cars. Thousands of articles have been written about it in the span of just a few years. Apparently this is a huge problem, never mind the actual technical difficulties of making a self-driving car in the first place, let alone one that is ‘ethical’.

Most accidents involve driver error, often due to carelessness, a problem which self-driving cars are supposed to fix (a typical example: a car is driving through an intersection when another car, whose driver may be distracted, broadsides it, the first driver oblivious until impact). The only way the trolley experiment could be applicable is if the brakes fail while the car is somehow veering toward a sidewalk or crosswalk, and the car must then choose between, say, hitting one pedestrian (such as a young person who has many decades to live) or three elderly people (who have less than a decade to live each), with no other option available (such as veering off in a direction where there are no human obstructions). It also applies to accidents in which the self-driving car must ‘choose’ between saving its own passengers and the passengers of the other car in the collision. The whole scenario is implausible, yet it has gotten a ton of media attention.

When cars hit pedestrians, it’s due to one of three reasons: the pedestrian not paying attention or crossing on a red light, driver error, or the car being out of control due to mechanical failure or adverse road conditions, where the crash is so sudden that the driver cannot possibly contemplate the moral considerations of his or her actions. Self-driving cars can help with the first two, which leaves only the third possibility open (and again, this assumes the car has no option but to hit a human target instead of a tree, a light post, or another car).

Regarding pedestrians, in the exceedingly unlikely event that a self-driving car is forced to make such a choice, one way to resolve the problem would be simply to program the car to make the choice at random. Regarding car-versus-car collisions, I cannot imagine a self-driving car that puts consequentialist ethics ahead of the safety of its own passengers being a commercial success.
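To make the random-choice idea concrete, here is a toy sketch (all names and the list of options are hypothetical; a real motion planner would look nothing like this). The point is only that a uniform random pick deliberately refuses to rank the human targets:

```python
import random

def choose_collision_option(options, rng=random):
    """Pick uniformly at random among equally unavoidable trajectories.

    `options` is a list of labels for the possible trajectories; this toy
    function deliberately ignores who or what each trajectory involves,
    which is exactly the 'choose at random' policy described above.
    """
    if not options:
        raise ValueError("no options to choose from")
    return rng.choice(options)

# The trolley-style dilemma from the text, as hypothetical labels:
dilemma = ["swerve toward one pedestrian", "continue toward three elderly people"]
picked = choose_collision_option(dilemma)
print(picked)  # one of the two labels, chosen at random
```

The design choice worth noting is that randomness here is a way of encoding moral neutrality: the manufacturer never has to write down a rule that values one life over another.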