Keeping a Soul in the Driver's Seat

[Image: railway.jpg]

I can't wait for driverless cars. Ten years is the estimate. Combined with car sharing, they will revolutionise our cities and make them much more efficient and livable. They should be an unmitigated good for society. Yet they still come with ethical challenges, usually categorised as trolley problems.

Wired just published an article called "Here's a Terrible Idea: Robot Cars With Adjustable Ethics Settings", outlining the ethical issues involved in replacing human drivers with robots.

In freak accidents, the computer would have to make decisions such as whether to kill one motorcyclist without a helmet or five pedestrians.

The writer raises many such dystopian choices: children vs. elderly, us vs. others, rich vs. poor, etc. He rightly sees a liability for anyone having to program those decisions. In his opinion, any attempt by the car manufacturer to distance itself from lawsuits by offering adjustable ethical settings to the car's owner would not reduce the manufacturer's liability, so this remains an obstacle to rolling out driverless cars.

The car company has another option, one the writer misses: progressively absorb the insurance business.

First off, it's clear that in these freak legal confrontations, any extra level of indirection and legal entanglement helps shield the car manufacturer from legal responsibility towards private individuals. Secondly, the writer does not give enough credit to the creativity of engineers, lawyers, and business types.

Why wouldn't they be able to introduce one further level of indirection? The manufacturer could build a "car without a soul".

The car could offer full access, through proprietary APIs, to its raw or lightly processed data, but require an ethical core library to be linked in before it would start. This ethical core would only be called upon when an imminent collision is detected, and asked to answer the really tough questions (or it could run in a loop, validating every driving input). Who would take on the liability of writing such a core? Insurance companies seem like the natural candidates. In fact, this is a very natural extension of their business: litigating over the choices they have coldly programmed in, rather than over the mistakes made by their clients. It would also make sense to decentralise this ethical core geographically, since driving customs are bound to vary from country to country (think of these comparatively safe Indian drivers or this Russian ninja).
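To make the split of responsibilities concrete, here is a minimal sketch in Python of what such an arrangement could look like. Every name in it is hypothetical, invented purely for illustration; it is not any manufacturer's actual API, just one way the "car without a soul" might expose the decision point to an externally supplied core.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Sequence


@dataclass
class CollisionOption:
    """One manoeuvre the car can still execute before an unavoidable impact."""
    label: str                  # e.g. "brake hard", "swerve onto the shoulder"
    expected_casualties: float  # estimate derived from the car's sensor data
    occupant_risk: float        # probability of serious harm to the car's occupants


class EthicalCore(ABC):
    """Interface the car refuses to start without.

    The manufacturer ships the sensors and the driving stack; whoever
    accepts the liability (here, an insurer) ships an implementation.
    """

    @abstractmethod
    def choose(self, options: Sequence[CollisionOption]) -> CollisionOption:
        """Called only when a collision is predicted; must commit to one option."""

    def validate(self, steering: float, throttle: float) -> bool:
        """Optional check, run in a loop over ordinary driving inputs."""
        return True


class ActuarialCore(EthicalCore):
    """A hypothetical insurer-supplied core, 'ethical' in name only:
    it simply picks whichever option minimises expected casualties."""

    def choose(self, options: Sequence[CollisionOption]) -> CollisionOption:
        return min(options, key=lambda o: o.expected_casualties)
```

The design point is the indirection itself: the manufacturer defines the interface and the moment of the call, while the contents of `choose` are someone else's problem and someone else's liability.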

The question is whether insurance companies would be willing to go along. They would certainly feel pressure to adapt to a world of driverless cars, but the brilliant move for the car company would be to promise increased efficiency and reach to the whole insurance industry (more clients), and to act as a middleman. By encouraging collaboration between the insurance companies, ostensibly to help them save money on R&D, standardise good practice, exchange regulatory tips, and so on, the car company would in effect crowdsource the insurance industry into engineering its own obsolescence. This would allow the car company to eventually provide the full product, once the insurance companies have shouldered all the R&D costs of fine-tuning the ethical core. Note that this core would be ethical in name only, as it would have been fine-tuned exclusively with cost efficiency in mind.

This assumes a car company that dominates the car industry enough to strong-arm the insurance companies.

(For other futuristic and "fun" questions about the transformation brought about by driverless cars, see the amazingly cold-blooded "If driverless cars save lives, where will we get organs?")

(Image credit: Wikipedia)