Ethics and self-driving cars

Courtesy of the British Computer Society, I attended a free lecture by Ethical Roboticist Professor Alan Winfield earlier this week. Here are a few thoughts I jotted down during his excellent talk.

Winfield started out by discussing a couple of recent cases where the beta-testing of driverless cars ended in tragedy. In the first case, a Tesla customer was using his new driverless car. He was supposed to remain alert and aware, ready to take over from the AI controlling the vehicle at a moment’s notice. Unfortunately he wasn’t paying attention, and when for some reason the AI failed to ‘see’ a lorry trailer blocking the way ahead, it drove the car straight into it, killing him. The other case involved a member of the public who was walking a bike across an intersection and was killed when, again, the AI controlling an Uber vehicle failed to recognise that she was there.

There is an interesting ethical conundrum to unpick here, thrown up by the requirement that a human ‘back-up driver’ maintain the necessary level of attention to take over from the AI if needed. In both of these cases, there is evidence suggesting that the human drivers were not paying sufficient attention. But even if they had been correctly seated behind the wheel and not distracted by anything, it is very difficult to maintain the required level of concentration and alertness when you are doing nothing, as Winfield discussed.

Kant asserted the maxim that ‘ought implies can’: it is not coherent to impose a moral requirement on a person to do something she is not capable of doing. Now, I’m not saying it’s impossible to maintain concentration in these conditions, but it is quite difficult. I think this is something that needs careful consideration in terms of the technology, its usability, and the moral expectation placed on the user, given the limitations of our fallible human brains.

Which leads on to the next problem: what if the human back-up driver doesn’t take over control of the vehicle, not because they aren’t paying attention, but because they have chosen to trust the AI? Part of the appeal of driverless cars is harnessing AI technology which, under certain circumstances, can reliably outperform the average human. If, for the majority of the time, the car is a better driver than a human, why would a human choose to second-guess it? The car is supposed to be able to process information and react to events faster than a person with a meat computer in their head. Furthermore, it may not be obvious to the human that the AI is malfunctioning. The two cases described above are fairly clear cut; you wouldn’t notice that your vehicle was about to plough into a pedestrian or a lorry trailer and think, “well, I won’t intervene because the car knows what it’s doing and I don’t want to interfere”. But as the technology gets more sophisticated and the obvious problems get ironed out, we’ll be left with the more complex and subtle edge cases. Might future vehicular manslaughter cases hang on whether there was a reasonable expectation that the human back-up driver should have intervened?

As is so often the case, it seems our legal and moral frameworks have not yet caught up with technological developments. Winfield described himself at the start of his talk as a Professional Worrier: ethicists lay the groundwork for the standards and regulations that enable public trust. This isn’t about Luddite-esque hand-wringing, but it is critical that these issues are discussed in a structured way, to keep people safe and minimise harm.


Comments

  1. Interesting POV, but the same goes for the non-adaptive cruise control loads of vehicles have these days: the driver still needs to act, and there are no controls in place to check that the driver has their hands on the wheel.

    So in a way the discussion had to be happening even before AI was introduced, IMHO.

    What I found a bit misleading is that you state that the two described incidents were recent.

    Please add the current published results on AI-assisted miles driven versus non-assisted.

    Greetings from Holland.

    1. Hi,
      The events were in 2016 and 2018, which I think of as recent. I was only quoting from the lecture I attended as a way to discuss the philosophical principles behind the ethical considerations. I’ve linked to the blog of the professor giving the talk if you want a more forensic discussion of the investigations.
      Thanks for stopping by.
