The moral dilemmas of self-driving cars: how we “teach” ethics to this technology

Fully autonomous cars could improve road safety, accessibility to transport, and energy efficiency. However, there are still many challenges to be addressed before vehicles in which human control is negligible or non-existent can spread on a large scale. One of these is ensuring that their behavior meets society's expectations in terms of safety and legality. Today, partially self-driving cars are on the market and make the driving experience easier, but what about fully autonomous driving? There are no definitive solutions to this problem yet.

Research, both corporate and academic, is working in two main directions: the ethical one and the engineering one. On the one hand, it is essential to define shared ethical choices for the event of an unavoidable fatal accident; on the other, it is important to create standardized "ethical" algorithms that try as much as possible to prevent and avoid road accidents. In this article we look at the results of the largest ethics study ever conducted and at an autonomous-driving algorithm that takes road risk and the recommendations of the European Union into account.

The ethics of autonomous driving according to people: what The Moral Machine is

To understand which ethical choices to encode in self-driving cars, in 2018 MIT (Massachusetts Institute of Technology) created "The Moral Machine experiment," a huge study that gathered responses from all over the world: no fewer than 39.61 million decisions collected from 233 countries and territories. The research team designed an online platform that presents users with unavoidable-accident scenarios with two possible outcomes, both fatal, asking them to choose between swerving and continuing straight. The scenarios were designed so that users had to choose between saving the person driving or the person crossing, and decide whether to save as many lives as possible or to base the choice on the age of the people involved, regardless of how many were present.

An example of a scenario proposed by the Moral Machine. Credit: Iyad Rahwan, CC BY-SA 4.0

These dilemmas recall the "Trolley Problem," a well-known ethical question that explores whether it is morally acceptable to sacrifice one life to save others, and that makes us ask what to do in situations where any decision involves negative consequences.

For example, if an autonomous vehicle faces a sudden obstacle and has to choose between swerving, endangering pedestrians who are crossing, and continuing straight, endangering its passengers, what should the programmed choice be?

Ethics are not the same for everyone: different cultures have different priorities

The Moral Machine platform offers various ethical dilemmas where you have to choose who to save in the event of an accident: humans or animals, passengers or pedestrians, young or old, those who respect the highway code or those who don’t, men or women, few lives or many.

At the global level, this study found three main preferences: most people save human beings rather than animals, save as many lives as possible, and give priority to the young. However, preferences vary greatly depending on culture. The countries involved in the study can be grouped into three macro-areas (Western, Eastern, and Southern), each with different moral priorities. For example, the Eastern area, characterized by countries that give more importance to elderly people, is less inclined to save the young. The Southern and Western areas, made up of more individualistic countries where the uniqueness of each person counts, show a stronger preference for saving the greater number of lives. Only a few choices, such as saving those who respect the traffic rules, are shared similarly by everyone. These cultural differences make it difficult to define a single set of ethical rules.

From our driving experience, however, we know that not all accidents are fatal and that, indeed, most of our interactions with other vehicles do not lead to any accident at all. And this is where the engineering side of the question comes in: if the ethical part focuses on how to manage the unavoidable accident, the engineering part focuses on how to avoid it.

“Ethical” algorithms for reducing the risk of car accidents

An example of an ethical algorithm for trajectory planning in autonomous vehicles is the one developed at the Technical University of Munich. In line with the ethical recommendations of the European Union, the system defines a maximum level of acceptable risk, aims to minimize overall risk, and ensures fair treatment of all road users.
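
To make these principles concrete, here is a minimal sketch in Python of how a risk value could be computed and checked against a maximum acceptable threshold. Everything in it is an illustrative assumption (the names, the `MAX_ACCEPTABLE_RISK` value, the example numbers), not the actual implementation of the Munich system.

```python
MAX_ACCEPTABLE_RISK = 0.05  # illustrative threshold, not a value from the EU guidelines

def risk(collision_probability: float, expected_harm: float) -> float:
    """Risk for one road user: probability of collision x potential harm."""
    return collision_probability * expected_harm

def is_acceptable(risks_per_user: list[float]) -> bool:
    """Fair treatment: the threshold applies to every road user
    individually, not only to the average or total risk."""
    return all(r <= MAX_ACCEPTABLE_RISK for r in risks_per_user)

# Example: per-user risks for a pedestrian, a cyclist and the passengers.
print(is_acceptable([risk(0.01, 0.9), risk(0.02, 0.4), risk(0.001, 0.8)]))  # True
```

Applying the threshold to each road user separately, rather than only to the total, is one simple way to read the "fair treatment" requirement: no single person's risk can be traded away to lower someone else's.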

Risk management

From a practical point of view, at every moment while the vehicle is moving, the algorithm plans its possible trajectories, generating different options and assigning each road user a specific risk value. This risk value, calculated as the product of the probability of collision and the potential resulting harm, is essential for choosing the trajectory that guarantees the greatest possible safety. The generated trajectories are then ranked by their total risk level, and those that exceed the maximum acceptable risk threshold are discarded. For the best trajectories, the algorithm evaluates safety, comfort, and travel time in order to select and execute the optimal one. At that point the process starts again from the first step, and a new trajectory is planned for the next interval. In this way, the car always follows the safest possible trajectory.
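
The loop just described can be summarized in a self-contained Python sketch. This is an illustration under stated assumptions, not the published implementation: the trajectory generator is a random placeholder, and the threshold and weights (`MAX_TOTAL_RISK`, `W_RISK`, `W_COMFORT`, `W_TIME`) are invented values.

```python
import random
from dataclasses import dataclass

MAX_TOTAL_RISK = 0.05                      # illustrative acceptability threshold
W_RISK, W_COMFORT, W_TIME = 0.6, 0.2, 0.2  # illustrative trade-off weights

@dataclass
class Trajectory:
    comfort_cost: float                   # e.g. derived from accelerations and jerk
    time_cost: float                      # e.g. normalized travel time
    exposures: list[tuple[float, float]]  # per road user: (collision probability, harm)

def generate_trajectories(n: int = 20) -> list[Trajectory]:
    """Placeholder: a real planner samples kinematically feasible paths."""
    return [Trajectory(comfort_cost=random.random(),
                       time_cost=random.random(),
                       exposures=[(random.random() * 0.05, random.random())
                                  for _ in range(3)])
            for _ in range(n)]

def total_risk(traj: Trajectory) -> float:
    """Sum over road users of collision probability x potential harm."""
    return sum(p * harm for p, harm in traj.exposures)

def plan_step() -> Trajectory | None:
    candidates = generate_trajectories()
    # 1. Discard trajectories above the maximum acceptable risk.
    feasible = [t for t in candidates if total_risk(t) <= MAX_TOTAL_RISK]
    if not feasible:
        return None  # assumption: a separate emergency maneuver takes over
    # 2. Among acceptable options, trade off risk, comfort and travel time.
    return min(feasible, key=lambda t: (W_RISK * total_risk(t)
                                        + W_COMFORT * t.comfort_cost
                                        + W_TIME * t.time_cost))

# One planning interval; in a vehicle this repeats with fresh sensor data.
print(plan_step())
```

Note the two distinct stages: a hard constraint (the risk threshold) filters out unacceptable trajectories first, and only then does a soft cost function choose among the survivors.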

Given all these considerations, it should be clarified that fully autonomous vehicles, i.e. level 5, are not on the market, but they do exist in some parts of the globe in the form of robotaxis. These are vehicles used experimentally in specific cities of the United States, such as San Francisco and Phoenix.

Clearly, the road to the commercialization of fully autonomous vehicles is still long, but research will continue to explore how to obtain vehicles that are not only safe but also capable of making ethical choices.