The fields of Philosophy and Software Engineering are not known for being interlinked. Sean Mooney investigates why this is changing with the advent of self-driving cars.
The vehicle manufacturer Volvo is aiming to bring an end to traffic fatalities by the year 2020, and the optimism of this statement is typical of the industry. With 1.2 million people across the world dying on the roads annually, it is a bold ambition, but such is the promise of self-driving cars. The technology will put an end to fatigued, angry, and careless drivers. Computers have already surpassed us at calculation and chess; driving will be no different.
“The vehicle manufacturer Volvo is aiming to bring an end to traffic fatalities by the year 2020.”
In a world where a substantial portion of the cars on the road are self-driving, how we program these vehicles will be of huge importance. The code beneath the hood will determine the car’s actions, and such decisions cannot be left up to software engineers alone. The values we want encoded should be debated by society as a whole, as the technology will greatly impact our lives. All new technology comes with ethical concerns around privacy and the impact on employment, but the stakes are higher for autonomous vehicles: the decisions written into the software will cost lives on the roads.
“All new technology comes with ethical concerns around privacy and the impact on employment, but the stakes are higher for autonomous vehicles.”
Once consigned to the depths of humdrum philosophy journals, the trolley problem is fast becoming a pertinent issue thanks to self-driving cars. The dilemma centres on how a car should behave when confronted with options in an inevitable collision. In the simple case, consider a pedestrian stepping out in front of an oncoming vehicle. Whether the car should be programmed to swerve into a wall (killing the driver) or keep going (killing the pedestrian) is a matter of debate.
The permutations of the trolley problem are never-ending: if there are passengers in the car, whose life should the car prioritise – the driver, the passengers, or the pedestrians? We might want the car to protect the passerby, since the driver and passengers assume some level of risk by getting in the car, but injuring multiple passengers to spare a single pedestrian does not seem desirable either.
The utilitarian approach aims to minimise the number of total fatalities, regardless of their role in the collision. Wanting cars to treat all people equally seems reasonable, but the manufacturer would surely want to maximise the safety of the driver. After all, who is going to buy a car which does not put their safety first? From the start, there will have to be some bias towards saving the driver.
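The utilitarian policy described above, and the manufacturer's temptation to tilt it towards the driver, can be made concrete in a few lines of code. The following is a purely illustrative sketch: the function name, the outcome format, and the weights are all hypothetical, not taken from any real vehicle's software.

```python
# Hypothetical sketch: a utilitarian collision policy with an optional
# driver-bias weight. All names and numbers are illustrative only.

def choose_action(actions, driver_weight=1.0):
    """Pick the action with the lowest weighted expected fatalities.

    `actions` maps an action name to expected fatalities by role,
    e.g. {"swerve": {"driver": 1, "pedestrians": 0}, ...}.
    A driver_weight above 1 biases the car towards its occupant.
    """
    def cost(outcome):
        return sum(
            count * (driver_weight if role == "driver" else 1.0)
            for role, count in outcome.items()
        )
    return min(actions, key=lambda a: cost(actions[a]))

scenario = {
    "swerve":   {"driver": 1, "pedestrians": 0},
    "continue": {"driver": 0, "pedestrians": 1},
}

# Pure utilitarianism: both actions cost one life, so the choice is an
# arbitrary tie-break. With any driver bias, the pedestrian loses.
choose_action(scenario, driver_weight=2.0)  # -> "continue"
```

The uncomfortable point is visible in the code itself: a single constant, tucked away in a default argument, decides whose life counts for more.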
A more nuanced approach could weigh factors that touch on sensitive social categories such as age, weight, race, and gender, but this is dangerous territory. If the goal is to minimise total fatalities over long timescales, it could be argued that a doctor should be spared in any collision, whereas a convicted murderer should not.
The car could instead be programmed to take no corrective action at all. By never intervening, it would never deliberately cause a death, even one that saves others, though this is overly simplistic. Alternatively, the car could seek to protect the party that was not at fault; there is logic to this, as the consequences would then fall on the person who breaks the law. Yet the edge cases remain just as ambiguous.
The issue boils down to a tension: on the one hand, human lives are immeasurably valuable and each encounter would ideally be dealt with in a unique way; on the other, the car must be programmed with a fixed set of rules to govern its actions. Perhaps some element of randomness in the system is required.
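One way to picture the randomness idea is a rule that refuses to encode a fixed preference between morally equivalent outcomes: when two actions carry near-equal expected harm, the car draws one at random. The sketch below is hypothetical; the function, the cost numbers, and the equivalence threshold are inventions for illustration.

```python
# Hypothetical sketch: treat actions with near-equal expected harm as
# morally equivalent and choose between them at random, rather than
# always sacrificing the same party. Illustrative only.
import random

def choose_with_randomness(costs, tolerance=0.1, rng=random):
    """`costs` maps actions to expected harm. Actions within
    `tolerance` of the best option are treated as equivalent,
    and one of them is drawn at random."""
    best = min(costs.values())
    candidates = [a for a, c in costs.items() if c <= best + tolerance]
    return rng.choice(candidates)

# A near-tie between one life and another: either action may be chosen,
# so no single person is condemned in advance by the code.
choose_with_randomness({"swerve": 1.0, "continue": 1.05})
```

Whether a lottery over lives is more or less palatable than a fixed rule is, of course, exactly the kind of question the article argues society must answer.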
Following any accident will be the question of responsibility. It is tough to hold the owner of the car fully accountable given that they do not understand how it operates. Some blame should sit with the manufacturer, but manufacturers clearly state the limitations of the vehicle’s artificial intelligence. It would seem unjustified to convict a software engineer of manslaughter for having a bug in their software, even if it did lead to a death. However, this would not stop the loved ones of the deceased from identifying the particular line of code that caused the fatal collision.
“It would seem unjustified to convict a software engineer of manslaughter for having a bug in their software, even if it did lead to a death.”
Perhaps the correct course of action is to apply blame to no one, but in a legal system centred on guilt and intention, holding nobody accountable feels like an injustice.
Autonomous vehicle technology sits at the intersection between abstract philosophy and everyday life. Our intuitions will undoubtedly be tested as the pragmatic aspects become more salient. Although road deaths will drop considerably, the margin for error is razor-thin. Bugs in the software cost lives on the street and the most ethically sanguine course of action will mean little to those who are killed.
“Autonomous vehicle technology sits at the intersection between abstract philosophy and everyday life.”
Whether society is ready or not, the technology is coming, and the time to seek clarity on these ethical issues is now. As soon as we program the car to prioritise the life of the driver over swerving into a tree, we are admitting that there are right and wrong answers to questions of morality. Eventually we will be encoding our entire ethical framework into technology.