

Researchers quantify basic rules of ethics and morality, plan to copy them into smart cars, even AI

Be good, drive good.

Alexandru Micu
July 5, 2017 @ 11:53 am


As self-driving cars roar (silently, on electric engines) towards wide-scale use, one team is trying to answer a very difficult question: when accidents inevitably happen, where should the computer look for morality and ethics?

Ethical banana.

Image credits: We Are Neo / Flickr.

Car crashes are a tragic but, so far, unavoidable side effect of modern transportation. We hope that autonomous cars, with their much faster reaction times, virtually endless attention spans, and boundless potential for connectivity, will dramatically reduce the incidence of such events. These systems, however, also come pre-packed with a fresh can of worms pertaining to morality and ethics.

The short of it is this: while we do have laws in place to assign responsibility after a crash, we accept that as it unfolds, people may not make the ‘right’ choice. Under the shock of the event, there isn’t enough time to ponder the best course of action, and a driver’s reaction will be a mix of instinctual response and whatever seems, with limited information, to limit the risks for those involved. In other words, we take context into account when judging their actions, because morality is highly dependent on context.

But computers follow programs, and these aren’t compiled during car crashes. A program is written months or years in advance in a lab, and in certain situations it will sentence someone to injury or death to save somebody else. And therein lies the moral conundrum: how do you go about it? Do you ensure the passengers survive, and everyone else be damned? Do you make sure there’s as little damage as possible overall, even if that means sacrificing the passengers for the greater good? It would be hard to market the latter, and just as hard to justify the former.

When dealing with something as tragic as car crashes, likely the only solution we’d all be happy with is for there to be none at all, which sadly doesn’t seem possible as of now. The best possible course, then, seems to be making these vehicles act like humans, or at least as humans would expect them to act: encoding human morality and ethics into 1s and 0s and loading them onto a chip.

Which is exactly what a team of researchers at the Institute of Cognitive Science at the University of Osnabrück in Germany is doing.

Quantifying what’s ‘right’

The team has a heavy background in cognitive neuroscience and has put that experience to work in teaching machines how humans do morality. They had participants take a simulated drive through a typical suburban setting on a foggy day in immersive virtual reality, then resolve unavoidable moral dilemmas involving inanimate objects, animals, and humans, to see which they decided to spare, and why.

By pooling the results of all participants, the team created statistical models outlining a framework of rules on which moral and ethical decision-making rely. Underpinning it all, the team says, seems to be a single value-of-life that drivers facing an unavoidable traffic collision assign to every human, animal, or inanimate object involved in the event. How each participant made their choice could be accurately explained and modeled starting from this set of values.

That last bit is the most exciting finding — the existence of this set of values means that what we think of as the ‘right’ choice isn’t dependent only on context, but stems from quantifiable values. And what algorithms do very well is crunch values.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” said Leon R. Sütfeld, a PhD student and assistant researcher at the university, and first author of the paper.
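
To make this concrete, here is a minimal sketch of what such a value-of-life decision rule could look like, written in Python. The weights, names, and scenario below are illustrative placeholders of my own, not the study’s fitted parameters:

# Hypothetical value-of-life weights, of the kind the study fits from
# participants' choices in VR. These numbers are made up for illustration.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.3,
    "trash_can": 0.01,
}

def total_loss(obstacles):
    """Total value destroyed if the car hits everything in this lane."""
    return sum(VALUE_OF_LIFE[o] for o in obstacles)

def choose_lane(lane_a, lane_b):
    """In an unavoidable collision, steer into whichever lane destroys less value."""
    return lane_a if total_loss(lane_a) <= total_loss(lane_b) else lane_b

# A dog in one lane, a trash can in the other: the model steers into the can.
print(choose_lane(["dog"], ["trash_can"]))  # -> ['trash_can']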

The findings offer a different way to address ethics concerns regarding self-driving cars and their behavior in life-threatening situations. Up to now, we’ve assumed that morality is somehow innately human and that it can’t be copied, as reflected in efforts to make these vehicles conform to externally set ethical demands, such as the German Federal Ministry of Transport and Digital Infrastructure’s (BMVI) 20 ethical principles.

Some of the key points of the report are as follows:

  • Automated and connected transportation (driving) is ethically required when these systems cause fewer accidents than human drivers.
  • Damage to property must be allowed before injury to persons: in situations of danger, the protection of human life takes highest priority.
  • In the event of unavoidable accidents, all classification of people based on their personal characteristics (age, gender, physical or mental condition) is prohibited.
  • In all driving situations, it must be clearly defined and recognizable who is responsible for the task of driving – the human or the computer. Who is driving must be documented and recorded (for purposes of potential questions of liability).
  • The driver must fundamentally be able to determine the sharing and use of his driving data (data sovereignty).


Another point the report dwells on heavily is how the data recorded by the car can be used, and how to balance the privacy concerns of drivers against the demands of traffic safety and the economic interest in users’ data. While this data needs to be recorded to verify that everything went according to the 20 ethical principles, the BMVI also recognizes that there are huge commercial and state-security interests in it. Practices such as those “currently prevalent” with social media should especially be counteracted early on, the BMVI believes.

At first glance, rules such as the ones the BMVI set down seem quite reasonable. Of course you’d rather have a car damage a bit of property, or even risk the life of a pet, than that of a person. It’s common sense, right? And if that’s the case, why would you need a car that ‘understands’ ethics when you can simply have one that ‘knows’ ethics? Well, after a few e-mails back and forth with Mr. Sütfeld, I came to see that ethics, much like quantum physics, sometimes doesn’t play by the books.

“Some [of the] categorical rules [set out in the report] can sometimes be quite unreasonable in reality, if interpreted strictly,” Mr Sütfeld told ZME Science. “For example, it says that a human’s well-being is always more important than an animal’s well-being.”

To which I wanted to say, “well, obviously.” But now consider the following situation: a dog runs out in front of a human-driven car in such a way that it’s an absolute certainty it will be hit and killed if the driver doesn’t swerve onto the opposite lane. Swerving would almost certainly save the dog, but it carries a very small risk, say one in twenty, of a minor injury to the driver, something along the lines of a sprained ankle.

“The categorical rule [i.e. human life is more important] could be interpreted such that you always have to run over the dog. If situations like this are repeated, over time 20 dogs will be killed for each prevented spraining of an ankle. For most people, this will sound quite unreasonable.”

“To make reasonable decisions in situations where the probabilities are involved, we thus need some system that can act in nuanced ways and adjust its judgement according to the probabilities at hand. Strictly interpreted categorical rules can often not fulfil the aspect of reasonableness.”
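
As a back-of-the-envelope illustration of his point, here is the arithmetic of that dilemma in Python. All probabilities and harm weights are made up for the example, not taken from the study:

# Swerving saves the dog but sprains the driver's ankle one time in twenty.
P_ANKLE_IF_SWERVE = 1 / 20
HARM_SPRAINED_ANKLE = 0.02  # hypothetical harm weight for a minor injury
HARM_DEAD_DOG = 0.30        # hypothetical value-of-life weight for the dog

# Categorical rule ("human well-being always wins"): never swerve,
# so the dog dies in every such incident.
harm_categorical = HARM_DEAD_DOG

# Probability-weighted value model: swerve and accept the small risk.
harm_probabilistic = P_ANKLE_IF_SWERVE * HARM_SPRAINED_ANKLE

print(harm_categorical)    # 0.300 expected harm: 20 dead dogs per prevented sprain
print(harm_probabilistic)  # 0.001 expected harm: the nuanced model swerves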

Ethicar

Miniature car.

Image via Pixabay.

So simply following the Ethics Handbook 101 to the letter might lead to some very disappointing results because, again, morality is also dependent on context. The team’s findings could be the foundation for ensuring ethical self-driving behavior, by giving the cars the flexibility to interpret the rules correctly in each situation. And, as a bonus, if a car’s computers understand what it means to act morally and make ethical choices, a large part of that data may not need to be recorded in the first place, nipping a whole new problem in the bud.

“We see this as the starting point for more methodological research that will show how to best assess and model human ethics for use in self-driving cars,” Mr Sütfeld added for ZME Science.

Overall, imbuing computers with morality may have heavy ramifications for how we think about and interact with autonomous vehicles and other machines, including AIs and self-aware robots. However, just because we now know it may be possible doesn’t mean the issue is settled. Far from it.

“We need to ask whether autonomous systems should adopt moral judgements,” says Prof. Gordon Pipa, senior author of the study. “If yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”

As an example, he cites the new principles set out by the BMVI. Under this framework, a child who runs out on a busy road and causes a crash would be classified as being significantly involved in creating the risk, and less qualified to be saved in comparison with a person standing on the sidewalk who wasn’t involved in any way in creating the incident.

It’s an impossible decision for a human driver. The bystander was innocent, and possibly more likely to evade or survive the crash, but the child stands to lose more and is more likely to die. Any reaction a human driver took would be both justifiable, in that it wasn’t premeditated, and blamable, in that maybe a better choice could have been made. But a pre-programmed machine would be expected to both know exactly what it was doing and make the right choice, every time.

I also asked Mr Sütfeld whether reaching a consensus on what constitutes ethical behavior in such a car is actually possible, and if so, how we could go about incorporating each country’s views on morality and ethics (their “mean ethical values”, as I put it) into the team’s results.

“Some ethical considerations are deeply rooted in a society and in law, so that they cannot easily be allowed to be overridden. For example, the German Constitution strictly claims that all humans have the same value, and no distinction can be made based on sex, age, or other factors. Yet most people are likely to save a child over an elderly person if no other options exist,” he told me. “In such cases, the law could (and is likely to) overrule the results of an assessment.”

“Of course, to derive a representative set of values for the model, the assessment would have to be repeated with a large and representative sample of the population. This could also be done for every region (i.e., country or larger constructs such as the EU), and be repeated every few years in order to always correctly portray the current ‘mean ethical values’ of a given society.”
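
The aggregation step he describes could, in principle, be as simple as pooling the values fitted for each participant into a regional average. The sketch below is entirely my own illustration, with made-up numbers, not the team’s pipeline:

from statistics import mean

# Hypothetical per-participant value-of-life estimates for one region,
# as might be fitted from each person's choices in the VR assessment.
regional_samples = {
    "adult": [1.00, 0.90, 1.10],
    "dog":   [0.25, 0.40, 0.30],
}

# Pool the sample into one regional value table; re-run the survey every
# few years so the table tracks the society's drifting "mean ethical values".
regional_values = {entity: mean(vals) for entity, vals in regional_samples.items()}
print(regional_values)  # {'adult': 1.0, 'dog': 0.3166...}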

So the first step towards ethical cars, it seems, is to sit down and have a talk: before anything else, we need to settle on what the “right” choice actually is.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Prof. Peter König, a senior author of the paper.

“Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans.”

But that’s something society as a whole has to establish. In the meantime, the team has worked hard to provide us with some of the tools we’ll need to put our decisions into practice.

As robots and AIs take up a larger place in our lives, computer morality might come to play a much bigger part in them as well. By helping machines better understand and relate to us, ethical AI might help alleviate some of the concerns people have about their use in the first place. I was already pressing Mr Sütfeld deep into ‘what-if’ territory, but he agrees that autonomous car ethics are likely just the beginning.

“As technology evolves there will be more domains in which machine ethics come into play. They should then be studied carefully and it’s possible that it makes sense to then use what we already know about machine ethics,” he told ZME Science.

“So in essence, yes, this may have implications for other domains, but we’ll see about that when it comes up.”

The paper “Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure” has been published in the journal Frontiers in Behavioral Neuroscience.

