How should a computer go about telling right from wrong?
According to a team of US researchers, a lot of factors come into play — but most people go through the same steps when making snap moral judgments. Based on these observations, the team has created a framework to help our AI friends tell right from wrong even in complex settings.
Lying is bad — usually
“At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral,” says Veljko Dubljević, a neuroethics researcher at North Carolina State University and lead author of the study.
“There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model — and now we have the first experimental results that offer a strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.”
So what’s so special about the ADC model? Well, the team explains that it can be used to determine what counts as moral or immoral even in tricky situations. For example, most of us would agree that lying isn’t moral. However, we’d probably (hopefully) also agree that lying to Nazis about the location of a Jewish family is solidly moral. The same action — lying — can thus take on various shades of ‘moral’ depending on the context.
We humans tend to have an innate understanding of this mechanism and assess the morality of an action based on our life experience. In order to understand the rules of the game and later impart them to our computers, the team developed the ADC model.
Boiled down, the model posits that people look to three things when assessing morality: the agent (the person who is doing something), the deed (the action itself), and the consequence (the outcome of that action). Using this approach, researchers say, one can explain why lying can be a moral action. On the flip side, the ADC model also shows that telling the truth can, in fact, be immoral (if it is “done maliciously and causes harm,” Dubljević says).
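The study itself doesn’t spell out an algorithm, but the three-factor structure is easy to illustrate. Below is a minimal Python sketch, assuming a simple additive combination of the three components; the Scenario fields, valence scale, and weights are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical illustration of the ADC structure; valences and weights
# are invented for this example, not taken from the study.
@dataclass
class Scenario:
    agent: float        # intention/character of the actor, -1 (bad) to +1 (good)
    deed: float         # the action itself, e.g. lying = -1, truth-telling = +1
    consequence: float  # the outcome, -1 (harmful) to +1 (beneficial)

def adc_judgment(s: Scenario, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine agent, deed, and consequence into one moral valence."""
    wa, wd, wc = weights
    total = wa * s.agent + wd * s.deed + wc * s.consequence
    return total / (wa + wd + wc)

# Lying to protect a family: good intention, bad deed, good outcome.
protective_lie = Scenario(agent=+1.0, deed=-1.0, consequence=+1.0)
print(adc_judgment(protective_lie))  # ~0.33, positive overall -> judged moral
```

With the good intention and good outcome outweighing the bad deed, the hypothetical score comes out positive, mirroring the “lying to protect a family” example above.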
“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” Dubljević says. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”
In order to test their model, the team pitted it against a series of scenarios. These situations were designed to be logical, realistic, and easily understood by professional philosophers and laypeople alike, the team explains. All scenarios were evaluated by a group of 141 philosophers with training in ethics prior to their use in the study.
In the first part of the trials, 528 participants from across the U.S. were asked to evaluate some of these scenarios in which the stakes were low — i.e. the possible outcomes weren’t dire. During the second part, 786 participants were asked to evaluate the high-stakes scenarios the team had developed — those that could result in severe harm, injury, or death.
When the stakes were low, the nature of the action itself was the strongest factor in determining the morality of a given situation. What mattered most in such situations, in other words, was whether a hypothetical individual was telling the truth or not — the outcome, be it good or bad, was secondary.
When the stakes were high, the outcome took center stage. It was more important, for example, to save a passenger from dying in a plane crash than the actions (be they good or bad) one took to reach this goal.
“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” Dubljević says.
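Purely as an illustration, the pattern the researchers report could be mimicked by letting the stakes shift the relative weights in the sketch above: the deed dominates in low-stakes cases, the consequence in high-stakes ones. The weight values here are arbitrary, chosen only to show the direction of the effect.

```python
# Continuing the hypothetical sketch above: stakes shift the balance
# between the deed and the consequence (weight values are arbitrary).
def stake_weights(high_stakes: bool):
    # weights are ordered (agent, deed, consequence)
    return (1.0, 1.0, 3.0) if high_stakes else (1.0, 3.0, 1.0)

# A violent rescue: good intention, bad deed, lives saved.
violent_rescue = Scenario(agent=+1.0, deed=-1.0, consequence=+1.0)
print(adc_judgment(violent_rescue, stake_weights(high_stakes=True)))   # 0.6  -> judged moral
print(adc_judgment(violent_rescue, stake_weights(high_stakes=False)))  # -0.2 -> judged immoral
```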
One of the key findings of the study was that philosophers and the general public assess morality in similar ways, suggesting that there is a common structure to moral intuition — one which we instinctively use, regardless of whether we’ve had any training in ethics. In other words, everyone makes snap moral judgments in a similar way.
“There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior,” Dubljević says. “Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I’m working on currently.”
The paper “Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment” has been published in the journal PLOS ONE.