Why We Forgive Humans More Readily Than Machines

Mon, 08 Nov 2021 04:30:00 GMT
Scientific American - Technology

When things go wrong, flexible moral intuitions cause us to judge computers more severely


Over the last few years, together with my team, I ran dozens of experiments in which thousands of Americans reacted to actions performed by humans and machines.

These comparisons allowed us to go beyond the way humans judge AI and focus instead on how our judgment of machines compares with our judgment of humans.

Even the first data points showed that people did not react to humans and machines equally.

People were less forgiving of machines than of humans in accidental scenarios, especially when the accidents resulted in physical harm.

We decided to build a statistical model explaining how people judged humans and machines.

The model showed that people judge machines mainly by a scenario's outcome, while they judge humans by both outcome and intention. This explains why machines are judged more harshly in accidental scenarios: people take a consequentialist approach to judging machines, in which intent is irrelevant, but not to humans.

This simple model led us to an interesting conclusion: an empirical principle governing how people judge machines differently from humans.
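The asymmetry described above can be captured in a toy scoring function. The sketch below is purely illustrative, not the authors' actual statistical model: the function names, weights, and the assumption that perceived harm and intent are scored on a 0-to-1 scale are all hypothetical, chosen only to make the consequentialist-versus-intent contrast concrete.

```python
def judged_wrongness(harm: float, intent: float, agent: str) -> float:
    """Return a stylized 0-1 wrongness score for a scenario.

    harm   -- perceived harm of the outcome, in [0, 1]  (hypothetical scale)
    intent -- perceived intention to cause harm, in [0, 1]
    agent  -- "machine" or "human"
    """
    if agent == "machine":
        # Consequentialist judgment: only the outcome matters.
        return harm
    # Judgment of humans: intent carries weight, so accidental
    # harm (high harm, low intent) is partially excused.
    # The 0.4/0.6 weights are illustrative, not estimated from data.
    return 0.4 * harm + 0.6 * intent

# An accidental scenario: severe harm, no intent to cause it.
machine_blame = judged_wrongness(harm=0.9, intent=0.0, agent="machine")
human_blame = judged_wrongness(harm=0.9, intent=0.0, agent="human")
print(machine_blame > human_blame)  # the machine draws more blame
```

Under any weighting of this form, an accident with high harm and zero intent yields a higher wrongness score for the machine than for the human, reproducing the pattern the experiments found.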

This moral flip-flopping goes beyond the way we judge humans and machines.

Compared with the consequentialist morality with which we judge machines, we apply a more Kantian, or deontological, morality to our fellow humans.