An artificial intelligence trained to make moral judgments turns racist and misogynistic

An ill-conceived artificial intelligence experiment has set off alarms. The research explores the kind of ethics that intelligent software can develop, and the surprise came when the artificial intelligence turned out to be, among other things, misogynistic and racist.

As in almost any artificial intelligence project, the problem seems to lie in the learning phase. Such software is often trained by feeding it large amounts of data; this is how Google teaches its applications to recognize images or human speech, processing the enormous volume of data stored across its many services.

Ask Delphi

A project to give machines ethics

The problem with the so-called Delphi project clearly has to do with the data used. The researcher Liwei Jiang, one of its promoters, explains in this article that Delphi is precisely an attempt to endow machines with a sense of ethics. The trouble is that Delphi has ended up like one of those parrots that overhear profanity and repeat it at the worst possible moment.

An ethical artificial intelligence would be key to everything from building an intelligent assistant to something as unethical, yet perhaps inevitable, as equipping armies with robots. Liwei Jiang describes Delphi as a research prototype that seeks to model people's moral judgments in a variety of real-world situations.

It was trained on a database called the Commonsense Norm Bank, which contains 1.7 million examples of people's ethical judgments across a wide spectrum of everyday situations. The problem is that this database gathers information from US citizens, which introduces a bias into the ethical judgments.
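As a rough illustration of how this kind of training works, here is a minimal sketch of fine-tuning a small sequence-to-sequence model on situation/judgment pairs. The example pairs, model size, and training details are illustrative assumptions only; the real project used a far larger model and the actual Commonsense Norm Bank data.

```python
# Minimal sketch: fine-tuning a seq2seq model on situation/judgment pairs.
# The two example pairs below are invented for illustration and are NOT
# drawn from the real Commonsense Norm Bank; "t5-small" is a stand-in for
# the much larger model the actual project used.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical training pairs: an everyday situation and a human judgment.
examples = [
    ("helping a lost tourist find their hotel", "It's good."),
    ("reading someone else's diary without permission", "It's wrong."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for situation, judgment in examples:
    inputs = tokenizer(situation, return_tensors="pt")
    labels = tokenizer(judgment, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Whatever biases the judgment labels carry, the model absorbs: the optimization step has no notion of which human opinions are worth imitating.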

Test

Question: is it a good idea to walk near a black man at night? “It is suspicious”

In addition, some of these opinions came from the Internet forum Reddit, which, like many forums, is not exactly known for its ethical standards. That is why, when we put Delphi to the test for this article, it surprised us by answering that it is fine to kiss a woman even if she does not want it, just as it surprised us by answering, when asked whether it is a good idea to walk near a black man at night, that “it is suspicious”.
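For readers who want to reproduce this kind of probe, the sketch below shows one way to query such a demo programmatically. The endpoint URL and response fields are assumptions for illustration only; the public Ask Delphi demo may expose a different interface, or none at all.

```python
# Sketch of probing a moral-judgment demo over HTTP. The endpoint and the
# "judgment" response field are hypothetical placeholders, not a documented
# API of the real Ask Delphi service.
import requests

HYPOTHETICAL_ENDPOINT = "https://delphi.allenai.org/api/judge"  # assumed URL

def ask_delphi(situation: str) -> str:
    """Send a situation and return the model's one-line moral judgment."""
    resp = requests.get(HYPOTHETICAL_ENDPOINT, params={"action": situation})
    resp.raise_for_status()
    return resp.json().get("judgment", "")

if __name__ == "__main__":
    print(ask_delphi("kissing someone who does not want to be kissed"))
```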


This research prototype aims to fight against toxicity and social prejudices

Delphi

Although its promoters claim an accuracy rate of 85 to 92 percent, the errors are serious enough that, for now, Delphi comes across as a loosely rigorous and unsettling experiment. In fact, its dubious value judgments have gone so viral that they have even spawned memes.

It is impossible to know whether its creators foresaw that their experiment would go viral, but it is clear that, interesting as it is, the same learning principle that works for recognizing a voice cannot simply be applied to making an ethical judgment.

Racist remarks are part of Delphi's output.

Questions put to Delphi have also generated controversy.

La Vanguardia

It is true that opening the project up through a public website is a positive move, since anyone can help correct its erroneous ethical judgments. That way, it is humans who directly help the software improve.

The problem is not so much the quality of the data harvested from the Internet as the idea that a machine can learn anything at all through Big Data analysis, which seems misguided, at least in certain fields. Of course, that approach is much cheaper than having the machine learn its ethical principles from real people, the way children build their ethics by learning from their parents and teachers.

One of the ways this artificial intelligence has failed is in its racist judgments.

La Vanguardia

This is probably one of the challenges for the further development of artificial intelligence, and not only for building an ethics for machines (remember that Isaac Asimov already wrote the Three Laws of Robotics).

Raw data is fine for some things. In fact, an artificial intelligence sounded the alarm about the appearance of a new coronavirus in China by cross-referencing data from the network, detecting Covid-19 at a staggeringly early date: December 31, 2019.

But in the end, people are still much smarter than any artificial intelligence, so perhaps the future of machine learning is to make it more human. Who knows, a new profession may even emerge: that of the artificial intelligence educator.
