Based on thousands of simulations, MIT researchers argue that artificial intelligence is not free of prejudice. Robots not only pick up prejudices from people; they can also, in effect, develop them on their own.
The researchers told the TechCrunch site. The MIT team set up a simulation in which several groups of AI-driven robots had to decide whom to donate money to: a robot within their own group, or a robot outside it. In the test, each robot based its decision on the reputation of the individual recipient and on that robot's own donation strategy.
More and more strategies
Several thousand simulations were run, in which the robots learned new strategies from one another. Over time, the robots mainly adopted strategies that yielded short-term gains. As a result, they grew increasingly biased against robots from other groups, from whom they had learned nothing.
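The donation game and imitation dynamic described above can be sketched in a few lines of Python. All names, group sizes, payoff values, and the mutation step below are illustrative assumptions for the sake of the sketch, not the researchers' actual model.

```python
import random

NUM_GROUPS = 2        # assumed number of robot groups
AGENTS_PER_GROUP = 10 # assumed group size
GENERATIONS = 1000    # assumed number of simulation rounds

class Agent:
    def __init__(self, group):
        self.group = group
        # Strategy: probability of donating to a robot outside the group.
        self.out_group_rate = random.random()
        self.payoff = 0.0

def play_round(agents):
    # Each agent is paired with a random other agent and decides
    # whether to donate, based on group membership and its strategy.
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        same_group = donor.group == recipient.group
        if same_group or random.random() < donor.out_group_rate:
            donor.payoff -= 1.0      # cost of donating (assumed)
            recipient.payoff += 2.0  # benefit to recipient (assumed)

def evolve(agents):
    # Agents copy the strategy of higher-payoff agents, with a small
    # mutation -- the short-term, imitation-driven dynamic the article
    # describes, under which out-group donation can die out.
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.out_group_rate = min(1.0, max(0.0,
                model.out_group_rate + random.gauss(0, 0.05)))
        agent.payoff = 0.0  # reset for the next round

agents = [Agent(g) for g in range(NUM_GROUPS)
                   for _ in range(AGENTS_PER_GROUP)]
for _ in range(GENERATIONS):
    play_round(agents)
    evolve(agents)

avg_out = sum(a.out_group_rate for a in agents) / len(agents)
print(f"average out-group donation rate: {avg_out:.2f}")
```

A falling average out-group donation rate across generations would correspond to the bias the researchers report emerging.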
By learning and copying behaviour from one another, the robots in the test developed prejudices. The researchers thus claim to have shown that prejudice can emerge naturally and, through evolution, become ever more deeply rooted in a society. Reversing it, they add, would be extremely difficult.
This is not the first time robots have gone astray in this way. Tay, a chatbot that Microsoft launched on Twitter, learned behaviour from other users of the platform and became increasingly racist. More broadly, there is widespread fear of the risks posed by artificial intelligence systems that are not developed responsibly.