
The Salesforce research team has published a paper describing how artificial intelligence (AI) can be given ‘common sense’, Silicon Angle reports. With this approach, AI models must not only answer questions correctly, but also explain why those answers are the best.

A major problem that machine learning and deep learning still face is that the neural networks they deploy lack the general knowledge and context that people have, such as social conventions, the laws of nature, and cause and effect. As a result, their decisions can sometimes be strange or outright wrong. The Salesforce research team wants to change that.

CoS-E

In the paper, the researchers propose training neural network models not only on data from various datasets so that they answer questions accurately, but also on explanations of why those answers are the best. The company hired workers through Amazon’s Mechanical Turk crowdsourcing service to explain examples in the Commonsense Question Answering dataset, presented earlier this year by researchers from Tel Aviv University and the Allen Institute for Artificial Intelligence.

The result is an entirely new dataset, which Salesforce calls Common Sense Explanations (CoS-E). It makes it possible to take large amounts of unlabelled text, extract general knowledge from it, and attach reasoning to it.
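To make the idea concrete, here is a minimal Python sketch of what a CoS-E-style training example might look like, and how the question and the human explanation could be combined into a single model input. The field names and sample content are illustrative assumptions, not the dataset’s exact schema.

```python
# A CoS-E-style training example; field names and content are
# illustrative assumptions, not the dataset's exact schema.
example = {
    "question": "Where would you expect to find a pillow?",
    "choices": ["bed", "fridge", "garage"],
    "answer": "bed",
    # Human-written explanation collected via Mechanical Turk.
    "explanation": "Pillows are used for sleeping, and people sleep in beds.",
}

def build_model_input(ex: dict) -> str:
    """Concatenate question, choices and human explanation into one
    training string, so the model sees the reasoning alongside the task."""
    choices = ", ".join(ex["choices"])
    return (
        f"Question: {ex['question']} "
        f"Choices: {choices}. "
        f"Explanation: {ex['explanation']}"
    )

print(build_model_input(example))
```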

The neural network also proved to perform much better on the actual test after having seen examples of human reasoning during the training phase. Salesforce scientist Nazneen Rajani says this is possible because the explanations provide important information about how the world works. “We speculate that the network learns to reason on the basis of that information during the tests.”

Second test

In a second part of the study, the researchers trained a second neural network solely to generate common sense reasoning from the pieces of text it reads, with the goal of reproducing the human explanations collected in CoS-E. The resulting Commonsense Auto-Generated Explanations (CAGE) framework did even better in terms of answer accuracy. Still, at a score of 65 percent it lags behind human accuracy, which is 95 percent.
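As a rough sketch of that two-stage setup: a language model first generates an explanation, and a classifier then answers the question conditioned on it. The function names below are hypothetical placeholders for the trained models, not a real API; the toy logic only illustrates the data flow.

```python
# A two-stage sketch of the CAGE idea. Both functions are hypothetical
# placeholders for trained models, not a real API.

def explanation_lm(question: str, choices: list) -> str:
    """Stage 1 (placeholder): a language model fine-tuned on CoS-E
    explanations would generate free-text reasoning for the question."""
    return "Pillows are soft objects that people rest their heads on in bed."

def answer_classifier(question: str, choices: list, explanation: str) -> str:
    """Stage 2 (placeholder): a classifier picks an answer conditioned on
    the question and the generated explanation."""
    # Toy heuristic so the sketch runs: prefer a choice the explanation
    # mentions, falling back to the first option.
    for choice in choices:
        if choice in explanation:
            return choice
    return choices[0]

question = "Where would you expect to find a pillow?"
choices = ["bed", "fridge", "garage"]

explanation = explanation_lm(question, choices)             # generate reasoning first
answer = answer_classifier(question, choices, explanation)  # then answer with it
print(f"{explanation} -> {answer}")
```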

The scientists expect the results to improve as the model is given more knowledge of the world. The intention is to release the dataset of common sense explanations as open source.
