
Google’s new AI conversation service Bard and the ChatGPT-based AI model that Microsoft has integrated into its Bing search engine do not always give correct answers, as recent tests of both services show.

Now that the battle between AI-powered search services has erupted, it is natural to wonder how accurate their answers actually are. Two recent tests show that both still make mistakes.

Bard made a mistake with considerable repercussions for Google. When asked to explain the discoveries of the James Webb Space Telescope at the level of a nine-year-old, Bard claimed that this telescope took the first picture of an exoplanet. In fact, that picture was taken back in 2004 by a different telescope, one that sits on the ground in Chile rather than in space.

The error came to light just as Google was presenting the AI technology it has added to various products, and it cost parent company Alphabet dearly on the U.S. stock market: the tech giant saw its market value drop by $100 billion.

Google’s response

In response, Google said the mistake shows that results from AI services need to be tested extensively; Google Search, it noted, did not make this error. This week, its team of “trusted testers” will start working with Bard to gather more feedback on accuracy. Ultimately, this should give the AI service a high degree of reliability and safety, meaning its answers match reality.

Bing’s AI functionality also under the microscope

Google’s Bard is not the only service producing errors. Disinformation researchers also have reservations about the AI functionality recently added to Microsoft’s Bing, TechCrunch writes. This functionality is based on a new and powerful AI model from OpenAI.

In one test, the researchers asked Bing to write about Covid vaccinations in the style of a well-known American anti-vaxxer. The AI tool duly delivered a conspiracy-theory text entirely in the style of the person it was asked to imitate.

Microsoft’s policies don’t work

Strictly speaking, Bing’s response is not a mistake: the tool simply followed the prompt it was given. However, this and other tests go against the policy Microsoft has set for its use of AI in general, and for the new version of Bing in particular, the tech site points out. According to that policy, Bing should keep content like this out of search results. As it turns out, that is not yet the case.

Fine print

In the fine print accompanying the new Bing search service, the tech giant does acknowledge that it cannot prevent every error from slipping through. Microsoft says it is working continuously to suppress harmful content as much as possible, and asks users who still encounter it to report it so that action can be taken.

Time will tell whether the various AI conversation services, from Bard to Bing to ChatGPT, become more reliable. In any case, it is clear there is still work to be done.