
User testing shows that longer chat sessions can cause unexpected results. Users can provoke the chatbot into human-like behaviour, prompting inappropriate responses that are clearly not in line with the tone of voice Microsoft is aiming for.

Microsoft is considering tweaks and guardrails for the new AI-powered Bing search engine, according to a report in The New York Times. The chatbot-enhanced version of Bing, which Microsoft rushed to market recently, was “designed to deliver better search results, more complete answers to your questions, a new chat experience to better discover and refine your search, and the ability to generate content to spark your creativity”, according to a Bing Blog post.

Since Microsoft made the new Bing available in limited preview, it has been testing it with a select group of people in over 169 countries. The results of that testing have been mixed.

Microsoft is now trying to do a sort of reset, tweaking the Bing platform to resolve some of the issues that the first testers found. Indeed, the new limits are “an attempt to reel in some of its more alarming and strangely humanlike responses”, according to the Times article.

Microsoft has invested $13 billion in San Francisco start-up OpenAI, which is already well known for ChatGPT, an online chat tool built on a technology called generative AI. It is OpenAI's technology that powers the chatbot in the new Bing.

Longer chat sessions confuse Bing

In the first week of public use, Microsoft said, it found that in “long, extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.”

Kevin Scott, Microsoft's CTO, told The New York Times that the company was also considering limiting conversation lengths before they veered into strange territory. Microsoft said that long chats could confuse the chatbot, and that it picked up on its users' tone, sometimes turning testy.

Sam Altman, the chief executive of OpenAI, told the Times that improving what’s known as “alignment” — how the responses safely reflect a user’s will — was “one of these must-solve problems.”

He said that the problem was “really hard” and that while they had made great progress, “we’ll need to find much more powerful techniques in the future.”