The California-based artificial intelligence (AI) startup OpenAI has announced that it is creating an updated version of its viral chatbot, ChatGPT, which users can tailor to their needs. The move comes amid ongoing concerns about bias in AI.

The company stated in a blog post that it is working to mitigate political and other biases but also wants to ensure that more diverse views are taken into account. To achieve this, it will offer users greater customization options, although the company acknowledged that there will always be some limits on the system’s behavior.

Released in November 2022, ChatGPT has generated significant interest in generative AI, the technology behind answers that mimic human conversation.

The ChatGPT technology is not yet ready for prime time

The announcement comes as Microsoft's Bing search engine, which is also powered by OpenAI's technology, has been criticized for providing potentially dangerous answers, suggesting that the technology may not yet be ready for mainstream use.

As technology companies in the generative AI sector continue to develop their products, establishing guardrails for the nascent technology has become a critical area of focus.

In a blog post, OpenAI explained that ChatGPT's responses are first trained on large text datasets and then refined by human reviewers. The company gives these reviewers guidelines that direct how the AI should handle certain queries and instruct it not to produce adult, violent, or hateful content.

When asked about controversial topics, the AI can answer, but it should describe the relevant viewpoints rather than express a definitive stance; the guidelines also caution it against endorsing any particular position on complex issues.

Microsoft continues to improve the chatbot

Microsoft has also sought to improve its AI chatbot with the help of user feedback, which highlighted that the system could be provoked into offering unintended responses.

This latest move shows OpenAI's commitment to addressing concerns about bias in AI. As generative AI products mature, companies must also ensure that guardrails are in place to prevent the kind of negative publicity that can undermine trust in the technology.