
OpenAI details how it wants to improve AI accuracy and security

In response to US President Joe Biden’s call for AI software developers to take responsibility for the safety of their products, OpenAI LP, the creator of ChatGPT, has released details of the measures it takes to minimize potential dangers from its systems.

OpenAI has recognized the potential risks associated with AI and is committed to building safety into its models at multiple levels. The company conducts rigorous testing of any new system before release, engages with experts for feedback, and improves its behavior using reinforcement learning with human feedback.

All new AI systems are released cautiously to a steadily broadening group of users, and continuous improvements and refinements are implemented based on feedback from real-world users.

OpenAI says GPT-4 is better at security and will continue to improve

OpenAI is also implementing strict safety measures to protect children, requiring users to be at least 18 years old, or at least 13 with parental approval, to use its AI systems.

Blocks have been implemented to prevent the systems from generating hateful, harassing, violent, or adult content. GPT-4 is said to be 82% less likely to respond to requests for disallowed content than its predecessor, GPT-3.5.

OpenAI is also working to improve factual accuracy and to reduce AI hallucination, in which a system fabricates a response when it cannot find an accurate answer. The company is leveraging user feedback to this end, and GPT-4 is 40% more likely than GPT-3.5 to generate factual answers.

Some of the training data may contain publicly available personal information

However, OpenAI stressed that its goal is for its systems to learn about the world, not about private individuals. The company has fine-tuned its models to reject requests for the personal information of private individuals and says it will honor requests to delete personal information from its systems.

OpenAI’s disclosure on AI safety is timely, as there have been public calls for the industry to pause the development of advanced AI systems. However, OpenAI rejected the idea of a pause and instead disclosed its approach to AI safety, which indicates that it will continue to press ahead.

OpenAI believes that the more capable models it builds and deploys in the future will be even safer than its existing systems. The company has also called on policymakers and AI providers to ensure that the development and deployment of AI prioritize safety.