OpenAI is experiencing performance problems with its new Orion LLM and must adjust its strategies to improve future models. A shortage of training data may be one cause.
According to research by The Information, OpenAI is currently running into performance limitations with the new Orion LLM it is working on. While the new model reportedly outperforms the AI giant's existing LLMs, the improvement is not a major leap like that from GPT-3 to GPT-4.
This suggests that progress on OpenAI's LLMs is slowing. While Orion outperforms its predecessors in some areas, it does not improve on them everywhere; in coding, for example, the gains are limited.
New strategies
According to The Information, OpenAI is developing new strategies to address these challenges. For example, a dedicated "foundations team" is being set up to explore how OpenAI can keep improving its LLMs despite the dwindling supply of training data.
In addition, OpenAI might consider training Orion on synthetic data generated by other AI models. Another possible strategy is to further optimize LLMs in a phase after initial training.
OpenAI has not yet commented on the article. In earlier statements, the AI giant indicated it had no plans to release an LLM codenamed Orion this year.
Also read: OpenAI invests in building hardware