It’s been a busy release week from OpenAI. The company already introduced its agentic Operator tool and an efficient reasoning model called o3-mini. Now it has unveiled ‘deep research’, available in select countries. What does it entail?
OpenAI CEO Sam Altman promised last week to bring forward certain releases in response to DeepSeek-R1, the Chinese AI model that can reason almost as well as OpenAI’s o1. With o3-mini, the American AI company already had an answer to complaints that o1 use is prohibitively expensive for many. The new model still has higher API costs than R1, but it’s free to try for all users.
Make coffee, then get an answer
OpenAI positions deep research for the most complex questions. The tool can compile tables of requested details or compare all kinds of data. It’s a kind of “search on steroids”, if you will, although it does require patience: depending on the complexity of the question, users may have to wait 5 to 30 minutes for an answer.
Deep research is not cheap to run or to use. It is also not available in the EU and nearby countries. Those in a supported region currently have to pay $200 a month for ChatGPT Pro to access the tool, and even then its use is capped at 100 queries per month.
The tight restrictions around deep research betray an extremely high compute requirement. As the company has indicated before, 100 half-hour reasoning queries over a large context window will undoubtedly be loss-making for OpenAI.
Faster and cheaper later
OpenAI responded quickly to DeepSeek. o3-mini might not have been free without the new Chinese competitor. It appears as a “Reason” button in the free ChatGPT app, much like the “DeepThink” toggle in DeepSeek’s chat interface, and it replaces o1-mini. Meanwhile, o1 is also available for free in limited quantities through Microsoft’s Copilot: its “Think Deeper” option runs on the o1 model and, as with OpenAI and DeepSeek, can be switched on for a particular query.
However, it will take a bit longer to see fundamental changes in OpenAI policy. For example, the explanation around deep research is as vague as ever; since GPT-3 in 2020, OpenAI has only provided high-level architecture information alongside benchmark results. This contrasts with China’s DeepSeek, whose team released a comprehensive white paper for both DeepSeek-V3 in December and its reasoning model R1 in January.
Still, CEO Sam Altman admits that OpenAI has been on the “wrong side of history” in the open-source debate. The company was already theatrically cautious with both GPT-2 and GPT-3, arguing that releasing these models without warning the outside world beforehand would have been harmful. There was something to be said for that, but by now hiding precisely how GPT-4, GPT-4o, o1 and now o3 work is simply a competitive advantage. DeepSeek managed to close the gap once; who knows, perhaps it will do so again, and even faster this time.
Also read: DeepSeek, hot on OpenAI’s heels, hit by cyberattack