The text-davinci-003 model handles more complex instructions and produces longer-form content.

This week OpenAI published a new generative text model. According to the organization, the model produces higher-quality writing, can handle complex instructions and is capable of generating longer-form content. Known as text-davinci-003, the model is part of the GPT-3 family and builds on earlier systems.

GPT-3 (Generative Pre-trained Transformer 3) is OpenAI’s renowned natural language processing model, released in May 2020. The model has around 175 billion parameters. OpenAI trained the deep-learning system on text drawn from millions of websites.

OpenAI’s new model is built on the Davinci engine. The engine is designed to handle a wide range of tasks while needing fewer instructions to reach the required output. It is considered particularly useful when in-depth knowledge of the subject matter is required, such as summarising texts and producing narrative content or dialogue.

Davinci-based models are more computationally heavy, which translates into a higher cost per API call than simpler models such as Ada and Babbage. That extra performance, however, makes them better suited to tasks that demand deeper understanding.

Poetry on command

The main new capability of text-davinci-003 is inserting completions within existing text. Alongside the usual prefix prompt, a suffix prompt can be supplied, letting the model transition between paragraphs and better control the flow of the copy.
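As a rough illustration of how such an insertion request might be shaped, the sketch below builds a JSON body with a prefix prompt and a suffix prompt. The field names (`model`, `prompt`, `suffix`, `max_tokens`) follow OpenAI’s public completions API of the time, but treat this as an assumption-laden illustration rather than a working client; no request is actually sent.

```python
import json

def build_insertion_request(prefix: str, suffix: str, max_tokens: int = 64) -> str:
    """Build a JSON body asking the model to fill in text between prefix and suffix."""
    body = {
        "model": "text-davinci-003",
        "prompt": prefix,        # text before the gap
        "suffix": suffix,        # text after the gap; the model writes the middle
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

request = build_insertion_request(
    "The storm rolled in just after noon.",
    "By evening, the streets were dry again.",
)
print(request)
```

Supplying both fields is what lets the model write a passage that reads naturally against the text on either side of the gap, rather than only continuing from a prefix.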

OpenAI’s GPT-3 is available as a commercial product through an API, and for a fee ($0.02 per 1,000 tokens) anyone with an OpenAI account can experiment with the model on a special beta ‘Playground’ website. Using the Playground to interact with the AI requires no coding skills.
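At the quoted rate of $0.02 per 1,000 tokens, a back-of-the-envelope cost estimate is straightforward; the small helper below is only a sketch based on the figure in this article, not an official pricing calculator.

```python
# Rate quoted in the article: $0.02 per 1,000 tokens.
PRICE_PER_1K_TOKENS = 0.02

def estimate_cost(tokens: int) -> float:
    """Return the dollar cost of processing the given number of tokens."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# Example: estimate the cost of a 1,500-token request.
print(f"${estimate_cost(1500):.2f}")
```

Since a token is roughly three-quarters of an English word, a few cents goes a long way for casual experimentation.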

Visitors can type an instruction, such as a request for a poem on a given subject, then sit back and watch GPT-3 generate the result on screen. This latest update to GPT-3 “feels like a step forward in complexity that comes from integrating knowledge about a wide variety of subjects and styles into one model that writes coherent text”, Ars Technica described.