
OpenAI is making it easier for developers to integrate ChatGPT and Whisper AI models into their applications. To this end, two paid APIs were recently introduced.

The first API focuses on ChatGPT. Until now, this tool was accessible only through its graphical (web) interface, so the technology could not yet be integrated into other applications.
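A call to the new API can be sketched as follows. This is a minimal, hedged example that only builds the request, assuming the publicly documented chat-completions endpoint and Bearer-token authentication; the API key and message shown are placeholders:

```python
API_URL = "https://api.openai.com/v1/chat/completions"  # assumed chat-completions endpoint

def build_chat_request(api_key, user_message):
    """Build the headers and JSON body for a gpt-3.5-turbo chat request."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # Bearer-token authentication
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-3.5-turbo",  # the new ChatGPT API model
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, body
```

Sending the resulting body to the endpoint with any HTTP client would return the model's reply; the sketch deliberately stops before the network call.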

Meanwhile, the model has been optimized further, so that less hardware is needed to run it and serve requests in the backend. This has greatly reduced the cost of operating the model.

Because of this optimization and cost reduction, the new ChatGPT API, labeled gpt-3.5-turbo, is also cheaper to use than the existing GPT-3.5 models. The API costs $0.002 per 1,000 tokens, making ChatGPT up to ten times cheaper than the aforementioned other GPT-3.5 models.
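The pricing above is easy to turn into a cost estimate. A small sketch, using only the $0.002-per-1,000-tokens rate stated in the article (the "ten times cheaper" comparison then implies roughly $0.02 per 1,000 tokens for the older models):

```python
def token_cost_usd(tokens, price_per_1k=0.002):
    """Cost in USD at the gpt-3.5-turbo rate of $0.002 per 1,000 tokens."""
    return tokens / 1000 * price_per_1k
```

For example, one million tokens comes to $2.00 with gpt-3.5-turbo, versus roughly $20.00 at ten times that rate.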

Introduction of dedicated servers

The underlying infrastructure of the AI tool runs on Microsoft Azure. This means that the deployments of multiple users run on the same (shared) hardware. For users who want to work with ChatGPT even more intensively or deploy it in other solutions, OpenAI is therefore now also introducing “dedicated instances”, i.e. dedicated servers.

Here, developers pay for the period during which compute capacity is allocated to handle their requests. With dedicated instances, they also get full control over the load on those instances: a higher load improves the throughput of requests, but makes the handling of each individual request slower.

They also gain additional functionality, such as longer context limits and the ability to “pin” the model snapshot.

Use of dedicated instances

OpenAI says that using these dedicated instances only makes sense for developers who process more than 450 million tokens per day. Dedicated instances also allow developer workloads to be optimized against the hardware's performance, which can yield significant cost savings compared to using shared infrastructure. Prices for these dedicated instances are available from OpenAI on request.
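The 450-million-token threshold can be put in perspective with a quick back-of-the-envelope calculation, using only the per-token price stated earlier in the article:

```python
DAILY_TOKEN_THRESHOLD = 450_000_000  # OpenAI's stated break-even point

def shared_daily_cost_usd(tokens_per_day, price_per_1k=0.002):
    """Daily pay-as-you-go spend on the shared gpt-3.5-turbo API."""
    return tokens_per_day / 1000 * price_per_1k
```

At the threshold, shared usage already runs to about $900 per day, which suggests the scale at which a negotiated dedicated-instance price starts to pay off.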

All users of the ChatGPT API, not just those of dedicated instances, can further benefit from any optimizations made to the model.

Whisper also gets API

In addition to the ChatGPT model, OpenAI’s Whisper AI transcription model now also has a paid API. This API provides access to a managed version of the AI model.

Whisper was introduced as an open-source model last September. This makes it easier for companies to roll out the model within their own environment, rather than through an OpenAI API. Still, deploying the model yourself is technically challenging.

The transcription tool automatically converts audio files into written text. Users can choose between a transcription in the original language or an English translation. The model was trained on 680,000 hours of audio available on the internet, about a third of which was not in English.

The API now provides on-demand access to Whisper’s (OpenAI-managed) v2 model. The cost for this access is $0.006 per minute.
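The choice between transcription and English translation, and the per-minute pricing, can be sketched together. The endpoint paths below are assumptions based on OpenAI's public audio API; only the $0.006-per-minute rate comes from the article:

```python
WHISPER_AUDIO_BASE = "https://api.openai.com/v1/audio"  # assumed audio endpoints

def whisper_endpoint(translate_to_english=False):
    """Transcription keeps the source language; translation returns English text."""
    path = "/translations" if translate_to_english else "/transcriptions"
    return WHISPER_AUDIO_BASE + path

def whisper_cost_usd(audio_seconds, price_per_minute=0.006):
    """Cost in USD at the Whisper API rate of $0.006 per minute of audio."""
    return audio_seconds / 60 * price_per_minute
```

At that rate, transcribing a full hour of audio comes to about $0.36.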

Also read: OpenAI introduces paid version ChatGPT Plus