
OpenAI makes fine-tuning of its LLMs easier and cheaper

OpenAI has introduced tools that let developers fine-tune AI models more efficiently than before. Building on the existing options in the fine-tuning API, the company now also offers greater ease of use. These tools should reduce the high error rates that commonly occur during fine-tuning runs.

The tools will be added to the existing fine-tuning API. This API allows developers to supply OpenAI's various LLMs, such as GPT-3.5 Turbo, with additional data the models were not trained on. Based on this data, developers can then make more targeted queries to the LLMs, achieve greater efficiency and reduce costs.
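To give an idea of what supplying such data looks like: the fine-tuning API accepts training examples as JSON Lines, one conversation per line in the chat-message format. The sketch below prepares a minimal training file; the product, question and answer are made-up examples.

```python
import json

# One training conversation per JSONL line, in the chat format the
# fine-tuning API expects. The content here is purely illustrative.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer questions about Acme's products."},
        {"role": "user", "content": "Does the X100 support USB-C?"},
        {"role": "assistant", "content": "Yes, the X100 has two USB-C ports."},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The resulting file is what a developer uploads before starting a fine-tuning job.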

Features for an incremental fine-tuning process

Fine-tuning via the API proceeds in training passes known as "epochs": in each epoch, the LLM processes the entire fine-tuning dataset at least once. This step-by-step process is extremely error-prone. If an error occurs during an epoch, the LLM may fail to properly incorporate the given data and is therefore limited in functionality.

OpenAI's new toolset for the fine-tuning API should help prevent these errors. Problems in epochs often only surface after the first training run. It is therefore now possible to save a copy of the AI model after each successful run. If an error occurs in a subsequent phase, users can always roll back to the last correct version. Since AI fine-tuning consumes a lot of computing power, this addition can provide significant cost savings.
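The rollback idea can be sketched as follows. The checkpoint records here are invented for illustration; the real API exposes its own checkpoint objects and metrics, but the selection logic is the same: keep per-epoch snapshots and fall back to the most recent one that succeeded.

```python
# Illustrative sketch: pick the last known-good snapshot from a list of
# per-epoch checkpoints. Field names and IDs are made up for the example.
def last_good_checkpoint(checkpoints):
    """Return the most recent checkpoint that finished without an error."""
    good = [c for c in checkpoints if c["status"] == "succeeded"]
    return max(good, key=lambda c: c["epoch"]) if good else None

checkpoints = [
    {"id": "ckpt-epoch-1", "epoch": 1, "status": "succeeded"},
    {"id": "ckpt-epoch-2", "epoch": 2, "status": "succeeded"},
    {"id": "ckpt-epoch-3", "epoch": 3, "status": "failed"},
]

print(last_good_checkpoint(checkpoints)["id"])  # → ckpt-epoch-2
```

Because epoch 3 failed, training can resume from the epoch-2 snapshot instead of starting over, which is where the cost savings come from.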

In addition, OpenAI has added new features for comparing different versions of a fine-tuned LLM. This makes it easier to adjust an LLM's hyperparameters, which determine how random or predictable a model's output is. The new feature therefore offers more accuracy; the dashboard will also show more technical data.
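Such a side-by-side comparison typically means launching two jobs that differ only in their hyperparameters. The sketch below builds two hypothetical job configurations; the file ID is a placeholder and the hyperparameter values are arbitrary, chosen only to show the shape of the settings a developer would vary.

```python
# Sketch: two fine-tuning configurations that differ only in hyperparameters,
# to be trained and then compared. File ID and values are placeholders.
base_job = {
    "model": "gpt-3.5-turbo",
    "training_file": "file-abc123",  # placeholder file ID
}

run_a = {**base_job, "hyperparameters": {"n_epochs": 3, "learning_rate_multiplier": 0.1}}
run_b = {**base_job, "hyperparameters": {"n_epochs": 5, "learning_rate_multiplier": 0.05}}
```

After both runs finish, the new dashboard features make it easier to see which configuration produced the better model.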

Furthermore, it is now possible to stream training data to third-party AI development tools, such as the model development platform Weights & Biases.
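Hooking up such an integration plausibly amounts to attaching it to the job request, roughly as in the sketch below. The exact payload shape is an assumption based on OpenAI's announced Weights & Biases integration; the project name and file ID are placeholders.

```python
# Assumed sketch of a fine-tuning job request with a Weights & Biases
# integration attached, so training metrics stream to a W&B project.
# Project name and file ID are placeholders.
job_request = {
    "model": "gpt-3.5-turbo",
    "training_file": "file-abc123",
    "integrations": [
        {"type": "wandb", "wandb": {"project": "my-finetune-project"}},
    ],
}
```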

Customizing parts of an LLM

In addition to the new features for the fine-tuning API, OpenAI also presented a new tool for companies that need more advanced capabilities for optimizing LLMs.

Assisted Fine-Tuning allows developers to extend an LLM's capabilities by adding additional hyperparameters. Users of this tool can also fine-tune only certain parts of the LLM rather than the entire model. This is done using PEFT, or Parameter-Efficient Fine-Tuning, a technique that saves considerable cost by performing only the necessary calculations.
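The savings PEFT offers can be illustrated with a back-of-the-envelope calculation. In low-rank adaptation (LoRA), one common PEFT method, a full d × d weight matrix is left frozen and only two small rank-r adapter matrices are trained. The dimensions below are illustrative, not tied to any specific OpenAI model.

```python
# Toy illustration of why PEFT is cheap: full fine-tuning updates every
# entry of a d x d weight matrix, while a rank-r LoRA adapter trains only
# a d x r and an r x d matrix. Numbers are illustrative.
def full_params(d):
    return d * d

def lora_params(d, r):
    return 2 * d * r

d, r = 4096, 8
print(full_params(d))                       # parameters updated by full fine-tuning
print(lora_params(d, r))                    # parameters updated by the adapter
print(lora_params(d, r) / full_params(d))   # fraction actually trained
```

For this example matrix, the adapter trains well under one percent of the parameters that full fine-tuning would touch, which is why only "the necessary calculations" are performed.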
