OpenAI lets organizations retrain its GPT-3.5 model for specific tasks

ChatGPT is the jack-of-all-trades among chatbots, but OpenAI's tool is not meant to be deployed for professional use out of the box. Other generative AI solutions are easier to tailor, but they now face a new competitor: the GPT-3.5 model can now also be retrained for numerous purposes.

OpenAI indicates that developers and companies have been asking for a customizable version of GPT-3.5 Turbo, which the company considers its most efficient LLM. A variant of GPT-3.5 powers the free version of ChatGPT, while the paid version runs on GPT-4. It has been shown many times that even somewhat older, smaller LLMs can achieve stunning results with the right dataset.

Better steerability

GPT-3.5 Turbo can now be trained on enterprise data, just as GPT-3 was previously available for the same use case. By feeding the model unique data, it can focus more sharply on the specific needs of the client company. Even so, a GPT-3.5 Turbo model already “knows” a lot: like ChatGPT, it contains information through September 2021.
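Fine-tuning data for GPT-3.5 Turbo is supplied in JSONL chat format, one training example per line. A minimal sketch, with invented conversations and file name; a real dataset would contain many more examples drawn from the company's own data:

```python
import json

# Each training example is a chat transcript: a system message that fixes
# the desired behaviour, plus user/assistant turns (content is illustrative).
examples = [
    {"messages": [
        {"role": "system", "content": "You are the support bot of an example company; always answer in German."},
        {"role": "user", "content": "Where can I find my invoice?"},
        {"role": "assistant", "content": "Ihre Rechnung finden Sie im Kundenportal unter 'Abrechnung'."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are the support bot of an example company; always answer in German."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Klicken Sie auf der Anmeldeseite auf 'Passwort vergessen'."},
    ]},
]

# Write one JSON object per line (JSONL), the format the fine-tuning endpoint expects.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

Repeating the same system message in every example is what later lets the deployed model behave consistently without being reminded in each prompt.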

According to OpenAI, a fine-tuned GPT-3.5 Turbo can even match or outperform GPT-4 on certain narrow tasks. The model already produces impressive results, and the prospect of additional refinement is enticing. OpenAI promises that organizations can easily control the model, providing what it calls ‘steerability’. For example, the model can be made to answer only in German. Consistency can also be improved in how it formats text or programming code, and the chatbot’s tone can be set to match a company’s brand style.
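Once training data has been prepared, starting a job is a short sequence of API calls. A sketch against the `openai` Python package's v0-era interface; the file id and suffix are illustrative, and the actual calls are shown as comments since they require an API key and uploaded data:

```python
# Build the parameters for a GPT-3.5 Turbo fine-tuning job
# (the suffix is an illustrative label that ends up in the model name).
def finetune_job_params(training_file_id, suffix="support-bot"):
    return {
        "training_file": training_file_id,   # id returned by the file upload
        "model": "gpt-3.5-turbo",            # base model to fine-tune
        "suffix": suffix,
    }

params = finetune_job_params("file-abc123")

# The calls themselves, sketched as comments:
#
# import openai
# upload = openai.File.create(file=open("training_data.jsonl", "rb"),
#                             purpose="fine-tune")
# job = openai.FineTuningJob.create(**finetune_job_params(upload.id))
```

After the job completes, the resulting model is addressed by its own model name in ordinary chat completion requests.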

The number of tokens has doubled to 4,000. Tokens are the textual units a chatbot can handle: a single character, part of a word, or a full word. Although more tokens are possible, it is cheaper for organizations to use as few as possible. OpenAI therefore now offers the option to pre-instruct an LLM so that less content is required in the prompt. For example, the chatbot then always knows it should answer according to a company’s corporate identity, so it does not have to be told every time. According to OpenAI, this can make prompts up to 90 percent smaller than before. Another GPT-3.5 Turbo model with 16,000 tokens is due later this year.
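The saving OpenAI describes comes from moving standing instructions out of every prompt and into the fine-tuned model itself. A rough illustration with made-up numbers; a real measurement would count tokens rather than characters:

```python
# Illustrative only: a long standing instruction that previously had to be
# sent with every request, versus the short prompt that remains once the
# instruction is baked into the model through fine-tuning.
standing_instructions = (
    "Always answer in German, in the formal corporate tone of the example "
    "company, format code as fenced blocks, keep answers under 200 words, "
    "and never discuss competitors. " * 5
)
question = "Where can I find my invoice?"

before = len(standing_instructions) + len(question)  # instructions + question
after = len(question)                                # question only

saving = 1 - after / before
print(f"Prompt shrinks by {saving:.0%}")
```

The exact percentage depends entirely on how much boilerplate a company's prompts carried before; the 90 percent figure is OpenAI's own upper estimate.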

Proprietary data, but privacy?

OpenAI introduced a number of privacy-focused improvements earlier this year. For example, companies have been able to indicate for some time that their data should not be used for further training of OpenAI models. Still, GPT models can only be addressed via a public cloud: corporate data does leave the company’s own environment, even if it is not out in the open. VMware pointed out just this week that this approach has real drawbacks: it is not clear what the model has already been pre-trained on, and certain aspects cannot be adjusted by third parties.

Tip: VMware and Nvidia launch Private AI Foundation: AI that keeps enterprise data secure

Even then, OpenAI’s technology counts as a very impressive example of generative AI. Companies that are not too concerned about privacy will therefore weigh only its effectiveness and pricing. In that regard, OpenAI charges in dollars: GPT-3.5 costs 0.8 cents per 1,000 tokens for training and 1.2 cents per 1,000 tokens for the chatbot’s inputs and outputs.
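With the figures from the article ($0.008 per 1,000 training tokens, $0.012 per 1,000 input/output tokens), a back-of-the-envelope estimate is easy to sketch; the token volumes and epoch count below are invented:

```python
TRAINING_PER_1K = 0.008  # dollars per 1,000 training tokens (figure from the article)
USAGE_PER_1K = 0.012     # dollars per 1,000 input/output tokens (figure from the article)

def training_cost(training_tokens, epochs=1):
    """Cost of a training run: each token is billed per pass over the data."""
    return training_tokens * epochs / 1000 * TRAINING_PER_1K

def usage_cost(tokens):
    """Cost of the tokens sent to and received from the fine-tuned model."""
    return tokens / 1000 * USAGE_PER_1K

# Invented example: 100,000 training tokens over 3 epochs, then 1 million usage tokens.
print(f"training: ${training_cost(100_000, epochs=3):.2f}")  # $2.40
print(f"usage:    ${usage_cost(1_000_000):.2f}")             # $12.00
```

Even at these rates, inference on a fine-tuned model costs several times more than on the base GPT-3.5 Turbo, so the prompt shrinking described above matters for the total bill.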

Incidentally, older GPT-3 models will disappear from the existing service: the original base models such as davinci and babbage will remain available for a few more months, with davinci-002 and babbage-002 positioned as their successors. This shows another danger of public cloud variants of AI: those who build on an LLM and want to replace it at their own pace ultimately do not own the model.

Also read: GPT-4 moderates content faster and better than inexperienced humans