Meta AI has made its Open Pretrained Transformer (OPT-175B) language model available for free. The release invites scientists to use the model for research.

Meta AI wants to stimulate the use of large language models (LLMs). LLMs are Natural Language Processing (NLP) models with upwards of 100 billion parameters. These giants are used to develop algorithms that generate creative text, solve simple mathematical problems and comprehend text.

The OPT-175B model has over 175 billion parameters and was trained on public datasets. Its availability should introduce many researchers to LLMs. Scientists can study the model's limitations and discover risks that are currently unknown.

Meta AI released the model together with pre-trained models and code for training. The code runs on as few as 16 Nvidia V100 GPUs, which opens the model up to scientists who lack the resources traditionally required for AI training.
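A rough sense of why 16 V100s suffice can be had from a back-of-envelope memory calculation. The sketch below assumes fp16 weight storage (2 bytes per parameter) and 32 GB of memory per V100; neither figure comes from the announcement itself.

```python
# Back-of-envelope estimate of the memory needed to hold OPT-175B's weights,
# and how that total divides across 16 GPUs.
# Assumptions (not stated in the article): fp16 weights at 2 bytes/parameter,
# 32 GB of memory per Nvidia V100.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Gigabytes needed to store the raw model weights."""
    return num_params * bytes_per_param / 1e9

params = 175e9                       # OPT-175B parameter count
total_gb = weight_memory_gb(params)  # total weight storage in fp16
per_gpu_gb = total_gb / 16           # weights sharded evenly over 16 GPUs

print(f"total: {total_gb:.0f} GB, per GPU: {per_gpu_gb:.1f} GB")
```

Under these assumptions the weights alone take about 350 GB, or roughly 22 GB per GPU when sharded across 16 devices, leaving headroom on a 32 GB V100 for activations and optimizer state.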

The pre-trained models are based on the same dataset and settings as OPT-175B. This allows researchers to test how the model behaves at scale. The pre-trained models come in several parameter variants: 125 million, 350 million, 1.3 billion, 2.7 billion, 6.7 billion, 13 billion and 30 billion.

Restrictions

Although OPT-175B is free to use, Meta AI does enforce some restrictions. The model is shared under a non-commercial licence. As a result, most of its use will be scientific in nature.
