
Hugging Face, ServiceNow and Nvidia recently introduced three open-access LLMs under the name StarCoder2. These LLMs are designed specifically for code-related tasks.

The partnership of Hugging Face, ServiceNow and Nvidia has developed three LLMs of different sizes within the StarCoder2 portfolio: one with three billion parameters, one with seven billion and one with fifteen billion. The LLMs are trained on 619 programming languages. The partnership specifically focuses on the responsible development and use of LLMs for coding purposes.

Specifically, the newly developed StarCoder2 models should help companies accelerate the various programming tasks that are part of their development processes. Because the LLMs are open-access models, they should build on earlier GenAI initiatives in areas such as developer productivity and equal access for developers.

Trained on more data

Under the hood, the three LLMs within StarCoder2 were trained on a dataset called The Stack v2. According to the developers, this dataset contains as much as seven times more training data than the previous version.

[Image: The Stack v2 logo with stars and a spaceship.]

In addition, the developers used new training methods within the BigCode project. These are intended to ensure that the LLMs can also understand and generate low-resource programming languages, such as the older COBOL, as well as mathematical notation and discussions of program source code.

The smallest StarCoder2 LLM, with three billion parameters, was trained using ServiceNow's Fast LLM framework. The seven-billion-parameter LLM was developed using Hugging Face's nanotron framework. The largest model, with fifteen billion parameters, was trained and optimized using Nvidia's end-to-end cloud-based NeMo framework and the Nvidia TensorRT-LLM software.

Expectations

The three developers naturally hope the StarCoder2 LLMs will deliver top performance on coding tasks. In any case, tests reportedly show that the smallest StarCoder2 model already performs as well as the original fifteen-billion-parameter StarCoder LLM. The developers also indicate that the new LLMs can use repository context to generate accurate, context-aware predictions.

The StarCoder2 LLMs are now available through the BigCode project's GitHub page and through Hugging Face. The Nvidia-trained fifteen-billion-parameter variant will also be available through the Nvidia AI Foundation environment.
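As a quick illustration of what "available through Hugging Face" means in practice, below is a minimal sketch of pulling down the smallest model with the Hugging Face transformers library. It assumes transformers and PyTorch are installed and that the model is published under the BigCode organization's `bigcode/starcoder2-3b` identifier on the Hub; the `complete` helper and the example prompt are illustrative, not part of the official release.

```python
# Hypothetical usage sketch: code completion with the 3B StarCoder2 model
# via Hugging Face transformers. First run downloads several GB of weights.
MODEL_ID = "bigcode/starcoder2-3b"  # assumed Hub identifier


def complete(prompt: str, max_new_tokens: int = 48) -> str:
    """Generate a code completion for the given prompt."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Ask the model to continue a function definition.
    print(complete("def fibonacci(n):"))
```

The larger seven- and fifteen-billion-parameter variants would be loaded the same way, only with their own model identifiers and correspondingly higher memory requirements.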

Also read: ServiceNow and Hugging Face release new LLM for code generation