
Inflection unveils “best in class” AI foundation model

Inflection, an AI startup led by co-founders of LinkedIn and DeepMind, has released a new AI model. The new service is designed to compete with comparable LLMs from Google and OpenAI.

This week Inflection announced the release of Inflection-1, the company’s new large language model. The LLM will power Inflection’s personal AI service, named Pi (for “personal intelligence”). The company says Pi is designed to be “a kind and supportive companion offering conversations, friendly advice, and concise information in a natural, flowing style”. The personal AI was released in May 2023.

Inflection hopes this follow-up release of a new foundation model will allow it to challenge LLM powerhouses like OpenAI’s GPT-3.5 and Google’s PaLM.

Measuring AI benchmark performance

Inflection-1 was trained using thousands of NVIDIA H100 GPUs on a very large dataset, the company claims. “Our team has been able to take advantage of our end-to-end pipeline to develop a number of proprietary technical advances that have enabled these results”, it says.

Inflection also published a technical memo that details its evaluations and compares Inflection-1’s performance against other LLMs. Massive Multitask Language Understanding (MMLU) is a commonly used benchmark that tests a wide range of academic knowledge. On this benchmark, Inflection showed its model was the “best performing foundation model in its class”, outperforming Meta’s LLaMA, OpenAI’s GPT-3.5, and Google’s PaLM 540B.
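MMLU scoring itself is straightforward: the benchmark consists of multiple-choice questions across many academic subjects, and a model’s reported score is simply its accuracy. A minimal sketch of that scoring step (using made-up answers, not Inflection’s data):

```python
# Hypothetical sketch of MMLU-style scoring: each question is
# multiple-choice, and the reported metric is plain accuracy
# (fraction of questions where the model picked the gold choice).

def mmlu_accuracy(predictions, gold_answers):
    """Return the fraction of predictions matching the gold choices."""
    correct = sum(p == g for p, g in zip(predictions, gold_answers))
    return correct / len(gold_answers)

# Toy example with choices labelled A-D (illustrative values only):
preds = ["B", "C", "A", "D"]
gold = ["B", "C", "B", "D"]
print(mmlu_accuracy(preds, gold))  # 0.75
```

Published leaderboard numbers are typically this accuracy averaged over MMLU’s subject categories.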

A model that is best in its “compute class”

The memo also shows, however, that OpenAI’s GPT-4 (the model behind ChatGPT) and Google’s PaLM-2 both outperform Inflection-1. Inflection claims it will soon release another technical memo detailing one of its AI models “in the same compute class” as PaLM-2 and GPT-4.

It should be noted that the concept of assigning compute classes to artificial intelligence models is new and has not yet been adopted by the AI community as a whole. In other words, some large language models are simply larger than others, and comparisons across classes should be read with that in mind.