IBM customers can get their hands on Meta’s LLaMA 2 model

IBM is expanding its enterprise AI platform watsonx with help from Meta. The LLaMA 2 LLM will let customers train AI without having to build their own model from scratch. For now, access is limited to a select group of clients, with a broader release to follow. This should make deploying generative AI considerably more attainable.

Meta's approach to AI has been turbulent this year. In February, it made an LLM called LLaMA available to researchers, only for the model to be leaked on 4chan shortly afterwards. Its largest version had 65 billion parameters, with smaller variants available as well.

The development of generative AI keeps accelerating, and LLaMA 2, with up to 70 billion parameters, has already arrived. Meta has laid out a somewhat clearer plan for this refined version and is now once again partnering with IBM.

AI at your fingertips

LLaMA 2 is a so-called "foundation model," which, unlike a finished product such as GPT-4, explicitly requires additional training. An organization can use IBM's watsonx.ai studio to build on a model like LLaMA 2. The most obvious route is to compile its own dataset and fine-tune the model on it. The applications vary by company, and the two parties deliberately leave that open.
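
To make that concrete, the sketch below shows roughly what "training a foundation model on your own data" can look like. It uses the open-source Hugging Face transformers, datasets and PEFT libraries rather than IBM's watsonx.ai tooling, and the model name, dataset file and hyperparameters are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: fine-tuning a LLaMA 2 foundation model on a
# company's own dataset with Hugging Face transformers + PEFT (LoRA).
# This is not the watsonx.ai tooling; model name, dataset path and
# hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"          # gated model, requires Meta/HF access
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token   # LLaMA defines no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with low-rank adapters so only a small set
# of extra weights is trained on the company data.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Hypothetical in-house corpus: a JSON Lines file with a "text" column.
data = load_dataset("json", data_files="company_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

def collate(batch):
    # Pad to the longest sequence in the batch and reuse the inputs as
    # labels (standard causal language-modelling setup).
    out = tokenizer.pad(batch, return_tensors="pt")
    out["labels"] = out["input_ids"].clone()
    return out

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-company", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    data_collator=collate,
).train()
```

The point of the sketch is the division of labour it illustrates: the foundation model arrives pre-trained, and the organization only supplies its own data and a relatively light training run on top.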

A key issue surrounding generative AI is preventing and containing unwanted output. To that end, IBM offers an option for LLaMA 2 that adds "guardrails," blocking coarse language before it ever reaches the user.
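
IBM has not published how these guardrails work internally. The snippet below is only a minimal sketch of the general idea: screening generated text against a blocklist before returning it, with a placeholder word list and a hypothetical generate callable.

```python
# Minimal sketch of the "guardrail" idea: check model output against a
# blocklist before it reaches the user. The word list, refusal message
# and generate callable are illustrative assumptions, not IBM's system.
import re

BLOCKLIST = {"damn", "hell"}   # placeholder terms
REFUSAL = "[Response withheld: the generated text violated content rules.]"

def apply_guardrail(text: str) -> str:
    """Return the model output unchanged, or a refusal if it trips the filter."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return REFUSAL if tokens & BLOCKLIST else text

def guarded_generate(model_generate, prompt: str) -> str:
    # model_generate is any callable that maps a prompt to raw model text.
    return apply_guardrail(model_generate(prompt))
```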

watsonx.ai will be expanded further with a studio for AI tuning and IBM's own generative AI models. That lets companies see which model works best for their specific application.
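
In practice, "seeing which model works best" usually comes down to running each candidate over a task-specific test set and comparing the scores. The helper below sketches that idea in plain Python; the candidate callables and the scoring function are placeholders, not watsonx APIs.

```python
# Illustrative model comparison: evaluate several candidate models on a
# task-specific test set and keep whichever scores best on average.
from typing import Callable, Dict, List, Tuple

def pick_best_model(candidates: Dict[str, Callable[[str], str]],
                    test_set: List[Tuple[str, str]],
                    score: Callable[[str, str], float]) -> str:
    """Return the name of the candidate with the highest average score."""
    def avg(generate: Callable[[str], str]) -> float:
        return sum(score(generate(prompt), expected)
                   for prompt, expected in test_set) / len(test_set)
    return max(candidates, key=lambda name: avg(candidates[name]))
```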

No duplication of effort

Bundling AI in this way may not be unique to IBM and Meta, but it does differ from the API approach that OpenAI takes. Parties like Salesforce, for example, are building GPT-based solutions, and we still have our doubts about whether such AI applications can be kept on the rails. After all, it is difficult to get good insight into how an AI model operates through an external application, and OpenAI is rather opaque about those details itself.

Tip: Finally, a marketing assistant that gets the job done

Examples like these show why organizations may want to take a different path, one that keeps people more firmly in control. AI models that can be customized and trained in a transparent way will have an edge for many organizations. It is up to parties such as IBM and Meta to make that advantage explicit, as with this extension to watsonx.ai. After all, offering foundation models avoids duplication of effort, while companies can still work on the models and the training themselves.