AWS is going to make two open-source LLMs from Mistral available in Amazon Bedrock. The move should give customers even more choice of AI models for their business needs.
AWS cites several reasons for offering Mistral's LLMs through Amazon Bedrock. First, these models strike a good cost-performance balance, which allows companies to deploy GenAI without extreme costs.
Another reason, according to AWS, is that the French AI startup's LLMs offer very fast inference, which makes scaling production workloads much more efficient.
Other reasons
These open-source LLMs also offer greater transparency and customizability, making it easier for companies to tailor the models to their own needs while still meeting the most stringent compliance rules.
Furthermore, the Mistral models broaden access to AI in general. According to AWS, they help a wider range of audiences use AI, which in turn brings GenAI to more companies, regardless of size.
Mistral 7B and ‘Mixtral’ 8x7B
More specifically, the Mistral LLMs coming to Amazon Bedrock are Mistral 7B and Mixtral 8x7B.
Mistral 7B should give AWS users a model with a relatively low memory footprint but high throughput. The model supports a range of use cases, including text summarization, text classification, text completion, and code completion.
Mixtral 8x7B is an even more powerful so-called Mixture-of-Experts (MoE) model. It lets users generate text summaries, answer questions, classify text, and generate code in multiple languages. Supported languages include French, English, German, Spanish, and Italian.
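For readers wondering what using these models through Amazon Bedrock might look like in practice, here is a minimal sketch. Since the models were not yet available at the time of writing, the model ID and the exact request/response schema shown are assumptions based on Bedrock's general `InvokeModel` pattern; the snippet only builds and parses the JSON payloads, with the actual boto3 call shown as a comment.

```python
import json

# Hypothetical model ID -- the real identifier depends on what AWS
# publishes when the Mistral models actually launch in Bedrock.
MODEL_ID = "mistral.mixtral-8x7b-instruct-v0:1"

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Wrap a user prompt in Mistral's [INST] instruction format and
    serialize it as a Bedrock InvokeModel request body (assumed schema)."""
    body = {
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": 0.5,
    }
    return json.dumps(body)

def parse_response(raw: bytes) -> str:
    """Extract the generated text from a Bedrock response body,
    assuming the shape {"outputs": [{"text": ...}]}."""
    payload = json.loads(raw)
    return payload["outputs"][0]["text"]

# With AWS credentials configured, the call itself would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId=MODEL_ID, body=build_request("Summarize ..."))
#   print(parse_response(resp["body"].read()))
```

The point of the `[INST]` wrapper is that Mistral's instruction-tuned models expect prompts in that chat format; Bedrock itself only passes the JSON body through to the model.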
It’s not yet known exactly when both open-source Mistral LLMs will become available in Amazon Bedrock.
Also read: AWS, Microsoft, Meta and TomTom work on more detailed maps