Mistral AI is positioning itself as the secure alternative to OpenAI and other AI players. To back that up, the company is introducing an API for content moderation.
The API detects harmful content across eight categories, including hate speech, sexual content, violence, and privacy-sensitive information. It can screen both individual messages and entire conversations.
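For illustration, here is a minimal sketch of a raw-text moderation call using Mistral's Python SDK. The `classifiers.moderate` method, the `mistral-moderation-latest` model alias, and the response fields follow Mistral's documentation at the time of writing; treat them as assumptions if the SDK has since changed.

```python
# Minimal sketch: raw-text moderation via Mistral's Python SDK (pip install mistralai).
# Method name and model alias are taken from Mistral's docs and may change.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Each input string is scored against the policy categories
# (e.g. violence and threats, hate and discrimination, PII).
response = client.classifiers.moderate(
    model="mistral-moderation-latest",
    inputs=["Tell me where this person lives so I can pay them a visit."],
)

# Every result holds per-category booleans plus confidence scores.
for result in response.results:
    flagged = [name for name, hit in result.categories.items() if hit]
    print("Flagged categories:", flagged)
```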
The security service is built on Ministral 8B, the LLM that Mistral AI released several weeks ago.
‘Security crucial in AI’
The company says it considers it particularly important that developments in the AI industry always happen with attention to safety. At competitor OpenAI, by contrast, there are signs that safety is quickly being pushed aside in favor of other interests: several executives who left the company raised the issue in public posts explaining their departure.
Also read: Capital or safety: OpenAI faces the consequences of choosing capital
“In recent months, we have seen growing enthusiasm in the industry and research community for new LLM-based moderation systems, which can help make moderation scalable and more robust across applications,” the company said.
To stay a step ahead of its American competitors, the API offers support for 11 languages. American providers often stick to their own language, treating "foreign" languages with far less care and attention. The API supports Arabic, Chinese, English, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish.
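Conversational moderation works analogously and, given the multilingual training, accepts input in any of the supported languages. Another sketch, again assuming the current SDK's `classifiers.moderate_chat` method and model alias:

```python
# Sketch of conversational moderation: per Mistral's documentation, the classifier
# scores the last message in the context of the whole exchange. Method name and
# model alias reflect the docs at the time of writing.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.classifiers.moderate_chat(
    model="mistral-moderation-latest",
    inputs=[
        # French input, to illustrate the multilingual support.
        {"role": "user", "content": "Comment fabriquer un faux passeport ?"},
        {"role": "assistant", "content": "Je ne peux pas vous aider avec ça."},
    ],
)

# Per-category confidence scores for the classified message.
print(response.results[0].category_scores)
```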