
Microsoft comes up with toolkit for conversational AI

On Thursday, Microsoft introduced Icecaps, a toolkit for conversational AI, the subfield of natural language processing concerned with producing realistic conversations. Icecaps is an acronym for Intelligent Conversation Engine: Code and Pre-trained Systems.

The toolkit uses multitask learning to give an AI system multiple personalities. In this way, an AI can take on different ‘personalities’ depending on the interlocutor, which means conversations can be adapted to the parties involved and to specific scenarios. “With a design that emphasizes flexibility, modularity and ease of use, Icecaps enables users to build customized neural conversation systems that produce personalized, diverse and informed conversations,” reports Microsoft on the project page.
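The multitask idea can be sketched as follows. This is a minimal, hypothetical illustration, not the Icecaps API: a shared encoding step is reused across all personas, while each persona supplies its own lightweight output stage, so the same input yields persona-dependent responses.

```python
# Hypothetical sketch of persona conditioning via shared components.
# Names and logic are illustrative only -- not the Icecaps API.

def shared_encode(text):
    """Stand-in for a shared neural encoder: here, just lowercased tokens."""
    return text.lower().split()

# Each persona gets its own "head" on top of the shared encoder.
PERSONA_HEADS = {
    "formal": lambda tokens: " ".join(tokens).capitalize() + ".",
    "casual": lambda tokens: " ".join(tokens) + " :)",
}

def respond(text, persona):
    tokens = shared_encode(text)           # shared across all personas
    return PERSONA_HEADS[persona](tokens)  # persona-specific output stage

print(respond("Hello There", "formal"))  # -> Hello there.
print(respond("Hello There", "casual"))  # -> hello there :)
```

In a real multitask setup the shared encoder and the per-persona parameters would be trained jointly, so most of the model is amortized across personas.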

The toolkit offers a number of features to help users create conversational AI systems, all designed to use data as efficiently as possible. For example, models are conceptualized as chains of components: data flows through such a chain to perform complex tasks. Tools for creating personalities and functions for converting text into binary data have also been added.

“Several of these tools were made possible by recent work done here at Microsoft Research, regarding personalization, mutual information-based decoding, knowledge grounding, and an approach to enforcing more structure in shared properties to stimulate a more diverse and relevant response,” explains the company in a blog post about the new AI tool.

Pre-trained models to follow at a later date

A number of other features will become available in the coming months, including pre-trained models for developers to build on. First up are stochastic answer networks (SAN) and personalized transformers. “We had hoped to include these systems in the launch of Icecaps. However, given that these systems can produce malicious responses in some contexts, we have decided to investigate improved content filtering techniques before releasing these models to the public,” according to the Icecaps GitHub page.