
Microsoft announces Tutel. The open-source library is available immediately for developing AI models and applications with a ‘mixture of experts’ architecture.

AI models and residential buildings have something in common: both start with architecture. ‘Mixture of experts’ (MoE) is an architecture for AI models: a set of design considerations for developing self-learning, predictive software and applications.

An MoE approach is characterized by the presence of multiple specialist models, also known as experts. An expert is called upon only when a task requires its specialism. Say MoE is used in an AI application for text processing: a grammar expert would not be asked to suggest a synonym for a word; that is what the vocabulary expert is for.
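To make the idea concrete, here is a minimal sketch of MoE routing in PyTorch. It is illustrative only and does not use Tutel’s API: a small gating network scores each token, and only the expert selected for that token does any work. All names (TinyMoE, num_experts and so on) are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoE(nn.Module):
    """Top-1 mixture-of-experts layer: each token is handled by one expert."""

    def __init__(self, model_dim: int, num_experts: int):
        super().__init__()
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(model_dim, 4 * model_dim),
                nn.ReLU(),
                nn.Linear(4 * model_dim, model_dim),
            )
            for _ in range(num_experts)
        )
        # The gate scores every token against every expert.
        self.gate = nn.Linear(model_dim, num_experts)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (num_tokens, model_dim)
        scores = F.softmax(self.gate(tokens), dim=-1)
        top_score, top_expert = scores.max(dim=-1)  # pick one expert per token
        output = torch.zeros_like(tokens)
        for expert_id, expert in enumerate(self.experts):
            chosen = top_expert == expert_id
            if chosen.any():  # only the selected experts do any computation
                output[chosen] = top_score[chosen].unsqueeze(-1) * expert(tokens[chosen])
        return output


moe = TinyMoE(model_dim=64, num_experts=4)
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

In real MoE systems the experts are spread across many GPUs and the routing is more sophisticated, but the principle is the same: for a given token, only the selected expert does the work.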

Microsoft’s interest in MoE

Microsoft is particularly interested in MoE because the approach uses hardware efficiently. Only the experts whose specialisms a task requires consume computing power; the rest of the model waits quietly for its turn, which promotes efficiency.

Microsoft underlines its interest with the launch of Tutel, an open-source library for developing MoE models. According to Microsoft, Tutel lets developers speed up MoE models and use hardware more efficiently. The latter applies in particular to MoE models running on Microsoft’s Azure NDm A100 v4 VMs, for which Tutel was designed.

A concise interface should make it easy to integrate Tutel into existing MoE solutions, as exemplified by its recent integration with fairseq, a Facebook toolkit for training AI models.