
Amazon is giving its voice assistant Alexa significantly more functionality by integrating it with generative LLMs. In doing so, the company hopes to attract developers to build new features.

With the integration of generative LLMs, Amazon wants to encourage developers to build more innovative user experiences for the voice assistant. Above all, the work should enable more natural conversations with it.

New toolset

For this purpose, Amazon is offering developers a new AI toolset. It includes real-time data access and functionality for making restaurant reservations or summarizing important news items. The toolset can be deployed without complex code and requires no experience in training specific interaction models.
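For context, building a skill previously meant authoring an interaction model by hand: every intent had to be defined with sample utterances for Alexa to train on. A minimal example of that older format, shown here as a TypeScript constant mirroring the classic interaction model JSON schema:

```typescript
// Classic Alexa interaction model: intents plus hand-authored
// sample utterances. It is this kind of up-front training data
// that the new LLM-based toolset is meant to make unnecessary.
const interactionModel = {
  interactionModel: {
    languageModel: {
      invocationName: "table booker",
      intents: [
        {
          name: "ReserveTableIntent",
          slots: [
            { name: "partySize", type: "AMAZON.NUMBER" },
            { name: "time", type: "AMAZON.TIME" },
          ],
          samples: [
            "book a table for {partySize} at {time}",
            "reserve a table for {partySize} people",
            "get me a table at {time}",
          ],
        },
      ],
    },
  },
};
```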

In addition, the new LLM-based tools should allow developers to connect their content and APIs to Alexa's own LLMs or to LLMs of their choice. This, in turn, should help make conversations on Alexa-enabled devices more natural.

Operation

More specifically, developers provide the key components during development: the skill manifest, API specifications, content sources, and plain-language descriptions. At run time, Alexa then automatically identifies the appropriate provider, orchestrates the API calls, and retrieves content based on user context, previous conversation history, and event timing.
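As a rough illustration, the API specification a developer supplies could pair an endpoint with plain-language descriptions the model reasons over. The structure and names below are hypothetical, sketched in TypeScript; Amazon has not published this exact format:

```typescript
// Hypothetical sketch of a developer-supplied API specification.
// Field names and structure are illustrative, not the actual
// Alexa format.
interface ApiSpec {
  name: string;        // identifier the LLM can reference
  description: string; // plain-language description the model reasons over
  parameters: Record<string, { type: string; description: string }>;
  endpoint: string;    // where Alexa routes the orchestrated call
}

const reserveTable: ApiSpec = {
  name: "reserveTable",
  description:
    "Book a restaurant table for a given party size, date, and time.",
  parameters: {
    restaurant: { type: "string", description: "Restaurant name" },
    partySize: { type: "number", description: "Number of guests" },
    dateTime: { type: "string", description: "ISO 8601 date and time" },
  },
  endpoint: "https://api.example.com/reservations",
};
```

On this reading, the LLM would match a request such as "book a table for four at Luigi's tomorrow at seven" to the specification, fill in the parameters from the conversation, and let Alexa orchestrate the actual API call.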

Also read: Amazon IoT security below par, but it is far from alone