
Google adds agent-driven workflows to Opal

Google is adding an agent step to Opal, making it possible to design workflows not as fixed chains of model calls but as dynamic, agent-driven processes.

With this expansion, Opal is shifting from a low-code orchestration tool to a platform in which an AI agent independently determines which actions, tools, and models are needed to achieve a goal.

According to TechCrunch, with this step, Google is explicitly positioning Opal as a so-called vibe-coding tool, where users can build mini-applications in natural language that not only make plans but also perform tasks. The new agent runs on the Gemini 3 Flash model and automatically selects the tools needed to complete a task.

Until now, Opal workflows mainly consisted of predefined steps. Developers had to explicitly specify which model was called when and what input was required. The agent step breaks that pattern. Instead of selecting a model, the user defines the goal, after which the agent itself determines the execution path and plans the next steps.
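The difference between the two models of execution can be sketched in code. This is an illustrative sketch only, not Opal's actual API: all names here (`call_model`, the planner prompt, the `TOOLS` registry) are assumptions standing in for hosted models and tools.

```python
# Illustrative sketch: a predefined chain versus an agent step.
# call_model is a stub standing in for a hosted model (e.g. Gemini 3 Flash);
# every name and the plan format are assumptions, not Opal's real interface.

def call_model(role: str, prompt: str) -> str:
    """Stub model call. A real planner would derive a tool plan from the goal."""
    if role == "planner":
        return "search, draft"
    return f"[{role} output for: {prompt}]"

def fixed_workflow(text: str) -> str:
    """Predefined chain: which model runs, and in what order, is hard-coded."""
    summary = call_model("summarizer", text)
    return call_model("translator", summary)

def agent_workflow(goal: str, tools: dict) -> list:
    """Agent step: the user states a goal; the model plans which tools to run."""
    plan = call_model("planner", f"Which tools achieve: {goal}?")
    return [tools[name.strip()](goal)
            for name in plan.split(",") if name.strip() in tools]

# Hypothetical tool registry the agent can draw from.
TOOLS = {
    "search": lambda g: f"search results for '{g}'",
    "draft":  lambda g: f"draft based on '{g}'",
}

print(fixed_workflow("quarterly report"))
print(agent_workflow("write a market summary", TOOLS))
```

The point of the contrast: in the first function the execution path is fixed at design time, while in the second it is decided at run time by the planning model.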

A technically relevant detail that TechCrunch highlights is that the agent can also use memory across multiple sessions by connecting to existing Google services, such as Google Sheets. This allows an application to maintain, for example, a running list or user context without building explicit state logic for it. That makes Opal more suitable for business scenarios where continuity and context matter, such as internal tooling or simple process automation.
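The pattern behind this is a spreadsheet acting as the agent's persistent state. The sketch below uses a local CSV file as a stand-in for Google Sheets, and all function names are assumptions for illustration, not part of Opal:

```python
# Sketch of cross-session agent memory backed by a spreadsheet-like store.
# A local CSV file stands in for Google Sheets; remember/recall are
# hypothetical helper names, not Opal or Google Sheets API calls.
import csv
import os

STATE_FILE = "agent_state.csv"

# Start with a clean store so the demo below is reproducible.
if os.path.exists(STATE_FILE):
    os.remove(STATE_FILE)

def remember(key: str, value: str) -> None:
    """Append a fact; the sheet itself is the agent's persistent state."""
    with open(STATE_FILE, "a", newline="") as f:
        csv.writer(f).writerow([key, value])

def recall(key: str) -> list:
    """Read back every value stored under a key, across sessions."""
    if not os.path.exists(STATE_FILE):
        return []
    with open(STATE_FILE, newline="") as f:
        return [v for k, v in csv.reader(f) if k == key]

# Session 1: the agent records progress.
remember("todo", "send invoices")
# Session 2 (a later run): the agent picks up where it left off.
print(recall("todo"))
```

Because the state lives in an external sheet rather than in the workflow itself, a new session can resume the list without any state-management code in the app.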

The agents in Opal are also natively interactive. When information is missing or additional choices are needed, the agent asks the user follow-up questions before continuing. This lowers the barrier for non-technical users, but it also means that workflows are less deterministic than those of traditional automation solutions. For IT departments, this requires clear agreements on control, logging, and governance.
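This interactive behavior boils down to a human-in-the-loop check: before acting, the agent verifies that it has everything it needs and asks for whatever is missing. A minimal sketch of that loop follows; the structure and field names are assumptions for illustration, since Opal's internals are not public.

```python
# Minimal human-in-the-loop sketch: the agent asks follow-up questions
# for missing inputs before continuing. Field names and the task shape
# are illustrative assumptions, not Opal's actual data model.

def run_agent(task: dict, ask_user) -> str:
    """Fill in missing required fields via follow-up questions, then act."""
    required = ["recipient", "deadline"]
    for field in required:
        if field not in task:
            # The agent pauses and asks the user instead of guessing.
            task[field] = ask_user(f"What is the {field} for this task?")
    return f"Scheduled '{task['name']}' for {task['recipient']} by {task['deadline']}"

# A scripted stand-in for a human answering the two follow-up questions.
answers = iter(["finance team", "Friday"])
result = run_agent({"name": "send report"}, lambda question: next(answers))
print(result)
```

The non-determinism the article mentions is visible here: the execution path depends on which fields the user supplied up front, which is exactly why logging and governance agreements matter.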

Global rollout and integration in Gemini

The broader product context is also relevant. Opal was introduced in July 2025 for users in the United States and has since been rolled out gradually worldwide. In December, the tool was integrated into the Gemini web app, allowing users to build their own applications without code using a visual editor. This underscores that Opal is no longer just an experimental Labs tool, but is becoming part of Google’s broader AI strategy.

TechCrunch explicitly places this development in a broader market trend. Besides Google, other providers are also working on platforms that allow applications to be built using natural language. This puts Opal in a competitive landscape where speed, accessibility, and integration with existing ecosystems are decisive factors.

For business IT teams, this means that Opal could be an interesting experimental platform for AI-driven automation and light application development. At the same time, the key question remains where agentic behavior actually adds value and where traditional, explicitly defined workflows are still preferred. The addition of autonomous agents increases flexibility, but also requires renewed attention to manageability and predictability within enterprise environments.