In recent weeks, Google has unexpectedly suspended accounts of customers who used the Antigravity development environment and Gemini services in combination with external agent tools such as OpenClaw.
According to The Register, this affects not only users of free accounts but also paying customers with subscriptions of up to $250 per month. The measure has caused unrest among developers, who claim they remained within their paid quotas and received no clear warning.
According to Google, the problem arose because Antigravity and related AI services were increasingly being used via autonomous third-party agents. This type of use led to a much heavier and more continuous load on the underlying computing infrastructure than the services were designed or priced for. The company claims that this use affected the quality of service for other users. Intervention was necessary to prevent further pressure on the systems.
Google says this was not normal or intended use. The company describes it as abuse because Antigravity was, in effect, used as a conduit for other applications, with some groups of users collectively consuming so much computing power that the available capacity was exceeded. To respond quickly, Google blocked these accounts' access to Antigravity while leaving their other Google services available.
Dissatisfaction with unclear terms of use
Developers, however, have been highly critical of this reasoning. They point out that the terms of use do not explicitly prohibit combining Antigravity or Gemini with external wrappers or harnesses, so many customers assumed these integrations were permitted. In their view, it is unreasonable to block paying users without prior warning, especially when they have stayed within the formal limits of their subscription. Several voices from the community argue that targeted technical blocks or error messages would have been more appropriate than outright account suspensions.
The situation parallels earlier interventions by Anthropic, which took measures to prevent subscriptions from being used via workarounds at a lower cost than the official API. In both cases, the core issue appears to be that AI providers are struggling to police the boundary between consumer-oriented use and large-scale, automated deployment by developers.
The issue exposes a broader tension in the AI market. Companies such as Google offer relatively inexpensive access to powerful models to gain market share, but run into limits when those models are used at scale and continuously. This raises questions about the sustainability of current pricing models and how transparent suppliers should be about what is and is not allowed. For developers, it underscores the importance of clear agreements, as dependence on AI platforms is growing faster than clarity about their rules.