Coding is a crucial battleground for GenAI development. However, the race for AI programming assistance leadership remains uncertain. Google Cloud aims to secure its position with Gemini Code Assist Enterprise. Here’s what it offers.
Gemini Code Assist is already a familiar tool among developers. Previously branded as Duet AI for Developers, it has long been discussed as an alternative to GitHub Copilot, Anthropic’s Claude model, and ChatGPT. However, general feedback hasn’t been particularly positive. For instance, Codeium (which itself offers a competing product) concluded that Code Assist fails to pass even “basic sniff tests.” Clearly, a high-quality AI programming tool needs to offer more.
Code Assist, but for a specific need
Given the severe shortage of experienced developers, companies genuinely need AI assistance. Google Cloud hopes to address this challenge with Gemini Code Assist Enterprise, which it touts as “the next leap in app development capabilities.” It extends beyond simple AI assistance within an IDE. When granted extensive access to your codebase, the new tool can provide targeted suggestions. Unlike generic recommendations, Code Assist Enterprise adapts to the local context, leveraging the extensive context window of the underlying Gemini 1.5 Pro model.
Tip: Gemini 1.5 Pro unravels complex vulnerabilities
This LLM comes with a standard context window of 128,000 tokens, which is already substantial. In June, Google introduced a 1.5 Pro variant with a 2-million-token window. That expanded capacity lets the model take far more of a codebase into account at once, improving its ability to produce, suggest, or correct valid code. This capability, called code customization, takes your company’s best practices and internal libraries into account.
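To get a feel for what those window sizes mean in practice, here is a minimal back-of-the-envelope sketch for estimating whether a codebase fits in a given context window. The ~4-characters-per-token ratio is a common rule of thumb, not Gemini’s actual tokenizer, and the helper names are illustrative assumptions:

```python
import os

# Assumption: roughly 4 characters per token on average. The real count
# depends on the model's tokenizer; treat this as a ballpark only.
CHARS_PER_TOKEN = 4

STANDARD_WINDOW = 128_000    # Gemini 1.5 Pro standard context window
EXTENDED_WINDOW = 2_000_000  # the 2-million-token variant

def estimate_tokens(text: str) -> int:
    """Very rough token estimate for a piece of source code."""
    return len(text) // CHARS_PER_TOKEN

def codebase_token_estimate(root: str, extensions=(".py",)) -> int:
    """Walk a source tree and sum rough token estimates for matching files."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total += estimate_tokens(f.read())
    return total
```

A repository estimated at, say, 500,000 tokens would overflow the standard 128,000-token window but sit comfortably inside the 2-million-token variant, which is exactly the gap code customization relies on.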
Simple prompts
In the Gemini Code Assist Enterprise introduction video, Google emphasizes the tool’s straightforward interaction through natural language prompts. Need telemetry for an app? Simply ask Gemini. It fulfills the same use case as low-code platforms, but within Google’s own ecosystem.
Gartner forecasts that 90 percent of enterprise software engineers will use AI-assisted programming by 2028. This represents a significant leap forward, considering only 14 percent had adopted it by early 2024. Google maintains that this adoption will be essential to manage increasingly complex codebases.
However, it appears that simple prompts aren’t necessarily creating more programmers, but rather enabling existing programmers to work more efficiently with AI assistance. The democratization of coding remains an unfulfilled promise for now.
Additionally, Gemini Code Assist Enterprise offers broad language support. While examples primarily feature Python, it supports everything from Ruby to Rust and Scala.
Is Gemini really the right option?
The question remains: Will Google provide developers with the right tools? After all, Gemini 1.5 Pro isn’t necessarily the top performer in benchmarks. According to Aider’s LLM Leaderboards, which test models’ ability to edit existing code, Gemini ranks behind 12 other models. OpenAI’s relatively new o1-preview leads the pack, while Claude, DeepSeek Coder, and the Llama-derived Dracarys2-72B-Instruct tie with Gemini 1.5 Pro.
In short, Google had work to do to make Gemini truly optimal for programmers. It claims to have done so by focusing on practical application within organizations. After all, what matters isn’t which LLM performs best in general, but how well it functions within its specific environment.
It isn’t particularly cheap: pricing starts at $45 per user per month. However, there’s a promotional offer until March 31, 2025, where annual subscribers can access the service for $19 per user per month.
Also read: GitHub Copilot Chat makes AI programming assistance even more nimble