Google has given its set of intelligent code assist tools a major update. Gemini Code Assist now runs on Google's Gemini 2.0 LLM and gains connections to external code repositories and cloud-based databases.
Google’s Code Assist tools help developers increase productivity in integrated development environments such as JetBrains IDEs and Visual Studio Code (VS Code).
The tech giant has now expanded Code Assist’s functionality further to boost this productivity. According to Google, developers can thereby add more context to their work without interrupting their workflows.
Code Assist is now based on Gemini 2.0, the most advanced Google LLM currently available. This brings a larger context window, allowing the tool to understand larger codebases than was previously possible.
Feature Ecosystem
Google is launching the updated Gemini Code Assist only in private preview for now. It additionally brings features that help move secure and scalable applications into production. To that end, the tech giant is working closely with a third-party ecosystem to provide end users with the best experience. As a result, Gemini Code Assist already connects to data sources and code repositories such as GitHub, GitLab, Google Docs, Atlassian, Sentry.io and Snyk. These connections allow users of these services to invoke Gemini Code Assist’s help directly from the IDEs they are working in.
At the same time, real-time data and other information residing in these sources can now be retrieved directly within the coding tools.
These connections also extend to cloud-based databases such as MongoDB, Elastic, DataStax, Aiven, Neo4j and SingleStore, among others. Observability partners in this ecosystem include Dynatrace and New Relic; security partners are Sonar and Black Duck.
Also read: Gemini Code Assist Enterprise: will Google let everyone code?