At Google Cloud Next this year, AI once again takes centre stage. Google Cloud is rolling out updates to its existing LLM offerings within Vertex AI: Gemini 1.5 Pro, for example, is in public preview effective immediately. On top of that, it is introducing Imagen 2.0 for image generation, and it has unveiled CodeGemma. Beyond that, there's even more AI news to report.
Google introduced us to Bard last year. Bard, Google believed at the time, was going to revolutionize AI. Mere months later, the introduction of Gemini dwarfed Bard's achievements. Today, we've already arrived at the (preview) release of Gemini 1.5 Pro. It's an improved version of Gemini, and its most powerful feature is the sheer amount of input a single prompt can contain: Gemini 1.5 Pro has an input context window of 1 million tokens, more than any other model available at the moment.
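For those who want to try it, a call through the Vertex AI Python SDK looks roughly like the sketch below. The project ID, file name and exact preview model identifier are assumptions that may differ per account.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Assumed placeholders: replace with your own project and region.
vertexai.init(project="your-project-id", location="us-central1")

# Assumed preview identifier; check which Gemini 1.5 Pro version
# your project has access to.
model = GenerativeModel("gemini-1.5-pro-preview-0409")

# The 1-million-token context window means an entire codebase or a
# stack of reports can go into a single prompt.
with open("quarterly_reports.txt", encoding="utf-8") as f:
    documents = f.read()

response = model.generate_content(
    [documents, "Summarize the key findings across all of these reports."]
)
print(response.text)
```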
Tip: Google introduces its first proprietary ARM CPU, called Axion
Imagen 2.0
With Imagen, it was already possible to generate images. Imagen 2.0 does this considerably better, as one would expect, but it can now also generate 'live' images (short animated clips) from text prompts, with a playback time of 4 seconds. It remains to be seen how organizations can use this effectively. What we find much more important about Imagen 2.0 is the ability to edit images. Users can adjust anything both within the frame and beyond its borders, for example by easily removing or adding elements. It is also possible to add a digital watermark. Organizations such as Shutterstock and Rakuten are already working with Imagen 2.0.
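For developers, Imagen is exposed through Vertex AI's vision models. The snippet below is a minimal sketch of generating an image and then editing it with a mask; the model version string and file names are assumptions.

```python
import vertexai
from vertexai.preview.vision_models import Image, ImageGenerationModel

vertexai.init(project="your-project-id", location="us-central1")

# Assumed model version; check which Imagen version your project can use.
model = ImageGenerationModel.from_pretrained("imagegeneration@006")

# Generate a new image from a text prompt.
images = model.generate_images(
    prompt="A product photo of a ceramic coffee mug on a wooden table",
    number_of_images=1,
)
images[0].save("mug.png")

# Edit the result: the mask marks the region to change, the prompt says how.
edited = model.edit_image(
    prompt="Replace the table with a marble countertop",
    base_image=Image.load_from_file("mug.png"),
    mask=Image.load_from_file("mask.png"),
)
edited[0].save("mug_edited.png")
```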
CodeGemma
Within the Gemma family of LLMs, Google is now introducing two versions of CodeGemma: one with 7 billion parameters and one with 2 billion parameters. These models can improve existing programming code, generate complete functions, or finish code you have already started writing. The CodeGemma 7B model runs in the cloud and works through an IDE integration, while the 2B model is small enough to run locally on a laptop.
The CodeGemma models are trained on datasets of more than 500 billion tokens. They are trained specifically to understand code, more so than the English language. As a result, according to Google, the models generate much more accurate code. Naturally, they can handle a variety of programming languages, including Python, JavaScript, Java and more.
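The models are distributed through the usual channels such as Hugging Face. As a rough sketch, running the 2B model locally with the transformers library looks like this (the checkpoint is gated, so it assumes you have accepted Google's license and authenticated):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated checkpoint: requires accepting the Gemma license on Hugging Face.
model_id = "google/codegemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The 2B model is geared toward code completion: give it the start of a
# function and let it finish the body.
prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```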
Vertex AI improvements
Google Cloud also has improvements in store within Vertex AI. For example, it will become easier to manage prompts. The biggest issue with Vertex AI right now is the lack of ease around experimenting with, migrating and monitoring prompts. With Vertex AI Prompt Management, that's a thing of the past: you can now share prompts within a team, with versioning included, so you can always revert to previous versions. In addition, Google offers the ability to use AI to suggest further improvements to your prompt.
Users can also place two similar prompts side by side and compare the results to see which one performs best. In this way, organizations can more easily take steps forward in prompt engineering.
Next, Google introduced so-called Evaluation Tools, which help users determine which combination of prompt and model produces the best results. This involves looking at how faithfully the model follows the prompt, how closely the output still resembles the input, what its quality is, and how much time it takes. That makes it easier to determine which model is the better fit.
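Google has not published these tools at the code level in this announcement, but conceptually the comparison boils down to a loop like the sketch below, which runs two prompt variants against two models and records latency alongside the output. The model identifiers and prompts are assumptions for illustration.

```python
import time

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")

# Assumed model identifiers; substitute whichever models you want to compare.
models = ["gemini-1.0-pro", "gemini-1.5-pro-preview-0409"]
prompts = [
    "Summarize this support ticket in one sentence: {ticket}",
    "You are a support agent. Give a one-sentence summary of: {ticket}",
]
ticket = "Customer cannot log in after the latest password reset."

for model_name in models:
    model = GenerativeModel(model_name)
    for prompt in prompts:
        start = time.perf_counter()
        response = model.generate_content(prompt.format(ticket=ticket))
        elapsed = time.perf_counter() - start
        # Print model, latency and the first part of the answer side by side.
        print(f"{model_name} | {elapsed:.2f}s | {response.text[:80]!r}")
```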
Enterprise Truth: keeping models current with proprietary data
All foundation models are trained on datasets with a cutoff date, which means every model becomes a little older and less current with each passing day. For some applications that's a problem, because organizations want to work with current data. For that, Google is introducing what it calls Enterprise Truth, which lets you ground an existing model in your own data so it still has access to the information you need.
The data you attach to the model is not added to its training set; rather, the model has it available while generating output for your prompts. For example, if you want to generate a daily summary of a trading day on the stock market, it helps to feed the model the day's stock prices. But it can also be something as simple as your own organization's knowledge base.
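Google did not detail Enterprise Truth at the API level, but the underlying pattern is straightforward: fetch the current data at request time and hand it to the model alongside the prompt, without any retraining. A minimal sketch, with a hypothetical stubbed data source:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-pro-preview-0409")  # assumed identifier

def fetch_closing_prices() -> str:
    # Hypothetical stand-in for your own data source: a market data feed,
    # an internal knowledge base, etc. The model never trains on this data;
    # it only sees it inside the prompt.
    return "ACME: 102.40 (+1.2%)\nGLOBEX: 87.15 (-0.8%)"

grounding_data = fetch_closing_prices()
response = model.generate_content(
    f"Today's closing prices:\n{grounding_data}\n\n"
    "Write a three-sentence summary of today's trading day."
)
print(response.text)
```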
The updates to the LLMs and Vertex AI are numerous and significant. Google continues to develop new AI solutions and models at a rapid pace, even while many organizations are still figuring out exactly how to deploy AI.
Also read: Google Cloud offers sovereign cloud for AI in any data center