Google AI Overviews promises to answer searches faster than ever. Unfortunately, the feature has more than once served up a completely wrong (and dangerous) answer.
An “AI Overview” appears directly below Google’s search bar and is clearly marked as an AI feature. It is shown to anyone who is logged in and has enabled it through Search Labs, which does not yet include European users. Judging by the answers the feature currently produces, that exclusion looks like a blessing in disguise.
Dangerous advice
The errors in AI Overview vary wildly. One X user shared a screenshot containing advice for when cheese won’t stick to the rest of the pizza. Besides reasonable suggestions such as adjusting the sauce and letting the pizza cool, Google advised adding 1/8 cup of non-toxic glue to make the sauce more tacky. This mistake may stem from the claim that advertisers use glue to make cheese on pizza look more appealing in commercials, though it is doubtful whether even that is true.
Other AI Overview misses include classifying a python as a mammal (it is a reptile) and a cleaning tip that effectively suggests making mustard gas to clean a washing machine.
Although the usual Google results appear just a short scroll below the AI addition, most users never get past the answer at the top. Nearly half of Google visitors leave the page after seeing only the top results if those are not to their liking. In other words: the top spot in search is a privileged place to be.
Tampering with a successful formula
It is only a matter of time before AI Overview temporarily disappears or suddenly improves significantly. We suspect the former, as an earlier half-baked release of Google’s Gemini image generation also led to a swift withdrawal. That model frequently produced historically inaccurate images, depicting German soldiers during World War II with darker skin tones, for example. It raises the question of why Google thought it wise to put AI Overview live at this stage, knowing it had botched an AI launch before.
Google has not shared accuracy figures for AI Overview with the outside world, so it is uncertain what percentage of AI search results is actually incorrect. Since Google is by far the largest player in search, even a tiny margin of error will produce many a curious result. However, the prompts that trigger these errors are not edge cases in any sense, so it seems that any AI Overview output may contain a serious error. In fact, that risk is inherent to generative AI, which requires a certain unpredictability to generate new text, images and audio.
Google tinkering with its search formula is a big deal in any case, given its near-universal adoption and its reputation for delivering consistently relevant results. Even without the GenAI addition, conventional search results already feature snippets that pull relevant information from prominent sources. These are usually a faithful reflection of the source itself, although they can obviously be incorrect too. Still, they may be preferable to the GenAI addition, which introduces another point of failure.
Not the first time
As mentioned, this is not Google’s first AI error this year. Others have been in the same boat since the AI hype took off in early 2023. For example, Bing Chat (now Microsoft Copilot) was initially rather rude and often wildly inaccurate. This appears to stem from giving the underlying LLMs too much leeway, often described as a “high temperature” for the generated outputs. A lower temperature equates to fewer “creative” answers: by constraining the model somewhat, it produces a more predictable answer that is more likely to be accurate than the alternatives.
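The temperature mechanic described above can be sketched in a few lines. This is a minimal, illustrative example of how temperature scaling works in principle; the function, the logit values and the token scores are invented for illustration and have nothing to do with Google’s or Microsoft’s actual systems.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into sampling probabilities.

    A low temperature sharpens the distribution (the model almost
    always picks its top candidate); a high temperature flattens it,
    giving less likely - more 'creative' - tokens a real chance.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores: the first token is the model's favorite.
logits = [4.0, 2.0, 1.0]

cold = softmax_with_temperature(logits, temperature=0.3)  # predictable
hot = softmax_with_temperature(logits, temperature=2.0)   # "creative"
```

At the low temperature, nearly all probability mass lands on the top-scoring token; at the high temperature, the mass spreads out to the weaker candidates, which is exactly the trade-off between predictable and inventive answers the article describes.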
Also read: ChatGPT now talks in real-time, new GPT-4o model available for free