
Google: ‘Bard is getting better at logic and reasoning’

The tech giant says its AI chatbot is improving at things like math and programming.

This week Google announced that Bard, its AI-powered chatbot, is getting better at logic- and reasoning-based tasks and queries. This must be welcome news, given that Google was late to the generative AI party and its chatbot has been plagued by problems seemingly from the start.

Indeed, in April Bloomberg reported that Google employees were raising the alarm about Bard, claiming Google’s rush to catch up to ChatGPT had “led to ethical lapses”. Some staffers accused Bard of being a “pathological liar”, according to the report.

A new method: “implicit code execution”

On Wednesday, however, Google unveiled the Bard “improvements” in a blog post. Jack Krawczyk, Bard Product Lead, and Amarnag Subramanya, VP of Engineering for Bard, explained just how the chatbot is “getting better”.

“Bard is getting better at mathematical tasks, coding questions and string manipulation”, they write. This is because of a new technique called “implicit code execution”, which helps Bard detect computational prompts and run code in the background. “As a result, it can respond more accurately to mathematical tasks, coding questions and string manipulation prompts”, they claim.

The implicit code execution feature helps Bard do things like calculate interest accumulation, rearrange word spellings and analyze prime numbers and factors.
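To see why handing such tasks to real code helps, here is a minimal, purely illustrative Python sketch of the three kinds of computation named above. It is not Google’s implementation; it simply shows that interest accumulation, string reversal and prime factorisation are trivial for executed code, whereas a model predicting digits token by token can easily slip.

```python
# Illustrative only: computations that are exact when executed as code
# but error-prone when a language model merely predicts the next token.

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Interest accumulation: balance after compounding annually."""
    return principal * (1 + rate) ** years

def reverse_word(word: str) -> str:
    """String manipulation: reverse a word's spelling."""
    return word[::-1]

def prime_factors(n: int) -> list:
    """Return the prime factors of n (with multiplicity)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(compound_interest(1000, 0.05, 10))  # 1628.894..., i.e. about 1628.89
print(reverse_word("Lollipop"))           # popilloL
print(prime_factors(84))                  # [2, 2, 3, 7]
```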

Realizing the limitations of LLMs

The Google staffers observe that large language models (LLMs) are essentially prediction engines: given a prompt, they generate a response by predicting which words are likely to come next. This makes generative AI services “extremely capable on language and creative tasks, but weaker in areas like reasoning and math”.

Implicit code execution addresses that weakness by allowing Bard to generate and execute code, boosting its reasoning and math abilities.

Krawczyk and Subramanya say that, using the new technique, Bard identifies prompts that might benefit from logical code, writes it “under the hood,” executes it and uses the result to generate a more accurate response.
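In other words, the flow resembles a routing step in front of a sandboxed interpreter. The sketch below is a hedged approximation of that generate-execute-respond loop; the helper names (looks_computational, generate_code, run, compose) are hypothetical stand-ins, not Bard’s actual architecture or any Google API.

```python
# Hypothetical outline of an "implicit code execution" loop.
# The objects passed in (llm, sandbox) are assumed stand-ins for
# a language model and a sandboxed code interpreter.

def answer(prompt: str, llm, sandbox) -> str:
    if looks_computational(prompt):              # step 1: detect a computational prompt
        code = llm.generate_code(prompt)         # step 2: write code "under the hood"
        result = sandbox.run(code)               # step 3: execute it
        return llm.compose(prompt, result)       # step 4: answer using the computed result
    return llm.respond(prompt)                   # otherwise, plain text generation

def looks_computational(prompt: str) -> bool:
    """Toy heuristic; a real system would use a learned classifier."""
    keywords = ("calculate", "prime", "reverse", "how many", "interest")
    return any(k in prompt.lower() for k in keywords)
```

The design choice the post describes is that the deterministic computation handles the part LLMs are weak at, while the model still composes the final natural-language answer around the result.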

They claim the new method improves the accuracy of Bard’s responses to computation-based word and math problems by approximately 30%.