
Google doesn’t trust its own chatbot, so why should we?

Google parent company Alphabet is advising its staff to be very careful with chatbots, including Google’s own Bard tool. Even though it controls Bard itself, it does not want employees feeding sensitive information to AI bots.

It doesn’t come across too well: a tech company that doesn’t fully stand behind its own technology. Earlier this month, it already became apparent that Meta’s own staff were reluctant to reach for the company’s VR headsets to work in the Metaverse, Mark Zuckerberg’s vision of moving jobs into virtual spaces. Now Google appears to be reining in the risks of its own tech from the top down. Still, it is important to add some nuance here.

Secure AI

Generative AI is in high demand, and chatbots are the poster child of the movement. After all, it was ChatGPT that showcased the technology’s potential: its mostly human-like responses and innovative applications captured the imagination. Against that OpenAI product, Google entered the fray with Bard. At least, that’s how it’s often characterized. The fact is that Google CEO Sundar Pichai has been openly cautious about Bard’s capabilities. In late March, he told a New York Times podcast that the chatbot was meant for “inspiration”, not anything work-critical. Not exactly a workhorse for professionals, in other words.

Rather, Google’s caution indicates that it is not positioning Bard as a tool for serious work, at least not in the here and now. It is, however, paving the way for that to change further down the line, with improved coding skills and mathematical capabilities.

What Google wants to move toward is the safe handling of AI models. That obviously includes cybersecurity: data entered into a model should not leak to the outside world, for example. That’s why Google recently introduced its Secure AI Framework, which says far more about how the tech giant views AI than the restrictions it places on its own personnel when it comes to Bard.

Copilots versus chatbots

For applications based on technology from competitor OpenAI, Microsoft in particular uses the term “copilot” a lot. These still rely on the same underlying LLMs (large language models) that power ChatGPT, but they have been applied for a specific purpose and with plenty of limitations. GitHub Copilot, for example, or the copilots inside Office 365 applications are not as chatty as an ordinary chatbot. They focus on a particular task and are trained beforehand not to deviate from it. We’ve referred to this before as a “chatbot with a job.”

This is ultimately why it is not surprising that Google is somewhat reluctant to use Bard for its own work. The chatbot is an experiment, and that is explicitly what the company calls the application.

Worrying development

Still, the fact is that Google and OpenAI can urge caution as much as they want: chatbots are being used for work regardless. Even though companies like Samsung and Amazon do not let their staff use an AI assistant while working, Reuters cites research indicating that 43 percent of professionals use ChatGPT for work anyway. They just don’t tell their employer much of the time.

For now, these are still isolated stories: a lawyer citing fabricated facts produced by ChatGPT, or a cybercriminal having phishing emails drafted by the chatbot. However, we will see this more and more often if the power of generative AI is not harnessed correctly. A worrying development, in other words.

The solution? Not only clear regulations and frameworks from government and industry, but also education. The deployment of generative AI is already in full swing and will not be stopped. To keep up, employees need to develop a far greater awareness of security than they have today.

This is not a lost cause. The key is to draw a clear dividing line between recreational or experimental tools such as Bard and ChatGPT on the one hand, and the copilots we see embedded in applications on the other. Consider also the security assistants at CrowdStrike and, more recently, Trend Micro. These tools accelerate the detection of and response to cyber-attacks. In addition, they bring more clarity to an often hard-to-fathom IT environment. They are perhaps the clearest example of what is possible with generative AI when it is given a clear task. Let that be exactly how we turn to the technology in our working lives.