Google changes privacy policy to scrape content from all websites

Google has changed its privacy policy to allow itself to scrape content from any public website. The change has been in effect since July 1.

In the updated privacy policy, Google grants itself the right to train its AI models on any information from the public web. The move may not come as a surprise: the web holds enormous potential as a training set for LaMDA, the model behind the Bard chatbot.

Still, some may perceive the move as an abuse of power. After all, Google has a strong grip on the internet and remains the dominant search engine.

Privacy with a loophole

Google reassures its users in the first paragraph by stating that only publicly available information is used for training: “Google uses information to improve our services and to develop new products, features, and technologies that benefit our users and the public. For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities,” reads the privacy policy.

The policy does, however, contain a small loophole for more sensitive information: “Or, if your business’s information appears on a website, we may index and display it on Google services.” That raises the question of how chatbots will handle users asking for sensitive information about others. After all, requesting such information from a chatbot is far more efficient than scouring every possible website yourself.

Also read: ‘ChatGPT based on illegal sites, private data and piracy’