
Google scientists told to portray AI only in a positive light

The company has created additional rounds of review to police the tone of its employees’ publications

Reuters reports that Google this year moved to tighten control over its scientists’ papers by launching a “sensitive topics” review. In at least three cases, the company requested authors refrain from casting its technology in a negative light, according to the report.

Reuters says this claim is based on internal communications and interviews with researchers involved in the work.

Google’s new review procedure asks that researchers consult with legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal webpages explaining the policy.

Reuters says that Google declined to comment on its report.

Managing “sensitive topics”

The explosion in research and development of AI across the tech industry has prompted authorities in the United States and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate biases or erode privacy.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as disclosure of trade secrets, eight current and former employees said.

Google Senior Vice President Jeff Dean said that Google supports AI ethics scholarship and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome.”

Google in recent years incorporated AI throughout its services, using the technology to interpret complex search queries, decide recommendations on YouTube and autocomplete sentences in Gmail. Its researchers published more than 200 papers in the last year about developing AI responsibly, among more than 1,000 projects in total, Dean said.

An example of tone-setting: AI for foreign language study

A paper this month on AI for understanding a foreign language softened a reference to mistakes made by the Google Translate product, following a request from company reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “review and fix inaccurate translations.”

For a paper published last week, a Google employee described the process as a “long haul,” involving more than 100 email exchanges between researchers and reviewers, according to the internal correspondence.
