Google today released a new AI tool to help track down and remove online imagery depicting child abuse. The Content Safety API uses deep neural networks for image processing, which means fewer human reviewers have to view the material with their own eyes.
Currently, reviewers must sift through thousands of photos manually, work that can be deeply traumatic given the nature of the images. Faster identification of new material means that children who are being sexually abused can be identified more quickly and protected from further abuse, developers Nikola Todorovic and Abhi Chaudhuri write in a blog post.
Available for free
The Content Safety API will be made available free of charge to NGOs and Google's industry partners, allowing them to identify and report this content quickly. According to the developers, the tool represents a considerable improvement in the field, enabling reviewers to find up to seven hundred percent more child abuse material in the same amount of time.
Google’s announcement comes shortly after British Foreign Secretary Jeremy Hunt voiced strong criticism of the company. According to Hunt, Google is willing to participate in censorship in China, yet does not do enough to help remove or tackle child abuse material worldwide.
Seems extraordinary that Google is considering censoring its content to get into China but won’t cooperate with UK, US and other 5 eyes countries in removing child abuse content. They used to be so proud of being values-driven…
— Jeremy Hunt (@Jeremy_Hunt) August 30, 2018
Not the only one
Google is not the only company working on this kind of technology to tackle child abuse. The British police, for example, recently announced that they are also developing an AI to process these horrific images, so that abuse can be detected and dealt with quickly while the people tasked with tracking down the imagery are less likely to be traumatised.