
Google has released a new version of Open Images, the photo dataset that supports Artificial Intelligence (AI) projects. The update adds new ways to label objects and a feature described as ‘localised narratives’.

Open Images makes millions of photos available to data scientists. Objects in the images have already been recognised, and descriptions have been added to them. As a result, data scientists can rely on a large, ready-made database and use the examples as training data for an object-detection AI model.
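As a rough illustration of how such training data can be consumed, the sketch below loads Open Images bounding-box annotations with pandas. The file names and the ‘Person’ class are assumptions based on how the dataset is commonly distributed, not details from Google’s announcement.

```python
import pandas as pd

# Sketch under assumptions: these file names follow the CSVs offered on
# the Open Images download page; adjust the paths to your local copies.
boxes = pd.read_csv("oidv6-train-annotations-bbox.csv")

# The class-descriptions CSV maps machine IDs such as "/m/01g317" to
# human-readable names. If your copy lacks a header row, pass
# names=["LabelName", "DisplayName"] to read_csv.
classes = pd.read_csv("oidv6-class-descriptions.csv")
boxes = boxes.merge(classes, on="LabelName")

# Normalised box coordinates (0-1) for one class, ready to feed into an
# object-detection training pipeline.
person_boxes = boxes[boxes["DisplayName"] == "Person"]
print(person_boxes[["ImageID", "XMin", "XMax", "YMin", "YMax"]].head())
```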

The new version of Open Images offers 23.5 million new ‘image-level’ labels, for which people have verified the description of what is shown in the image. This brings the total number of such labels to 59.9 million. Specific annotations have also been added, including 2.5 million labels describing human actions.

Coherent annotations

In addition, the version comes with what Google calls ‘localised narratives’: a new kind of annotation that should allow models to extract more information from an image.

Google produces the ‘localised narratives’ by having a person select objects and add a description to them. The system tracks the movement of the mouse, so it knows exactly what is being selected. This should create more coherence between annotations. The image below, which Google shared on its blog, shows exactly how this works.
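To make the idea concrete, here is a minimal sketch of what such an annotation could look like: a caption paired with a timestamped mouse trace. The JSON structure and field names are illustrative assumptions modelled on the published Localized Narratives format, not an exact schema.

```python
import json

# Hypothetical localised-narrative annotation: a caption paired with a
# timestamped mouse trace. Field names are illustrative assumptions.
raw = """
{
  "image_id": "0001",
  "caption": "A dog chases a ball across the grass.",
  "traces": [[
    {"x": 0.41, "y": 0.52, "t": 0.0},
    {"x": 0.44, "y": 0.50, "t": 0.1},
    {"x": 0.48, "y": 0.49, "t": 0.2}
  ]]
}
"""
narrative = json.loads(raw)

# Each trace point links a moment in the description to a location in
# the image, which is what ties words to image regions.
for trace in narrative["traces"]:
    for point in trace:
        print(f"t={point['t']:.1f}s -> x={point['x']:.2f}, y={point['y']:.2f}")
```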

Google has created ‘localised narratives’ for about 500,000 Open Images files. Overall, the tech giant hopes that the update will “take a qualitative and quantitative step towards coherent annotations for image classification and object detection”.