Earlier this year, Google researchers investigating the properties of AI agents that employ self-attention bottlenecks had their study accepted to the Genetic and Evolutionary Computation Conference (GECCO 2020). The researchers claim that these agents show an aptitude for solving difficult vision-based tasks.

But that’s not all: they are also better at tackling slight modifications of those tasks, because they are blind to details that could confuse them. The analogous phenomenon in humans is called inattentional blindness, and it causes a person to miss things that are in plain sight.

Inattentional blindness is caused by selective attention, a mechanism thought to allow humans to condense sensory information into a form compact enough to support decision-making.

Inspiring promises

Yann LeCun, a respected computer scientist in the field of AI, thinks that selective attention can inspire the design of AI systems that do a better job of mimicking the elegant and efficient mechanisms found in biological organisms.

According to the Google researchers behind the proposed ‘AttentionAgent,’ the aim is to have the agent devote attention to task-relevant elements and ignore distractions. The way to achieve this is by having the system segment input images into patches and then rely on a self-attention architecture to score the patches and select a small subset.

The chosen patches guide AttentionAgent’s actions, allowing it to stay aware of changes in the input data and to track how important elements change and evolve.
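To make the patch-selection idea concrete, here is a minimal NumPy sketch of how a single self-attention head could score image patches and keep only the most important ones. The function names, patch size, and scoring details are illustrative assumptions, not the researchers’ actual implementation; in the published agent the attention parameters are evolved rather than set by hand.

```python
import numpy as np

# Hypothetical sketch of patch selection via self-attention
# (names, shapes, and hyperparameters are assumptions for illustration).

def extract_patches(image, patch_size=7, stride=4):
    """Slice an H x W x C image into flattened patches and record their centers."""
    h, w, _ = image.shape
    patches, centers = [], []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patch = image[top:top + patch_size, left:left + patch_size]
            patches.append(patch.reshape(-1))
            centers.append((top + patch_size // 2, left + patch_size // 2))
    return np.stack(patches), np.array(centers)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_patches(patches, centers, w_query, w_key, top_k=10):
    """Score patches with one self-attention head and keep the top_k locations."""
    queries = patches @ w_query                                   # (n_patches, d)
    keys = patches @ w_key                                        # (n_patches, d)
    attn = softmax(queries @ keys.T / np.sqrt(w_key.shape[1]))    # (n, n) attention matrix
    importance = attn.sum(axis=0)            # how much attention each patch receives overall
    chosen = np.argsort(importance)[::-1][:top_k]
    return centers[chosen]                   # only the chosen patch locations are passed on

# Usage on a random "frame"; a downstream controller would map these coordinates to an action.
rng = np.random.default_rng(0)
frame = rng.random((96, 96, 3))
patches, centers = extract_patches(frame)
d_in, d_attn = patches.shape[1], 4
w_q = rng.normal(scale=0.1, size=(d_in, d_attn))
w_k = rng.normal(scale=0.1, size=(d_in, d_attn))
print(select_patches(patches, centers, w_q, w_k, top_k=5))
```

Because only the locations of a handful of winning patches reach the controller, everything that never wins the attention vote is simply invisible to the policy, which is the source of the “useful blindness” described above.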

Encouraging experiments

In experiments, the AttentionAgent learned to attend to a range of regions in its input images and survived a level in VizDoom, a digital research environment based on the first-person shooter game Doom.

The results are encouraging. If the agent can learn more meaningful features and maybe even recognize symbolic information from visual input, it could lead to an exciting future for this research.