The research team at Meta, the parent company of Facebook, has introduced the Segment Anything Model (SAM). The team trained the AI model on what Meta says is the world’s largest dataset of annotated images.

The company provides more details about the new AI application in a blog post. The model’s ability to distinguish objects from one another in an image is called segmentation, and SAM can do this even for objects that were not in its training data. According to Meta, releasing SAM openly is important to democratize this application of artificial intelligence, although for now only researchers may use the model’s end products. It is not the first time Meta has made one of its AI models publicly available.
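Conceptually, segmentation means assigning each pixel of an image to an object, so a model’s output per object is a binary mask. The toy NumPy sketch below (not SAM itself, just an illustration of the mask idea) shows how such a mask isolates an object’s pixels:

```python
import numpy as np

# A tiny 4x4 "image" with pixel values 0..15.
image = np.arange(16).reshape(4, 4)

# A binary segmentation mask marking one "object": a 2x2 region.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# Keep only the object's pixels; everything outside the mask becomes 0.
segmented = np.where(mask, image, 0)

# The masked pixels are 5, 6, 9 and 10, so their sum is 30.
print(int(segmented.sum()))  # 30
```

A model like SAM produces masks of exactly this shape (one boolean array per detected object), just at full image resolution and for arbitrary objects.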

VR application

Meta states that SAM has “a general notion of what objects are.” As a result, it can be applied broadly without additional training on a new dataset. AI researchers call this capability zero-shot transfer.

Beyond its current uses, Meta suggests that in the future the model could serve virtual reality applications, such as digitally moving and transforming objects. VR glasses could use pupil tracking to determine what the user is looking at and then detect that specific object, an approach known as gaze-based object detection.

Also read: Google’s PaLM: what is it & why is it relevant?