OpenAI has developed two new neural networks: the first generates new images from a text description, while the second excels at recognizing images.
In a blog post, OpenAI describes the capabilities of DALL·E and CLIP and shows examples.
DALL·E is based on GPT-3, the text generator the company unveiled in 2020. GPT-3 can produce text and software code from simple prompts; DALL·E extends this approach to drawing new images based on a text description.
The developers tested this capability thoroughly with prompts such as “an armchair in the shape of an avocado” and “a snail made of a harp”. DALL·E was able to generate images in a variety of styles and from different angles.
The other new neural network from OpenAI is called CLIP. CLIP can recognize objects pictured in an image. Unlike many other image-recognition neural networks, CLIP can also produce descriptions of objects it has never seen before.