Nvidia is developing a new way to build far more realistic virtual environments. The company has created a technique that can transform video footage into a virtual environment, which could make games and virtual reality considerably more realistic in the future.
Nvidia uses its DGX-1 supercomputer, which runs on Tensor Core GPUs, to convert videos captured with the dashboard camera of its self-driving car into virtual environments. The setup makes it possible to see results immediately.
Building virtual environments
The research team put a neural network to work: Unreal Engine 4 first generates rough frames, after which the artificial intelligence converts those frames into realistic images. Developers can then easily adapt the result to their needs.
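The two-stage pipeline described above (an engine emits per-frame layouts, a learned model renders them) can be sketched roughly as follows. This is a minimal illustration only: the semantic classes, function names, and the color-lookup "renderer" are hypothetical stand-ins for the trained generative network Nvidia actually uses.

```python
import numpy as np

# Hypothetical semantic classes a game engine could export per frame.
# In Nvidia's pipeline a generative neural network renders these; here
# a simple color lookup stands in for that learned renderer.
CLASS_COLORS = {
    0: (128, 64, 128),   # road
    1: (70, 130, 180),   # sky
    2: (0, 0, 142),      # car
}

def render_frame(label_map: np.ndarray) -> np.ndarray:
    """Stand-in for the learned renderer: turn a per-pixel semantic
    label map of shape (H, W) into an RGB frame of shape (H, W, 3)."""
    h, w = label_map.shape
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, color in CLASS_COLORS.items():
        frame[label_map == cls] = color
    return frame

def render_video(label_maps):
    """Apply the renderer frame by frame, mirroring how the pipeline
    processes a whole video sequence."""
    return [render_frame(m) for m in label_maps]

# Tiny 2x2 "video" of two engine-generated label maps.
frames = render_video([
    np.array([[0, 1], [2, 0]]),
    np.array([[1, 1], [0, 2]]),
])
```

The point of the structure is the separation of concerns: the engine only has to produce cheap semantic layouts, while all photorealistic detail comes from the model that replaces the lookup table here.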
According to Bryan Catanzaro, vice president of Applied Deep Learning at Nvidia, his company has been working on new ways to create interactive graphics for years. "This is the first time we can do that with a neural network," he says. "Neural networks, and specifically generative models, are going to change the way we make graphics."
The idea is also that developers will soon be able to create virtual content much more easily and cheaply: they can quickly make recordings and use them to generate a virtual environment. For developers of games and virtual reality content this is a welcome development in any case, as video footage opens up new possibilities for them.
However, Nvidia’s technology is still in its infancy. It remains under development, and content creators currently need a supercomputer to use it, so it will be a while before the technology is ready to be used for game development.
You can read more about Nvidia’s research on this page (PDF).