Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images
upstart writes:
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images:
Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not.
Yet a machine that needs to interact with objects in the world, like a robot designed to harvest crops or assist with surgery, must be able to infer the properties of a 3D scene from observations of the 2D images it is trained on.
While scientists have had success using neural networks to infer representations of 3D scenes from images, these machine learning methods aren't fast enough to make them feasible for many real-world applications.
A new technique demonstrated by researchers at MIT and elsewhere can represent 3D scenes from images about 15,000 times faster than some existing models.
The method represents a scene as a 360-degree light field: a function that describes all the light rays in a 3D space, flowing through every point and in every direction. Encoding that light field in a neural network enables much faster rendering of the underlying 3D scene from an image.
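A plausible reason such a representation renders quickly is that a light field assigns a color directly to each ray, so producing a pixel takes a single network evaluation rather than integrating many samples along the ray. The sketch below illustrates that idea as a minimal PyTorch MLP mapping a ray's origin and direction to an RGB color; the class name, layer sizes, and ray parameterization are illustrative assumptions, not the researchers' actual model.

```python
# Minimal sketch (not the published code): a neural light field that maps a ray
# directly to a color, so rendering a pixel is one forward pass per ray.
import torch
import torch.nn as nn


class NeuralLightField(nn.Module):
    """MLP approximating a light field L(ray) -> RGB (illustrative only)."""

    def __init__(self, hidden_dim: int = 256, num_layers: int = 6):
        super().__init__()
        layers = [nn.Linear(6, hidden_dim), nn.ReLU()]  # 6D input: ray origin + direction
        for _ in range(num_layers - 2):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
        layers += [nn.Linear(hidden_dim, 3), nn.Sigmoid()]  # RGB in [0, 1]
        self.mlp = nn.Sequential(*layers)

    def forward(self, origins: torch.Tensor, directions: torch.Tensor) -> torch.Tensor:
        # One evaluation per ray: no sampling or integration along the ray,
        # which is where the speedup over volumetric renderers would come from.
        rays = torch.cat([origins, nn.functional.normalize(directions, dim=-1)], dim=-1)
        return self.mlp(rays)


if __name__ == "__main__":
    model = NeuralLightField()
    origins = torch.zeros(4, 3)      # four rays sharing a camera origin
    directions = torch.randn(4, 3)   # four viewing directions
    colors = model(origins, directions)
    print(colors.shape)  # torch.Size([4, 3])
```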
Read more of this story at SoylentNews.