Neural Rendering: How Low Can You Go in Terms of Input?
upstart writes in with an IRC submission for c0lo:
Yesterday, some extraordinary new work in neural image synthesis caught the attention and imagination of the internet, as Intel researchers revealed a new method for enhancing the realism of synthetic images.
The system, as demonstrated in a video from Intel, intervenes directly in the image pipeline of the Grand Theft Auto V video game, automatically enhancing frames with an image synthesis algorithm based on a convolutional neural network (CNN) trained on real-world imagery from the Mapillary dataset, and swapping out the less realistic lighting and texturing of the GTA game engine.
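For readers curious what an image-to-image enhancement network looks like in code, here is a minimal, hypothetical PyTorch sketch of the general idea: a CNN that takes a rendered frame and outputs an enhanced frame of the same resolution. This is not Intel's actual architecture; the class name, layer sizes, and training setup are all illustrative assumptions.

```python
# Illustrative sketch only (not Intel's method): a toy image-to-image CNN
# that maps a rendered game frame to an "enhanced" frame of the same shape.
import torch
import torch.nn as nn

class FrameEnhancer(nn.Module):
    """Hypothetical encoder-decoder: downsample a frame to learned features,
    then decode back to RGB, so convolutional filters can re-light and
    re-texture the image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),   # 256 -> 128
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 128 -> 64
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 64 -> 128
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 128 -> 256
            nn.Sigmoid(),  # RGB values in [0, 1]
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

# One 256x256 RGB frame in, one enhanced frame of the same shape out.
enhancer = FrameEnhancer()
rendered = torch.rand(1, 3, 256, 256)  # stand-in for a game frame
enhanced = enhancer(rendered)
print(enhanced.shape)  # torch.Size([1, 3, 256, 256])
```

In a real system of this kind, the network would be trained so its output distribution matches real photographs (for example, with adversarial or perceptual losses against a dataset like Mapillary) rather than with random inputs as above.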
Commenters, in a wide range of reactions in communities such as Reddit and Hacker News, are positing not only that neural rendering of this type could effectively replace the less photorealistic output of traditional game engines and VFX-level CGI, but that the process could be achieved with far more basic input than was demonstrated in the Intel GTA V demo, effectively turning crude 'puppet' proxy inputs into highly realistic outputs.