NVIDIA Showcases Real-Time Neural Materials Models, Offering Up To 24x Shading Speedup

NVIDIA has showcased a new real-time neural materials model that delivers a 12-24x speedup in shading performance versus traditional methods.
At SIGGRAPH, NVIDIA is showcasing a new real-time rendering approach called "Neural Appearance Models," which leverages AI to speed up shading. Last year, the company unveiled its Neural Texture Compression technique, which unlocks 16x more texture detail; this year, it is targeting a similarly large leap in texture rendering and shading performance.
The new approach will be a universal runtime model for materials from multiple sources, including real objects captured by artists, physical measurements, and text prompts fed to generative AI. The models will scale across quality levels ranging from PC and console gaming to virtual reality and even film rendering.
The model will help capture every detail of the rendered object, including delicate visual intricacies such as dust, water spots, lighting, and the rays cast by the blend of various light sources and colors. Traditionally, such materials are rendered using shading graphs, which are not only costly for real-time rendering but also complex to author.
With NVIDIA's "Neural Materials" approach, the traditional materials rendering model is replaced with a cheaper, more computationally efficient neural network, which the company states enables up to 12-24x faster shading calculations. The company offers a comparison between a model rendered using a shading graph and the same model rendered with the Neural Materials model.
The model matches the details of the reference image in all regards and, as mentioned above, does so much faster. You can also view each model and compare the image quality for yourself at this link.
The company also explains how the neural models work. At render time, evaluating a neural material looks a lot like evaluating a traditional one: at each hit point, the renderer first looks up textures and then evaluates two MLPs, one to obtain the BRDF value and a second to importance-sample the outgoing direction. Improvements to the real-time approach include built-in graphics priors that improve inference quality and a training-time encoder that lets the renders reach massive resolutions.
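The render-time flow described above can be sketched in a few lines of Python. This is a minimal illustration, not NVIDIA's implementation: the texture sizes, layer widths, and input layouts below are assumptions chosen only to show the structure of "latent texture lookup, then two small MLPs."

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Evaluate a tiny fully connected network with ReLU hidden layers."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

def make_mlp(sizes, rng):
    """Random weights for illustration; a real model would be trained."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Hypothetical latent feature texture: 64x64 texels, 8 channels per texel.
feature_texture = rng.standard_normal((64, 64, 8))

# Two MLPs: one maps (latent, wi, wo) to an RGB BRDF value, the other maps
# (latent, wi) to parameters used to importance-sample an outgoing direction.
brdf_mlp = make_mlp([8 + 3 + 3, 16, 16, 3], rng)
sampler_mlp = make_mlp([8 + 3, 16, 16, 2], rng)

def shade(u, v, wi, wo):
    """Evaluate the neural material at texture coordinates (u, v)."""
    tx, ty = int(u * 63), int(v * 63)     # nearest-neighbor texture lookup
    latent = feature_texture[ty, tx]
    brdf = mlp(np.concatenate([latent, wi, wo]), brdf_mlp)
    sample_params = mlp(np.concatenate([latent, wi]), sampler_mlp)
    return brdf, sample_params

brdf, params = shade(0.5, 0.5, np.array([0, 0, 1.0]), np.array([0, 0.6, 0.8]))
print(brdf.shape, params.shape)  # (3,) (2,)
```

In a real renderer both MLP evaluations would run per hit point inside the shader, which is why NVIDIA compiles them to optimized shader code rather than calling out to a framework.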
All of the models rendered using the "Neural Materials" approach offer up to 16K texture resolution, offering in-depth and detailed objects within games. These refined models are less taxing on games too, leading to better performance than what was previously possible.
Because textures built on neural models run faster, NVIDIA can scale them across different applications. In a side-by-side comparison, NVIDIA shows two models: one with 2 layers of 16 neurons rendered in just 3.3 ms, while a slightly more detailed model with 3 layers of 64 neurons still rendered in 11 ms.
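A bit of rough arithmetic shows why the two configurations differ in cost. Assuming each hidden layer is fully connected at the stated width (an assumption; the article does not give the exact layer shapes), the multiply-accumulate count scales with layers times width squared:

```python
# Rough per-evaluation cost of the two configurations NVIDIA shows,
# assuming square fully connected hidden layers of the stated width.
def mac_count(layers, neurons):
    # multiply-accumulates for `layers` hidden layers of size `neurons`
    return layers * neurons * neurons

small = mac_count(2, 16)   # 2 layers x 16 neurons
large = mac_count(3, 64)   # 3 layers x 64 neurons
print(small, large, large / small)  # 512 12288 24.0
```

Under this assumption the larger network does roughly 24x more arithmetic per evaluation, yet the reported render times (3.3 ms vs. 11 ms) differ by only about 3.3x, suggesting the frame cost is not dominated by the MLP math alone.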
As for what hardware will support Neural Materials models, NVIDIA states that they will leverage existing machine learning frameworks such as PyTorch and TensorFlow, shading languages such as GLSL or HLSL, and the hardware-accelerated matrix multiply-accumulate (MMA) engines found on recent GPU architectures from AMD, Intel, and NVIDIA. The runtime compiles the neural material description into optimized shader code using the open-source Slang shading language, which has backends for a variety of targets including Vulkan, Direct3D 12, and CUDA.
The Tensor-core hardware introduced in modern graphics architectures is another step forward for these models. While Tensor-core acceleration is currently limited to compute APIs, NVIDIA exposes it to shaders through a modified open-source LLVM-based DirectX Shader Compiler that adds custom intrinsics for low-level access, allowing Slang to generate efficient shader code.
Performance is showcased on an NVIDIA GeForce RTX 4090 GPU using hardware-accelerated DXR (ray tracing) at 1920x1080. Render times are listed in milliseconds, and the results show the new neural approach rendering images much faster than the reference while preserving detail. In full-frame rendering with neural BRDFs, the RTX 4090 achieves a 1.64x speedup with 3x64 model parameters and a 4.14x speedup with 2x16. Material shading with path tracing sees a 1.54x speedup with 2x32 parameters and a 6.06x speedup with 3x64.
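To put those speedup factors in frame-time terms, here is a quick illustration. The article gives no absolute reference time, so the 16 ms baseline below is made up purely for illustration:

```python
# Convert the reported speedup factors into illustrative frame times,
# assuming a hypothetical 16 ms reference frame (the baseline is invented;
# the article reports only the speedup ratios, not absolute times).
reference_ms = 16.0
speedups = [("2x16 full-frame", 4.14), ("3x64 full-frame", 1.64),
            ("2x32 path-traced shading", 1.54), ("3x64 path-traced shading", 6.06)]
for label, speedup in speedups:
    print(f"{label}: {reference_ms / speedup:.2f} ms")
```

Even the smallest reported gain (1.54x) meaningfully shortens a frame, and the largest (6.06x) would cut this hypothetical frame to under 3 ms.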
Overall, NVIDIA's new Neural Materials approach looks set to redefine how textures and objects are rendered in real time. With a 12-24x speedup, developers and content creators will be able to generate materials and objects faster, with ultra-realistic textures that also run well on the latest hardware. We can't wait to see this approach leveraged by upcoming games and apps.