NVIDIA Omniverse Is Creating Life-Like AI-Based Animations In STALKER 2 & Fort Solis AAA Games With Raytracing

NVIDIA has announced that its Omniverse platform is laying the foundation for Generative AI in games like STALKER 2 & Fort Solis with RTX.
At GDC, NVIDIA announced that game developers are using its Omniverse platform and the recently announced AI Foundations cloud services, which let users run custom LLMs and generative AI models trained on their own proprietary data. This gives developers their first taste of using generative AI to enhance game creation and game assets.
The upcoming Omniverse Audio2Face generative AI update adds Mandarin support and improves overall facial animation and lip-sync quality across multiple languages. It also brings the latest in real-time RTX rendering, including the first real-time ray-traced subsurface scattering shading for improved realism.
In one example, NVIDIA showcases its Omniverse ACE (Avatar Cloud Engine), which can bring avatars and models to life by creating expressive character voices with speech and translation AI powered by NVIDIA Riva, or by using the Omniverse Audio2Face technology for AI-powered 2D and 3D character animation. Following is a preview of the NVIDIA Omniverse Audio2Face tech in action:
What's even more impressive about this preview is that not only is it entirely AI-generated and photorealistic, with the facial animation and lip-sync quality improvements offered across multiple languages, but it also leverages NVIDIA RTX technologies. The scene above uses full real-time ray-traced subsurface scattering (SSS) shading to further improve visual fidelity.
The folks over at GSC Game World will put NVIDIA's Omniverse Audio2Face technology to good use in their upcoming AAA title S.T.A.L.K.E.R. 2: Heart of Chornobyl, while a second studio, Fallen Leaf, is going to use the technology in its third-person sci-fi thriller, Fort Solis.
In another developer session, Tencent's Xihao Fu talks about bringing real-time strand-based hair rendering to ray tracing. The problem with real-time hair rendering is that you have to compromise on either performance or quality: a hair-cards solution looks generic but performs well, while a hair-strand implementation looks much better visually but costs a lot of performance to render. Xihao wants to address this by utilizing NVIDIA RTX technologies such as ray tracing and DLSS. Comparisons are made between Unreal Engine 5, which uses a Temporal Anti-Aliasing / Upsampling technique, and a spatial-temporal filter.
The first hurdle in implementing such an approach with ray tracing is the strand primitive itself: a B-Spline primitive isn't available in DXR, so the only other choice is to go with a custom primitive. After the strands are built in DXR, a performance chart shows an NVIDIA GeForce RTX 3090 delivering around 50% better performance with the ray-traced solution than with the native one. In another comparison, the RT implementation ends up faster than the rasterization path across a range of different strand-based hair models.
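The session doesn't publish its shader code, but a DXR custom primitive boils down to wrapping each strand segment in a bounding box and running an intersection routine against it. Below is a minimal numpy sketch of the kind of geometric test such a routine performs, treating one segment as a constant-radius capsule; the function name and the capsule simplification are assumptions for illustration, not the talk's actual implementation.

```python
import numpy as np

def ray_capsule_intersect(ray_origin, ray_dir, seg_a, seg_b, radius):
    """Return the nearest hit distance t along the ray, or None.

    Treats one hair-strand segment as a capsule (swept sphere) with
    endpoints seg_a/seg_b and a constant radius -- a common simplification
    for custom strand primitives; real strands taper along their length.
    """
    o, d = np.asarray(ray_origin, float), np.asarray(ray_dir, float)
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    d = d / np.linalg.norm(d)

    axis = b - a
    ao = o - a
    axis_len2 = axis.dot(axis)
    # Remove the axis-parallel component so the cylinder test becomes a
    # quadratic "distance from axis equals radius" equation in t.
    dd = d - axis * (d.dot(axis) / axis_len2)
    oo = ao - axis * (ao.dot(axis) / axis_len2)

    hits = []
    qa = dd.dot(dd)
    qb = 2.0 * dd.dot(oo)
    qc = oo.dot(oo) - radius * radius
    if qa > 1e-12:
        disc = qb * qb - 4.0 * qa * qc
        if disc >= 0.0:
            for t in ((-qb - np.sqrt(disc)) / (2.0 * qa),
                      (-qb + np.sqrt(disc)) / (2.0 * qa)):
                if t > 0.0:
                    # Keep the hit only if it lands between the two endpoints.
                    s = (o + t * d - a).dot(axis) / axis_len2
                    if 0.0 <= s <= 1.0:
                        hits.append(t)

    # Spherical end caps close the capsule at both segment endpoints.
    for center in (a, b):
        oc = o - center
        qb2 = 2.0 * d.dot(oc)
        qc2 = oc.dot(oc) - radius * radius
        disc = qb2 * qb2 - 4.0 * qc2
        if disc >= 0.0:
            t = (-qb2 - np.sqrt(disc)) / 2.0
            if t > 0.0:
                hits.append(t)

    return min(hits) if hits else None
```

In an actual DXR pipeline this test would live in an intersection shader invoked per strand-segment AABB, with the hit distance reported back to the traversal rather than returned from a Python function.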
This brings us to the second issue discussed in the talk: the noise associated with rendering strand-based hair. Here, a spatial-temporal filter is proposed as the solution, since brute-force multi-sample tracing scales in cost roughly linearly with the sample count while image quality improves only sub-linearly.
As shown in the pictures below, spatial filtering alone provides much better results, and they improve further once the temporal filter is applied.
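The filter's implementation isn't shared, but the general shape of a spatial-temporal filter is straightforward to sketch: denoise each frame spatially, then blend it with the previous frame's result. The numpy toy below illustrates that idea; the function names, the plain box blur, and the blend factor are assumptions, and a production filter would be edge-aware and reproject the history with motion vectors.

```python
import numpy as np

def spatial_filter(noisy, kernel=2):
    """Box-average each pixel over a (2*kernel+1)^2 neighborhood.

    A stand-in for the spatial half of the filter; production denoisers
    typically use edge-aware (bilateral / a-trous) weights instead.
    """
    h, w = noisy.shape[:2]
    padded = np.pad(noisy, ((kernel, kernel), (kernel, kernel), (0, 0)), mode="edge")
    out = np.zeros_like(noisy)
    for dy in range(-kernel, kernel + 1):
        for dx in range(-kernel, kernel + 1):
            out += padded[kernel + dy:kernel + dy + h, kernel + dx:kernel + dx + w]
    return out / (2 * kernel + 1) ** 2


def temporal_filter(current, history, alpha=0.1):
    """Blend this frame with the (reprojected) previous result.

    An exponential moving average: a low alpha keeps more history and kills
    more noise, at the cost of ghosting when the hair or camera moves.
    Motion-vector reprojection and history clamping are omitted here.
    """
    return alpha * current + (1.0 - alpha) * history


# Per frame: denoise spatially, then accumulate temporally.
history = None
for frame in range(4):
    noisy = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in for a 1-spp render
    filtered = spatial_filter(noisy)
    history = filtered if history is None else temporal_filter(filtered, history)
```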
This hair-strand rendering technology can definitely help make NVIDIA Omniverse's generative content look even better. Tencent Games is also adopting another technology at the heart of Omniverse, Universal Scene Description (USD), which helps creators and developers build interoperability between their favorite tools.
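USD itself is an open scene-description format with open-source Python bindings, which gives a feel for the interoperability mentioned above: any tool that speaks USD can open and layer the same file. As a minimal illustration (the file name and prim paths below are arbitrary, not taken from NVIDIA's or Tencent's tooling), authoring a small USD layer looks like this:

```python
# Requires Pixar's open-source USD Python bindings (pip install usd-core).
from pxr import Usd, UsdGeom

# Author a tiny scene: one transform with a sphere under it.
stage = Usd.Stage.CreateNew("hello_world.usda")
xform = UsdGeom.Xform.Define(stage, "/Hello")
sphere = UsdGeom.Sphere.Define(stage, "/Hello/World")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(xform.GetPrim())
stage.GetRootLayer().Save()
```

The resulting .usda file is plain, tool-agnostic scene data, which is what lets a DCC app, a game engine, or Omniverse itself open and compose it without a bespoke exporter.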