
NVIDIA Omniverse - what is it, and what does it mean for the way you work
With data sets getting bigger, teams collaborating across multiple locations, and pipelines becoming ever more intricate, projects can get complicated very quickly.
Chasing the single source of truth at any given time on client projects has long been a pain point for artists and designers. Waiting for data sets to render, and wrestling with software applications that ‘should talk to each other’, piles extra bottlenecks onto the creative process. Moving and storing data, and bolting more ‘stuff’ onto existing infrastructure to keep it all going, puts heavy pressure on engineering teams. Sounds like studio life, right? It also sounds like some clever tech needed to be created to remove these well-known issues - welcome to NVIDIA’s Omniverse.
NVIDIA Omniverse is an end-to-end design collaboration and true-to-reality digital twin simulation platform that will revolutionise 3D workflows across organisations of any scale.
Although not a silver bullet, it’s close to one, and a very powerful tool indeed. Using the rendering power of Ampere and Universal Scene Description (USD), NVIDIA has created an ecosystem that allows artists, designers and team members to collaborate in real time within a shared virtual space.
What is NVIDIA’s Omniverse?
NVIDIA Omniverse is a cloud-native, multi-GPU-enabled platform that allows for scalable, remote collaboration with true real-time performance for teams working across geographies, apps and systems. Omniverse has five core components: AI, Materials/MDL (Material Definition Language), Path Tracing, Physics/VFX and USD - the fundamental building block that ties everything together.
Omniverse’s framework comprises layers of plugins and simulation tools, called ‘connectors’, plus a very powerful real-time clustered rendering system. Not only does it enable creative 3D content pipelines (specifically VFX) to work at their optimal level, it also enables 3D pipelines for AEC, engineering and manufacturing to take full advantage of this new approach - which we feel is extremely powerful, and a real value proposition for all content creators.
With the primary data now stored in USD format on centralised storage, teams can run numerous types of analysis and simulation on that data within Omniverse, via USD-native applications, in one space. Artists and designers can work on the same scene simultaneously, in real time, from different locations, and clients can sit in live creative sessions with them to make alterations to their designs, take decisions and sign off on projects.
For instance, within Omniverse a 3ds Max user, a Revit user and a lighting artist can all work simultaneously, in real time, on the same interior asset or scene from different geographic locations. Creative projects will move much more quickly with artists able to work this way, which ultimately increases efficiency, saving time and money.
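To make the USD side of this concrete, here is a minimal sketch of how such a session could be structured (the file names below are hypothetical). Each artist authors their edits into their own USD layer, and a shared root layer composes them as sublayers - layers earlier in the list are stronger, so the lighting artist’s tweaks sit on top of the modelled geometry without anyone overwriting anyone else’s file:

```usda
#usda 1.0
(
    doc = "Hypothetical root layer for a shared interior scene"
    subLayers = [
        @./lighting_artist.usda@,
        @./revit_structure.usda@,
        @./max_modelling.usda@
    ]
)
```

When every application’s connector points at the same root layer, each user sees the composed result of all three layers in real time, while each artist’s opinions stay isolated in their own file.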
Deeper Dive: The NVIDIA Metaverse and Digital Twins
This new approach has also led NVIDIA to generate a product and toolset called Metaverse - think Ready Player One, version alpha 0.1.
This virtual environment toolset gives users everything Omniverse has to offer regarding collaboration from across the room, or across the globe, then takes it to the next level by leveraging AI, machine learning (ML) and deep learning, all in a virtual space.
NVIDIA has been clear in its ambition to push innovation within the AI, ML and deep learning space. The Metaverse has the potential to be invaluable for the buildings, manufacturing, production and cities of the future.
A great example of this is NVIDIA’s partnership with BMW, which is using this sim-to-real technology at its factory in Regensburg, Bavaria. Within the factory, robots run on production lines and feed into the facilities management system, which can monitor its processes and run diagnostics and replacement schedules. Within the Metaverse, a ‘digital twin’ of these robots and the production line has been created, where the robots can make suggestions (using AI and ML) to optimise performance, and those suggestions can be tested within the virtual world first. This removes the risk of failure or outage on the factory floor, and ultimately improves the output of the operation and increases safety.
The other great thing about NVIDIA’s Metaverse is the ability to generate and collaborate in a virtual world with numerous inputs and viewing systems (including VR and AR). We see the uses for this in AEC, manufacturing, games and film workflows being invaluable - and, dare we say, commonplace going forward.
For instance, in AEC, using VR/AR and the Metaverse together you could view outputs from ‘digital twins’ of buildings and wander around a virtual environment that is still in the early design phases. Equally, it could be a digital twin of a real environment or building that you can’t necessarily get to - e.g. one on the other side of the world - and you could view it and give feedback that way.
The challenges and conclusions
Getting data directly into GPUs and CPUs as fast as possible is now the biggest challenge.
The load that NVIDIA’s Omniverse pipelines will place on current-generation infrastructure has the potential to move that bottleneck back to central storage and networking - and the data sets that artists, architects, designers and engineers will be able to generate will be huge!
With the acquisitions of Mellanox and Arm, NVIDIA now has the potential to run the entire infrastructure end-to-end; the only missing element is storage. With the general move to solid-state storage systems, combined with networking now running in the hundreds-of-gigabits-per-second bracket, moving data around the infrastructure is no longer the limiting factor.
To feed the GPUs, new offload technologies such as NVIDIA’s DPUs will allow users to send data from central storage across the network directly into GPU memory without touching the CPU - increasing throughput for machine learning and rendering while freeing up the CPU to do other things at the same time - crazy, we know!
Omniverse has been conceived to eliminate traditional bottlenecks within the high-performance compute infrastructures that we have today. Loading a single Maya, Revit or 3ds Max file into memory on a single CPU core or thread, often over a 1Gb/s network, is limiting given today’s multi-core and high-bandwidth networking options. With USD, all cores and all bandwidth can be utilised to fill main memory, or GPU memory, in seconds. No longer will we hear the horror stories of scenes taking up to an hour to load, or of files being lost or overwritten - which we feel is great news, and it’s why we’re excited about the Omniverse!
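USD’s payload mechanism is one of the features behind this. As a minimal sketch (the asset names here are hypothetical), heavy geometry can be referenced as a payload, which a USD-native application can defer loading until it is actually needed - so opening a scene no longer means pulling every vertex into memory up front:

```usda
#usda 1.0

# Heavy geometry lives in its own file and is only loaded on demand
def Xform "Factory_Line" (
    payload = @./robot_arm_highres_geometry.usda@
)
{
}
```

A viewer can open the stage with payloads unloaded for instant navigation of the scene structure, then pull in the heavy geometry only for the parts it needs.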
Pricing
The NVIDIA Omniverse platform, now in open beta, will always be free for individuals to create with.
NVIDIA Omniverse Enterprise is a new platform offering that helps businesses of any size transform their 3D production workflows. NVIDIA Omniverse Enterprise is an end-to-end remote collaboration and true-to-reality simulation platform, optimized and certified by NVIDIA to run on NVIDIA-Certified Systems.
The Omniverse Enterprise solution starts at $14,000 per year for a workgroup of 5 concurrent 3D Creators and an unlimited number of “Viewers”.
You can find out more about Omniverse for Media and Entertainment here.
Have questions about this, or any other aspect relating to 3D workflows or pipelines? Get in touch.