Staying in Sync: NVIDIA Combines Digital Twins With Real-Time AI for Industrial Automation

Real-time AI is helping with the heavy lifting in manufacturing, factory logistics and robotics.

In such industries — often involving bulky products, expensive equipment, cobot environments and logistically complex facilities — a simulation-first approach is ushering in the next phase of automation.

NVIDIA founder and CEO Jensen Huang today demonstrated in his GTC keynote how developers can use digital twins to develop, test and refine large-scale, real-time AI systems entirely in simulation before rolling them out in industrial infrastructure, saving significant time and cost.

NVIDIA Omniverse, Metropolis, Isaac and cuOpt interact in AI gyms where developers can train AI agents to help robots and humans navigate unpredictable or complex events.

In the demo, a digital twin of a 100,000-square-foot warehouse — built using the NVIDIA Omniverse platform for developing and connecting OpenUSD applications — operates as a simulation environment for dozens of digital workers and multiple autonomous mobile robots (AMRs), vision AI agents and sensors.

Each AMR, running the NVIDIA Isaac Perceptor multi-sensor stack, processes visual information from six sensors, all simulated in the digital twin.

At the same time, the NVIDIA Metropolis platform for vision AI creates a single centralized map of worker activity across the entire warehouse, fusing together data from 100 simulated ceiling-mounted camera streams with multi-camera tracking. This centralized occupancy map helps inform optimal AMR routes calculated by the NVIDIA cuOpt engine for solving complex routing problems.

cuOpt, a record-breaking optimization AI microservice, solves complex routing problems with multiple constraints using GPU-accelerated evolutionary algorithms.
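To build intuition for the class of problem cuOpt accelerates, here is a deliberately tiny sketch: a greedy nearest-neighbor heuristic over a distance matrix, in plain Python. This is not the cuOpt API or its evolutionary algorithms; the function name, matrix and stations are illustrative only, and real deployments solve far larger problems with many constraints on the GPU.

```python
# Toy illustration of the routing problems cuOpt solves at scale.
# NOT the cuOpt API -- a plain-Python nearest-neighbor heuristic
# over a small distance matrix, for intuition only.

def nearest_neighbor_route(dist, start=0):
    """Greedy route visiting every node once, starting from `start`."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    route, current = [start], start
    while unvisited:
        # Always hop to the closest not-yet-visited node.
        nxt = min(unvisited, key=lambda j: dist[current][j])
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return route

# Symmetric distances between a depot (node 0) and three pallet stations.
dist = [
    [0, 4, 9, 5],
    [4, 0, 3, 7],
    [9, 3, 0, 2],
    [5, 7, 2, 0],
]
print(nearest_neighbor_route(dist))  # -> [0, 1, 2, 3]
```

Greedy heuristics like this get stuck in local optima; that is precisely why large fleets with time windows and capacity constraints call for GPU-accelerated solvers such as cuOpt.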

All of this happens in real time, while Isaac Mission Control coordinates the entire fleet using map data and route graphs from cuOpt to send and execute AMR commands.

An AI Gym for Industrial Digitalization

AI agents can assist in large-scale industrial environments by, for example, managing fleets of robots in a factory or identifying streamlined configurations for human-robot collaboration in supply chain distribution centers. To build these complex agents, developers need digital twins that function as AI gyms — physically accurate environments for AI evaluation, simulation and training.

Such software-in-the-loop AI testing enables AI agents and AMRs to adapt to real-world unpredictability.

In the demo, an incident occurs along an AMR’s planned route, blocking the path and preventing it from picking up a pallet. NVIDIA Metropolis updates an occupancy grid, mapping all humans, robots and objects in a single view. cuOpt then plans an optimal route, and the AMR responds accordingly to minimize downtime.
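The replanning step in the demo can be pictured with a minimal sketch: an occupancy grid where a cell becomes blocked, and a breadth-first search that finds a detour. All names here are illustrative assumptions, not NVIDIA APIs, and real occupancy maps fuse live detections from many cameras rather than a hand-written grid.

```python
from collections import deque

# Sketch of occupancy-grid replanning, the idea behind the demo's
# Metropolis + cuOpt flow. Grid cells: 0 = free, 1 = blocked.
# Illustrative only -- not an NVIDIA API.

def shortest_path(grid, start, goal):
    """Breadth-first search over a 4-connected occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route to the pallet

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
before = shortest_path(grid, (0, 0), (2, 2))
grid[1][1] = 1  # an incident blocks the center of the aisle
after = shortest_path(grid, (0, 0), (2, 2))
print(before, after)
```

After the block appears, the planner returns an equally short route that simply avoids the occupied cell, which is the "minimize downtime" behavior described above.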

With Metropolis vision foundation models powering the NVIDIA Visual Insight Agent (VIA) framework, AI agents can be built to help operations teams answer questions like, “What situation occurred in aisle three of the factory?” And the generative AI-powered agent offers immediate insights such as, “Boxes fell from the shelves at 3:30 p.m., blocking the aisle.”

Developers can use the VIA framework to build AI agents capable of processing large amounts of live or archived videos and images with vision-language models — whether deployed at the edge or in the cloud. This new generation of visual AI agents will help nearly every industry summarize, search and extract actionable insights from video using natural language.
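As a conceptual sketch of the question-answering pattern above: a VIA-style agent pairs video ingestion with a vision-language model, but here the perception output is faked as a list of structured events so the retrieval step is runnable. Every name in this snippet is a hypothetical stand-in, not the VIA framework's API.

```python
from dataclasses import dataclass

# Conceptual sketch only. In a real VIA-style agent, a vision-language
# model would extract events like these from camera streams; here they
# are hard-coded so the question-answering step can run standalone.

@dataclass
class Event:
    time: str
    location: str
    description: str

events = [
    Event("14:05", "aisle 1", "AMR picked up a pallet"),
    Event("15:30", "aisle 3", "boxes fell from the shelves, blocking the aisle"),
]

def ask(question, events):
    """Return descriptions of events whose location appears in the question."""
    hits = [e for e in events if e.location in question.lower()]
    return [f"{e.description} at {e.time}" for e in hits]

print(ask("What situation occurred in aisle 3 of the factory?", events))
```

A production agent would use natural-language understanding rather than substring matching, but the shape is the same: structured perception first, then retrieval and summarization over it.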

All of these AI functions can be enhanced through continuous, simulation-based training and deployed as modular NVIDIA NIM inference microservices.

Learn more about the latest advancements in generative AI and industrial digitalization at NVIDIA GTC, a global AI conference running through Thursday, March 21, at the San Jose Convention Center and online.
