NVIDIA Isaac Nova Orin Opens New Era of Innovation for Autonomous Mobile Robots

Next-day packages. New vehicle deliveries. Fresh organic produce. Each of these modern conveniences is accelerated by fleets of mobile robots.

NVIDIA today is announcing updates to Nova Orin — an autonomous mobile robot (AMR) reference platform — that advance its roadmap. We’re releasing details of three reference platform configurations: two use a single Jetson AGX Orin, which runs the NVIDIA Isaac robotics stack and GPU-accelerated Robot Operating System (ROS) software, and one relies on two Orin modules.

The Nova Orin platform is designed to improve reliability and reduce development costs worldwide for building and deploying AMRs.

AMRs are like self-driving cars but for unstructured environments. They don’t need fixed, preprogrammed tracks and are capable of avoiding obstacles. This makes them ideal in logistics for moving items in warehouses, distribution centers and factories, or for applications in hospitality, cleaning, roaming security and last-mile delivery.

For years, AMR manufacturers have been designing robots by sourcing and integrating compute hardware, software and sensors in house. This time-consuming effort demands years of engineering resources, lengthens go-to-market pipelines and distracts from developing domain-specific applications.

Nova Orin offers a better way forward with tested, industrial-grade configurations of sensors, software and GPU-computing capabilities. Tapping into the NVIDIA AI platform frees developers to focus on building their unique software stack of robot applications.

Much is at stake for intralogistics enabled by AMRs across industries, a market expected to increase nearly 6x to $46 billion by 2030, up from $8 billion in 2021, according to estimates from ABI Research.

Designing a Highly Capable, Flexible Reference Architecture 

The Nova Orin reference architecture designs target specific use cases. One single-Orin design omits safety-certified sensors; another includes them, along with a safety programmable logic controller. The third is a dual-Orin design that relies on vision AI to enable functional safety.

Sensor support is included for stereo cameras, lidars, ultrasonic sensors and inertial measurement units. The sensors were selected to balance performance, price and reliability for industrial applications, and the suite provides the multimodal diversity of coverage required for developing and deploying safe, collaborative AMRs.

The stereo cameras and fisheye cameras are custom designed by NVIDIA in coordination with camera partners. All sensors are calibrated and time synchronized, and come with drivers for reliable data capture. These sensors allow AMRs to detect objects and obstacles across a wide range of situations while also enabling simultaneous localization and mapping (SLAM).

NVIDIA provides two lidar options, one for applications that don’t need sensors certified for functional safety, and the other for those that do. In addition to these 2D lidars, Nova Orin supports 3D lidar for mapping and ground-truth data collection.

Building a Comprehensive AI Platform for OEMs, ISVs

NVIDIA is driving the Nova Orin platform forward with extensive software support in addition to the hardware and integration tools.

The base OS includes drivers and firmware for all the hardware, as well as adaptation tools and design guides for integrating it with robots. Nova can be integrated easily with a ROS-based robot application.
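
As a rough illustration of that integration, here’s a minimal ROS 2 node in Python that subscribes to a camera stream from a Nova-style sensor suite. The topic name is a hypothetical placeholder rather than Nova’s actual interface, and a real application would hand each frame to a perception pipeline instead of just logging it.

```python
# Minimal ROS 2 node sketch: subscribing to a camera stream from a
# Nova-style sensor suite. The topic name below is a hypothetical
# placeholder, not the actual Nova Orin interface.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class CameraListener(Node):
    def __init__(self):
        super().__init__('camera_listener')
        # Assumed topic name, for illustration only.
        self.subscription = self.create_subscription(
            Image, '/front_stereo_camera/left/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        # Each frame arrives as a sensor_msgs/Image; hand it off to
        # perception, logging or visualization from here.
        self.get_logger().info(
            f'Frame: {msg.width}x{msg.height}, encoding={msg.encoding}')


def main():
    rclpy.init()
    node = CameraListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```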

The sensors will have validated models in Isaac Sim for application development and testing without the need for an actual robot.

The cloud-native data acquisition tools eliminate the arduous task of setting up data pipelines for the vast amount of sensor data needed for training models, debugging and analytics. State-of-the-art GEMs (GPU-accelerated perception modules) developed for the Nova sensors run on the Jetson Orin platform, providing key building blocks such as visual SLAM, stereo depth estimation, obstacle detection, 3D reconstruction, semantic segmentation and pose estimation.

Nova Orin also addresses the need to quickly create high-fidelity, city-scale 3D maps of indoor environments in the cloud. These generated maps enable robot navigation, fleet planning and simulation. Plus, the maps can be continuously updated using data from the robots.

AMRs That Are Ready for Industries

As robotics systems evolve, the need for secure deployment and management of the critical AI software on board is paramount for future AMRs.

Nova Orin supports secure over-the-air updates, as well as device management and monitoring, to enable easy deployment and reduce the cost of maintenance. Its open, modular design enables developers to use some or all capabilities of the platform and extend it to quickly develop robotics applications.

NVIDIA is working closely with regulatory bodies to develop vision-enabled safety technology that further reduces the cost and improves the reliability of AMRs. And we’re providing a software development kit for navigation, so developers can quickly build applications.

Improving productivity for factories and warehouses will depend on AMRs working safely and efficiently side by side at scale. High levels of autonomy driven by 3D perception from Nova Orin will help drive that revolution.

Learn more about Nova Orin and sign up to be notified of its availability.


On Track: Digitale Schiene Deutschland Building Digital Twin of Rail Network in NVIDIA Omniverse

Deutsche Bahn’s rail network consists of 5,700 stations and 33,000 kilometers of track, making it the largest in Western Europe.

Digitale Schiene Deutschland (Digital Rail for Germany, or DSD), part of Germany’s national railway operator Deutsche Bahn, is working to increase the network’s capacity without building new tracks. It’s striving to create a powerful railway system in which trains are automated, safely run with less headway between each other and are optimally steered through the network.

In collaboration with NVIDIA, DSD is beginning to build the first country-scale digital twin to fully simulate automatic train operation across an entire network. That means creating a photorealistic and physically accurate emulation of the entire rail system. It will include tracks running through cities and countryside, and many details from sources such as station platform measurements and vehicle sensors.

Using the AI-enabled digital twin created with NVIDIA Omniverse, DSD can develop highly capable systems for perception, incident prevention and incident management that optimally detect and react to irregular situations during day-to-day railway operation.

“With NVIDIA technologies, we’re able to begin realizing the vision of a fully automated train network,” said Ruben Schilling, who leads the perception group at DB Netz, part of Deutsche Bahn. The envisioned future railway system improves the capacity, quality and efficiency of the network.

This is the basis for satisfied passengers and cargo customers, leading to more traffic on the tracks and thereby reducing the carbon footprint of the mobility sector.

Data, Data and More Data

Creating a digital twin at such a large scale is a massive undertaking. It requires a custom-built 3D pipeline that connects computer-aided design datasets (built, for example, within the Siemens JT ecosystem) with DSD’s high-definition 3D maps and various simulation tools. Using the Universal Scene Description (USD) 3D framework, DSD can connect and combine these data sources into a single shared virtual model.
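
As a simplified sketch of how USD composition supports that kind of aggregation, the snippet below creates a stage and references two separately authored source files into one shared scene. The file and prim names are hypothetical; a production pipeline like DSD’s adds format converters, layering and live synchronization on top.

```python
# Sketch: combining separately authored 3D data sources into one shared
# USD stage via references. File names are hypothetical placeholders.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew('rail_network.usda')
UsdGeom.Xform.Define(stage, '/World')

# Reference a CAD-derived asset (e.g., exported from a JT pipeline)...
track = stage.DefinePrim('/World/TrackSegment')
track.GetReferences().AddReference('track_segment_from_cad.usd')

# ...and an HD-map-derived environment, side by side in the same scene.
environment = stage.DefinePrim('/World/Environment')
environment.GetReferences().AddReference('hd_map_tile.usd')

stage.SetDefaultPrim(stage.GetPrimAtPath('/World'))
stage.GetRootLayer().Save()
```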

With its network perfectly synchronized with the real world, DSD can run optimization tests and “what if” scenarios to test and validate changes in the railway system, such as reactions to unforeseen situations.

Running on NVIDIA OVX, the computing system built for Omniverse simulations, DSD will be able to operate a persistent simulation that is regularly improved by data streamed from the physical world.

Watch the demo to see the digital twin in action.

Future computer vision-powered systems could continually perform route observation and incident recognition, automatically warning of and reacting to potential hazards.

The AI sensor models will be trained and optimized with a combination of real-world and synthetic data, some of which will be generated by the Omniverse Replicator software development kit framework. This will ensure models can perceive, plan and act when faced with everyday and unexpected scenarios.

The Future of Rail

With its pioneering approach to rail network optimization, DSD is contributing to the future of Europe’s rail system and industry development. Sharing its data pool across countries allows for continuous improvement and deployment across future vehicles, resulting in the highest possible quality while reducing costs.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register free for the conference, running through Thursday, Sept. 22, to explore how digital twins are transforming industries.


Reinventing Retail: Lowe’s Teams With NVIDIA and Magic Leap to Create Interactive Store Digital Twins

With tens of millions of weekly transactions across its more than 2,000 stores, Lowe’s helps customers achieve their home-improvement goals. Now, the Fortune 50 retailer is experimenting with high-tech methods to elevate both the associate and customer experience.

Using NVIDIA Omniverse Enterprise to visualize and interact with a store’s digital data, Lowe’s is testing digital twins in Mill Creek, Wash., and Charlotte, N.C. Its ultimate goal is to empower its retail associates to better serve customers, collaborate with one another in new ways and optimize store operations.

“At Lowe’s, we are always looking for ways to reimagine store operations and remove friction for our customers,” said Seemantini Godbole, executive vice president and chief digital and information officer at Lowe’s. “With NVIDIA Omniverse, we’re pulling data together in ways that have never been possible, giving our associates superpowers.”

Augmented Reality Restocking and ‘X-Ray Vision’

With its interactive digital twin, Lowe’s is exploring a variety of novel augmented reality use cases, including reconfiguring layouts, restocking support, real-time collaboration and what it calls “X-ray vision.”

Wearing a Magic Leap 2 AR headset, store associates can interact with the digital twin. This AR experience helps an associate compare what a store shelf should look like with what it actually looks like, and ensure it’s stocked with the right products in the right configurations.

And this isn’t just a single-player activity. Store associates on the ground can communicate and collaborate with centralized store planners via AR. For example, if a store associate notices an improvement that could be made to a proposed planogram for their store, they can flag it on the digital twin with an AR “sticky note.”

Lastly, a benefit of the digital twin and Magic Leap 2 headset is the ability to explore “X-ray vision.” Traditionally, a store associate might need to climb a ladder to scan or read small labels on cardboard boxes held in a store’s top stock. With an AR headset and the digital twin, the associate could look up at a partially obscured cardboard box from ground level, and, thanks to computer vision and Lowe’s inventory application programming interfaces, “see” what’s inside via an AR overlay.

Store Data Visualization and Simulation

Home-improvement retail is a tactile business. And when making decisions about how to create a new store display, a common way for retailers to see what works is to build a physical prototype, put it out into a brick-and-mortar store and examine how customers react.

With NVIDIA Omniverse and AI, Lowe’s is exploring more efficient ways to approach this process.

Just as e-commerce sites gather analytics to optimize the online shopping experience, the digital twin enables new ways of viewing sales performance and customer traffic data to optimize the in-store experience. 3D heatmaps and visual indicators that show the physical distance between items frequently bought together can help associates place those items near each other. Within a 100,000-square-foot store, for example, minimizing the number of steps needed to pick up an item is critical.

Using historical order and product location data, Lowe’s can also use NVIDIA Omniverse to simulate what might happen when a store is set up differently. Using AI avatars created in Lowe’s Innovation Labs, the retailer can simulate how far customers and associates might need to walk to pick up items that are often bought together.
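
For intuition, a back-of-the-envelope version of that analysis fits in a few lines of Python: given item positions on a floor plan and historical co-purchase counts, score a layout by the expected walking distance between frequently paired items. All coordinates and counts below are invented for illustration; Lowe’s actual simulations run in Omniverse with AI avatars.

```python
# Toy layout scoring: expected walk distance between co-purchased items,
# weighted by how often each pair appears together in orders.
# All positions and counts are invented for illustration.
from math import dist

# Item -> (x, y) position on the floor plan, in feet.
layout = {'paint': (20, 40), 'brushes': (180, 60), 'tape': (25, 45)}

# (item_a, item_b) -> number of historical orders containing both.
co_purchases = {('paint', 'brushes'): 900, ('paint', 'tape'): 650}

def expected_walk(layout, co_purchases):
    """Co-purchase-weighted average distance between item pairs."""
    total = sum(co_purchases.values())
    weighted = sum(n * dist(layout[a], layout[b])
                   for (a, b), n in co_purchases.items())
    return weighted / total

print(f'Expected walk per paired trip: {expected_walk(layout, co_purchases):.1f} ft')

# Re-slotting brushes next to paint should cut the score sharply.
layout['brushes'] = (22, 42)
print(f'After re-slotting brushes:     {expected_walk(layout, co_purchases):.1f} ft')
```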

NVIDIA Omniverse allows for hundreds of simulations to be run in a fraction of the time that it takes to build a physical store display, Godbole said.

Expanding Into the Metaverse

Lowe’s also announced today at NVIDIA GTC that it will soon make more than 600 photorealistic 3D product assets from its home-improvement library free for other Omniverse creators to use in their virtual worlds. All of these products will be available in the Universal Scene Description format on which Omniverse is built, and can be used in any metaverse created by developers using NVIDIA Omniverse Enterprise.

For Lowe’s, the future of home improvement is one in which AI, digital twins and mixed reality play a part in the daily lives of its associates, Godbole said. With NVIDIA Omniverse, the retailer is taking steps to build this future – and there’s a lot more to come as it tests new strategies.

Join a GTC panel discussion on Wednesday, Sept. 21, with Lowe’s Innovation Labs VP Cheryl Friedman and Senior Director of Creative Technology Mason Sheffield, who will discuss how Lowe’s is using AI and NVIDIA Omniverse to make the home-improvement retail experience even better.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register free for the conference — running through Thursday, Sept. 22 — to explore how digital twins are transforming industries.


Experience the Future of Vehicle Infotainment: NVIDIA DRIVE Concierge Brings Customized AI to Every Seat

With NVIDIA DRIVE, in-vehicle infotainment, or IVI, is so much more than just giving directions and playing music.

NVIDIA founder and CEO Jensen Huang demonstrated the capabilities of a true IVI experience during today’s GTC keynote. Using centralized, high-performance compute, the NVIDIA DRIVE Concierge platform spans traditional cockpit and cluster capabilities, as well as personalized, AI-powered safety, convenience and entertainment features for every occupant.

Drivers in the U.S. spend an average of nearly 450 hours in their car every year. With just a traditional cockpit and infotainment display, those hours can seem even longer.

DRIVE Concierge makes time in vehicles more enjoyable, convenient and safe, extending intelligent features to every passenger using the DRIVE AGX compute platform, DRIVE IX software stack and Omniverse Avatar Cloud Engine (ACE).

These capabilities include crystal-clear graphics and visualizations in the cockpit and cluster, intelligent digital assistants, driver and occupant monitoring, and streaming content such as games and movies.

With DRIVE Concierge, every passenger can enjoy their own intelligent experience.

Cockpit Capabilities

Running on the cross-domain DRIVE platform, DRIVE Concierge can virtualize and host multiple virtual machines on a single chip, rather than on distributed computers, for streamlined development.

With this centralized architecture, DRIVE Concierge seamlessly orchestrates driver information, cockpit and infotainment functions. It supports the Android Automotive operating system, so automakers can easily customize and scale their IVI offerings.

And digital cockpit and cluster features are just the beginning. DRIVE Concierge extends this premium functionality to the entire vehicle, with world-class confidence view, video-conferencing capabilities, digital assistants, gaming and more.

Visualizing Intelligence

Speed, fuel range and distance traveled are key data for human drivers to be aware of. When AI is at the wheel, however, a detailed view of the vehicle’s perception and planning layers is also crucial.

DRIVE Concierge is tightly integrated with the DRIVE Chauffeur platform to provide high-quality, 360-degree, 4D visualization with low latency. Drivers and passengers can always see what’s in the mind of the vehicle’s AI, with beautiful 3D graphics.

This visualization is critical to building trust between the autonomous vehicle and its passengers, so occupants can be confident in the AV system’s perception and planned path.

How May AI Help You?

In addition to revolutionizing driving, AI is creating a more intelligent vehicle interior with personalized digital assistants.

Omniverse ACE is a collection of cloud-based AI models and services for developers to easily build, customize and deploy interactive avatars.

With ACE, AV developers can create in-vehicle assistants that are easily customizable with speech AI, computer vision, natural language understanding, recommendation engines and simulation technologies.

These avatars can help make recommendations, book reservations, access vehicle controls and provide alerts, such as when a valuable item is left behind.

Game On

With software-defined capabilities, cars are becoming living spaces, complete with the same entertainment available at home.

NVIDIA DRIVE Concierge lets passengers watch videos and experience high-performance gaming wherever they go. Users can choose from their favorite apps and stream videos and games on any vehicle screen.

By using the NVIDIA GeForce NOW cloud gaming service, passengers can access more than 1,400 titles without the need for downloads, benefiting from automatic updates and unlimited cloud storage.

Safety and Security

Intelligent interiors provide an added layer of safety to vehicles, in addition to convenience and entertainment.

DRIVE Concierge uses interior sensors and dedicated deep neural networks for driver monitoring, ensuring the driver’s attention is on the road when a human is in control.

It can also perform passenger monitoring to make sure that occupants are safe and no precious cargo is left behind.

Using NVIDIA DRIVE Sim on Omniverse, developers can collaborate to design passenger interactions with such cutting-edge features in the vehicle.

By tapping into NVIDIA’s heritage of infotainment technology, DRIVE Concierge is revolutionizing the future of in-vehicle experiences.


NVIDIA DRIVE Thor Strikes AI Performance Balance, Uniting AV and Cockpit on a Single Computer

The next generation of autonomous vehicle computing is improving performance and efficiency at the speed of light.

During today’s GTC keynote, NVIDIA founder and CEO Jensen Huang unveiled DRIVE Thor, a superchip of epic proportions. The automotive-grade system-on-a-chip (SoC) is built on the latest CPU and GPU advances to deliver 2,000 teraflops of performance while reducing overall system costs.

DRIVE Thor succeeds NVIDIA DRIVE Orin in the company’s product lineup, incorporating the newest compute technology to accelerate industry deployment of intelligent-vehicle technology, targeting automakers’ 2025 models.

DRIVE Thor is the next generation in the NVIDIA AI compute roadmap.

Geely-owned premium EV maker ZEEKR will be the first customer for the next-generation platform, with production starting in 2025.

DRIVE Thor unifies traditionally distributed functions in vehicles — including digital cluster, infotainment, parking and assisted driving — for greater efficiency in development and faster software iteration.

Manufacturers can configure the DRIVE Thor superchip in multiple ways. They can dedicate all of the platform’s 2,000 teraflops to the autonomous driving pipeline, or use a portion for in-cabin AI and infotainment and another portion for driver assistance.

Like the current-generation NVIDIA DRIVE Orin, DRIVE Thor benefits from the productivity of the NVIDIA DRIVE software development kit, is designed to be ASIL-D functionally safe, and is built on a scalable architecture, so developers can seamlessly port their past software development to the latest platform.

Lightning Fast

In addition to raw performance, DRIVE Thor delivers an incredible leap in deep neural network accuracy.

DRIVE Thor marks the first inclusion of a transformer engine in the AV platform family. The transformer engine is a new component of the NVIDIA GPU Tensor Core. Transformer networks process video data as a single perception frame, enabling the compute platform to process more data over time.

With 8-bit floating point (FP8) precision, the SoC introduces a new data type for automotive. Traditionally, AV developers see a loss in accuracy when moving from 32-bit floating point to 8-bit integer data formats. FP8 precision eases this transition, making it possible for developers to transfer data types without sacrificing accuracy.
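
To see why the data type matters, consider a toy comparison. The sketch below models INT8 as uniform quantization with a tensor-wide scale and approximates an E4M3-style FP8 value by keeping three mantissa bits; it illustrates the trade-off only and is not the Transformer Engine’s actual numerics. Small-magnitude values are crushed by INT8’s fixed step size but keep roughly constant relative error in FP8.

```python
# Toy comparison of INT8 vs. a simplified FP8 (E4M3-style) quantization.
# Illustrative only: real FP8 also has exponent-range limits and
# special values that this sketch ignores.
import math

def quantize_int8(x, scale):
    """Uniform INT8: fixed step size set by a tensor-wide scale."""
    q = max(-128, min(127, round(x / scale)))
    return q * scale

def quantize_fp8(x):
    """E4M3-style float: 3 mantissa bits kept, so the *relative*
    error stays roughly constant across a wide dynamic range."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)       # x = m * 2**e, with 0.5 <= |m| < 1
    m = round(m * 16) / 16     # keep 1 implicit + 3 explicit mantissa bits
    return math.ldexp(m, e)

scale = 10.0 / 127             # INT8 scale sized for values up to ~10
for x in (9.7, 0.8, 0.013):
    i8, f8 = quantize_int8(x, scale), quantize_fp8(x)
    print(f'x={x:<6} int8 err={abs(x - i8) / x:7.2%}  fp8 err={abs(x - f8) / x:7.2%}')
```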

Additionally, DRIVE Thor uses updated ARM Poseidon AE cores, making it one of the highest performance processors in the industry.

Multi-Domain Computing

DRIVE Thor is as efficient as it is powerful.

The SoC is capable of multi-domain computing, meaning it can partition tasks for autonomous driving and in-vehicle infotainment. This multi-compute domain isolation lets concurrent time-critical processes run without interruption. On one computer, the vehicle can simultaneously run Linux, QNX and Android.

Typically, these types of functions are controlled by tens of electronic control units distributed throughout a vehicle. Rather than relying on these distributed ECUs, manufacturers can now consolidate vehicle functions using DRIVE Thor’s ability to isolate specific tasks.

With DRIVE Thor, automakers can consolidate intelligent vehicle functions on a single SoC.

All vehicle displays, sensors and more can connect to this single SoC, simplifying what has been an incredibly complex supply chain for automakers.

Two Is Always Better Than One

If one DRIVE Thor seems incredible, try two.

Customers can use one DRIVE Thor SoC, or they can connect two via the latest NVLink-C2C chip interconnect technology to serve as a monolithic platform that runs a single operating system.

This capability provides automakers with the compute headroom and flexibility to build software-defined vehicles that are continuously upgradeable through secure, over-the-air updates.

Designed with the best of NVIDIA GPU technology, DRIVE Thor is truly an AV SoC of heroic proportions.


HEAVY.AI Delivers Digital Twin for Telco Network Planning and Operations Based on NVIDIA Omniverse

Telecoms began touting the benefits of 5G networks six years ago. Yet the race to deliver ultrafast wireless internet today resembles a contest between the tortoise and the hare, as some mobile network operators struggle with costly and complex network requirements.

Advanced data analytics company HEAVY.AI today unveiled solutions to put carriers on more even footing. Its initial product, HeavyRF, is a next-generation network planning and operations tool based on the NVIDIA Omniverse platform for creating digital twins.

“Building out 5G networks globally will cost trillions of dollars over the next decade, and our telco network customers are rightly worried about how much of that is money not well spent,” said Jon Kondo, CEO of HEAVY.AI. “Using HEAVY advanced analytics and NVIDIA Omniverse-based real-time simulations, they’ll see big savings in time and money.”

HEAVY.AI also announced that Charter Communications is collaborating on incorporating the tool into modeling and planning for its Spectrum telco network, which has 32 million customers in 41 U.S. states. The collaboration extends HEAVY’s relationship with Charter, expanding from existing analytics operations into 5G network planning.

“HEAVY.AI’s new digital twin capabilities give us a way to explore and fine-tune our expanding 5G networks in ways that weren’t possible before,” said Jared Ritter, senior director of analytics and automation at Charter Communications.

Without the digital twin approach, telco operators must either physically place microcell towers in densely populated areas to understand the interaction between radio transmitters, the environment, and humans and devices on the move, or use tools that offer less detail about key factors such as tree density or high-rise interference.

Early deployments of 5G needed 300% more base stations for the same level of coverage offered by the previous generation, Long Term Evolution (LTE), because of higher spectrum bands. A 5G site also consumes 300% more power and costs 4x more than an LTE site if deployed in the same way, according to researcher Analysys Mason.

Those sobering figures are prompting the industry to look for efficiencies. Harnessing GPU-accelerated analytics and real-time geophysical mapping, HEAVY.AI’s digital twin solution allows telcos to test radio frequency (RF) propagation scenarios in seconds, powered by the HeavyRF module. This results in significant time and cost savings, because the base stations and microcells can be more accurately placed and tuned at first installation.
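
For a sense of the physics such a tool must evaluate, at vastly higher fidelity, the snippet below computes free-space path loss, the simplest analytic RF propagation model, and shows why higher 5G spectrum bands demand denser deployments. HeavyRF’s ray-traced simulations additionally account for the terrain, buildings and foliage this formula ignores.

```python
# Free-space path loss (FSPL), the simplest RF propagation model:
# FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# The same 500 m link at a mid-band LTE vs. a 5G mmWave frequency:
for label, f in (('1.9 GHz (LTE)', 1.9e9), ('28 GHz (5G mmWave)', 28e9)):
    print(f'{label:>18}: {fspl_db(500, f):.1f} dB over 500 m')
```

The roughly 23 dB gap between those two bands over the same link is one reason early 5G buildouts needed several times more sites for equivalent coverage.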

The HeavyRF module supports telcos’ goals to plan, build and operate new networks more efficiently by tightly integrating key business information such as mobility and parcels data, as well as customer experience data, within RF planning workflows.

Using an RF-synchronized digital twin would enable planners at Charter Communications to optimize capacity and coverage, plus interactively see how changes in deployment patterns translate into customer acquisition and retention at the household level.

The goal is to use machine learning and big data pipelines to continuously mirror existing real-world conditions.

The digital twin will use the parallel computing capabilities of modern GPUs for visual simulation, as well as to generate physical simulations of RF signals using real-time RTX ray tracing, powered by NVIDIA Omniverse’s RTX Renderer.

For telcos, it’s not just about investing in traditional networks. With the rise of AI applications and services, these companies seek to lay the foundation for 5G-enabled devices, autonomous vehicles, appliances, robots and city infrastructure.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register for the conference — running through Thursday, Sept. 22 — to explore how digital twins are transforming industries.


Reconstructing the Real World in DRIVE Sim With AI

Autonomous vehicle simulation poses two challenges: generating a world with enough detail and realism that the AI driver perceives it as real, and creating simulations at a large enough scale to cover all the cases on which the AI driver needs to be fully trained and tested.

To address these challenges, NVIDIA researchers have created new AI-based tools to build simulations directly from real-world data. NVIDIA founder and CEO Jensen Huang previewed the breakthrough during the GTC keynote.

This research includes award-winning work first published at SIGGRAPH, a computer graphics conference held last month.

Neural Reconstruction Engine

The Neural Reconstruction Engine is a new AI toolset for the NVIDIA DRIVE Sim simulation platform that uses multiple AI networks to turn recorded video data into simulation.

The new pipeline uses AI to automatically extract the key components needed for simulation, including the environment, 3D assets and scenarios. These pieces are then reconstructed into simulation scenes that have the realism of data recordings, but are fully reactive and can be manipulated as needed. Achieving this level of detail and diversity by hand is costly, time consuming and not scalable.

Environments and Assets

A simulation needs an environment in which to operate. The AI pipeline converts 2D video data from a real-world drive to a dynamic, 3D digital twin environment that can be loaded into DRIVE Sim.

A 3D simulation environment generated from recorded driving data using AI.

The DRIVE Sim AI pipeline follows a similar process to reconstruct other 3D assets. Engineers can use the assets to reconstruct the current scene or place them in a larger library of assets to be used in any simulation.

Using the asset-harvesting pipeline is key to growing the DRIVE Sim library and ensuring it matches the diversity and distribution of the real world.

Assets can be harvested from real-world data, turned into 3D objects and reused in other scenes. Here, the tow truck is reconstructed from the scene on the left and used in a different simulation shown on the right.

Scenarios

Scenarios are the events that take place during a simulation in an environment combined with assets.

The Neural Reconstruction Engine assigns AI-based behaviors to the actors in the scene, so that when presented with the original events, they behave precisely as they did in the real drive. However, since they have an AI behavior model, the actors in the simulation can respond and react to changes by the AV or other scene elements.

Because these scenarios are all occurring in simulation, they can also be manipulated to add new situations. Timing and location of events can be altered. Developers can even incorporate entirely new elements, synthetic or real, to make a scenario more challenging, such as the addition of a child chasing a ball to the scene below.

Synthetic objects can be mixed with real-world scenarios.

Integration Into DRIVE Sim

Once the environment, assets and scenario have been extracted, they’re reassembled in DRIVE Sim to create a 3D simulation of the recorded scene or mixed with other assets to create a completely new scene.

DRIVE Sim provides the tools for developers to adjust dynamic and static objects, the vehicle’s path, and the location, orientation and parameters of the vehicle sensors.

The same scenes in DRIVE Sim are also used to generate pre-labeled synthetic data to train perception systems. Randomizations are applied on top of recreated scenes to add diversity to the training data. Building scenes out of real-world data greatly reduces the sim-to-real gap.

Reconstructed scenes can be augmented with synthetic assets and used to produce new data with ground truth for training AV perception systems.

The ability to mix and match simulation formats is a significant advantage in comprehensively testing and validating AVs at scale. Engineers can manipulate events in a world that is responsive and matches their needs precisely.

The Neural Reconstruction Engine is the result of work by the research team at NVIDIA, and will be integrated into future releases of DRIVE Sim. This breakthrough will enable developers to take advantage of both physics-based and neural-driven simulation on the same cloud-based platform.


Meet the Omnivore: Christopher Scott Constructs Architectural Designs, Virtual Environments With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Christopher Scott

Growing up in a military family, Christopher Scott moved more than 30 times, which instilled in him “the ability to be comfortable with, and even motivated by, new environments,” he said.

Today, the environments he explores — and creates — are virtual ones.

As chief technical director for 3D design and visualization services at Infinite-Compute, Scott creates physically accurate virtual environments using familiar architectural products in conjunction with NVIDIA Omniverse Enterprise, a platform for connecting and building custom 3D pipelines.

With a background in leading cutting-edge engineering projects for the U.S. Department of Defense, Scott now creates virtual environments focused on building renovation and visualization for the architecture, engineering, construction and operations (AECO) industry.

These true-to-reality virtual environments — whether of electrical rooms, factories or modern home designs — enable quick, efficient design of products, processes and facilities before bringing them to life in the real world.

They also help companies across AECO and other industries save money, speed project completion and make designs interactive for customers — as will be highlighted at NVIDIA GTC, a global conference on AI and the metaverse, running online Sept. 19-22.

“Physically accurate virtual environments help us deliver client projects faster, while maintaining a high level of quality and performance consistency,” said Scott, who’s now based in Austin, Texas. “The key value we offer clients is the ability to make better decisions with confidence.”

To construct his visualizations, Scott uses Omniverse Create and Omniverse Connectors for several third-party applications: Trimble SketchUp for 3D drawing and design; Autodesk Revit for 3D design and 2D annotation of buildings; and Unreal Engine for creating walkthrough simulations and 3D virtual spaces.

In addition, he uses software like Blender for visual effects, motion graphics and animation, and PlantFactory for modeling 3D vegetation, which gives his virtual spaces a lively and natural aesthetic.

Project Speedups With Omniverse

Within just four years, Scott went from handling 50 projects a year to more than 3,500, he said.

Around 80 of his projects each month include lidar-to-point-cloud work, a complex process that transforms spatial data into collections of coordinates used to build 3D models for manufacturing and design.
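
At its core, that kind of work applies rigid transforms to large arrays of 3D points, for example registering a lidar scan into a shared model coordinate frame. Here’s a minimal numpy sketch with an invented pose; production pipelines add calibration, filtering and surface reconstruction on top.

```python
# Sketch: registering lidar points into a model coordinate frame with a
# rigid transform. The pose and points are invented for illustration.
import numpy as np

def make_pose(yaw_rad: float, translation) -> np.ndarray:
    """4x4 homogeneous transform: rotation about Z plus a translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation
    return T

# N x 3 point cloud (a few dummy points standing in for a full scan).
points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.5],
                   [3.0, 1.0, 0.2]])

pose = make_pose(np.pi / 2, translation=[10.0, 5.0, 0.0])

# Promote to homogeneous coordinates, transform, drop back to 3D.
homogeneous = np.hstack([points, np.ones((len(points), 1))])
registered = (pose @ homogeneous.T).T[:, :3]
print(registered)
```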

Using Omniverse doubles productivity for this demanding workload, he said, as it offers physically accurate photorealism and rendering in real time, as well as live-sync collaboration across users.

“Previously, members of our team functioned as individual islands of productivity,” Scott said. “Omniverse gave us the integrated collaboration we desired to enhance our effectiveness and efficiency.”

At Omniverse’s core is Universal Scene Description — an open-source, extensible 3D framework and common language for creating virtual worlds.

“Omniverse’s USD standard to integrate outputs from multiple software programs allowed our team to collaborate on a source-of-truth project — letting us work across time zones much faster,” said Scott, who further accelerates his workflow by running it on NVIDIA RTX GPUs, including the RTX A6000 on Infinite-Compute’s on-demand cloud infrastructure.

“It became clear very soon after appreciating the depth and breadth of Omniverse that investing in this pipeline was not just enabling me to improve current operations,” he added. “It provides a platform for future growth — for my team members and my organization as a whole.”

While Scott says his work leans more technical than creative, he sees using Omniverse as a way to bridge these two sides of his brain.

“I’d like to think that adopting technologies like Omniverse to deliver cutting-edge solutions that have a meaningful and measurable impact on my clients’ businesses is, in its own way, a creative exercise, and perhaps even a work of art,” he said.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Hear about NVIDIA’s latest AI breakthroughs powering graphics and virtual worlds at GTC, running online Sept. 19-22. Register free now and attend the top sessions for 3D creators and developers to learn more about how Omniverse can accelerate workflows.

Join the NVIDIA Omniverse User Group to connect with the growing community and see Scott’s work in Omniverse celebrated.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.


GFN Thursday Delivers Seven New Games This Week

TGIGFNT: thank goodness it’s GFN Thursday. Start your weekend early with seven new games joining the GeForce NOW library of over 1,400 titles.

Whether you’re streaming on an older-than-the-dinosaurs PC, a Mac that normally couldn’t dream of playing PC titles, or a mobile device, GeForce NOW lets you play your way.

Get Right Into the Gaming

Test your tactical skills in Isonzo, the new authentic WWI first-person shooter.

Isonzo
The Great War on the Italian Front is brought to life and streaming from the cloud.

Battle among the scenic peaks, rugged valleys and idyllic towns of northern Italy. Choose from six classes based on historical combat roles and build a loadout from a selection of weapons, equipment and perks linked to that class. Shape a dynamic battlefield by laying sandbags and wire, placing ammo crates, deploying trench periscopes or sniper shields, and more.

Lead the charge to victory in this game and six more this week, including:

Members can also discover impressive new prehistoric species with the Jurassic World Evolution 2: Late Cretaceous Pack DLC, available on GeForce NOW this week.

Inspired by the fascinating Late Cretaceous period, this pack includes four captivating species that roamed the land, sea and air over 65 million years ago, from soaring, stealthy hunters of the skies to one of the largest dinosaurs ever discovered.

Finally, kick off the weekend by telling us about a game that you love on Twitter or in the comments below.


Reinventing the Wheel: Gatik’s Apeksha Kumavat Accelerates Autonomous Delivery for Walmart and More

As consumers expect faster, cheaper deliveries, companies are turning to AI to rethink how they move goods.

Foremost among these new systems are “hub-and-spoke,” or middle-mile, operations, where companies place distribution centers closer to retail operations for quicker access to inventory. However, faster delivery is just part of the equation. These systems must also be low-cost for consumers.

Autonomous delivery company Gatik seeks to provide lasting solutions for faster and cheaper shipping. By automating the routes between the hub (the distribution center) and the spokes (retail stores), these operations can run around the clock, efficiently and with minimal investment.

Gatik co-founder and Chief Engineer Apeksha Kumavat joined NVIDIA’s Katie Burke Washabaugh on the latest episode of the AI Podcast to walk through how the company is developing autonomous trucks for middle-mile delivery.

Kumavat also discussed the progress of commercial pilots with companies such as Walmart and Georgia-Pacific.

She’ll elaborate on Gatik’s autonomous vehicle development in a virtual session at NVIDIA GTC on Tuesday, Sept. 20. Register free to learn more.

You Might Also Like

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game, Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.