NVIDIA, BMW Blend Reality, Virtual Worlds to Demonstrate Factory of the Future

The factories of the future will have a soul — a “digital twin” that blends man and machine in stunning new ways.

In a demo blending reality and virtual reality, robotics and AI to manage one of BMW’s automotive factories, NVIDIA CEO Jensen Huang on Monday rolled out a stunning vision of the future of manufacturing.

“We are working with BMW to create a future factory,” Huang announced during his keynote address at NVIDIA’s GPU Technology Conference before giving his audience a look.

The demo highlights the general availability of NVIDIA Omniverse Enterprise, the first technology platform enabling global 3D design teams to work together simultaneously across multiple software suites in a shared virtual space.

The AI factory demo brings together a full suite of NVIDIA technologies on Omniverse, including the NVIDIA Isaac platform for robotics, the NVIDIA EGX edge computing platform and the NVIDIA Aerial software development kit, which brings GPU-accelerated, software-defined 5G wireless radio access networks to the factory floor.

‘The World’s Largest Custom-Manufacturing Company’

Inside the digital twin of BMW’s assembly system, powered by Omniverse, runs an entire factory in simulation.

Each of BMW’s factory lines can produce up to 10 different cars, and BMW prides itself on giving customers plenty of choices.

There are over 100 options for each car, and more than 40 BMW models. In all, there are 2,100 possible ways to configure a new BMW.

“BMW may very well be the world’s largest custom-manufacturing company,” Huang said.

These vehicles are produced in 31 factories located around the world, explained Milan Nedeljković, member of the Board of Management of BMW AG.

Moving the Parts That Go into the Machines That Move Your Parts

In an instant, Huang and Nedeljković summoned a digital twin of one of BMW’s factories — and the screen was filled with gleaming cars being assembled by banks of perfectly synchronized robots — all simulated.

To design and reconfigure its factories, BMW’s global teams can collaborate in real time using different software packages like Revit, Catia or point clouds to design and plan the factory in 3D, and all the changes are visible, in real time, in Omniverse.

“The capability to operate in a perfect simulation revolutionizes BMW’s planning processes,” Nedeljković said.

Some of that work has to be hands-on. BMW regularly reconfigures its factories to accommodate new vehicle launches. Now, thanks to Omniverse, that doesn’t mean workers have to travel.

Nedeljković showed two BMW planning experts located in different parts of the world testing a new line design in Omniverse.

One of them “wormholes” — or travels virtually — into an assembly simulation with a motion capture suit and records task movements.

The other adjusts the line design, in real time.

“They work together to optimize the line as well as worker ergonomics and safety,” Nedeljković said.

The next step: recreating these kinds of interactions, at scale, in simulations, Nedeljković said.

To simulate workflow in Omniverse, digital humans are trained with data from real associates and then used to test new workflows in simulation, planning for worker ergonomics and efficiency.

“That’s exactly why NVIDIA has Digital Human for simulation,” Huang said. “Digital Humans are trained with data from real associates.”

BMW’s 57,000 factory workers share workspace with robots designed to make their jobs easier.

Omniverse, Nedeljković said, will help robots adapt to BMW’s reconfigured factories rapidly.

“With NVIDIA Isaac robotics platform, BMW is deploying a fleet of intelligent robots for logistics to improve the material flow in our production,” Nedeljković said.

That agility is necessary since BMW produces 2.5 million vehicles per year, and 99 percent of them are custom.

Omniverse can tap into NVIDIA Isaac for synthetic data generation and domain randomization, Huang said. That’s key to bootstrapping machine learning.

“Isaac Sim generates millions of relevant synthetic images, and varies the environment to teach robots,” Huang said.

Domain randomization can generate an infinite permutation of photorealistic objects, textures, orientations, and lighting conditions, Huang said.

“Simulation offers perfect ground truth, whether for detection, segmentation or depth perception,” he added.
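To make this concrete, here is a minimal Python sketch of the idea behind domain randomization: draw random scene parameters for every synthetic capture so a perception model never overfits to one look. It is illustrative only; the texture names and parameter ranges are hypothetical, and Isaac Sim’s actual randomization tools cover many more dimensions.

```python
import random
from dataclasses import dataclass

# Hypothetical scene parameters; a real pipeline also randomizes materials,
# meshes, camera poses, distractor objects and more.
TEXTURES = ["brushed_metal", "matte_plastic", "painted_steel"]

@dataclass
class SceneSample:
    texture: str
    yaw_degrees: float          # object orientation
    light_intensity: float      # arbitrary lux-like units
    light_temperature_k: float  # warm (2500K) to cool (7500K) lighting

def randomize_scene(rng: random.Random) -> SceneSample:
    """Draw one randomized scene configuration for a synthetic capture."""
    return SceneSample(
        texture=rng.choice(TEXTURES),
        yaw_degrees=rng.uniform(0.0, 360.0),
        light_intensity=rng.uniform(200.0, 2000.0),
        light_temperature_k=rng.uniform(2500.0, 7500.0),
    )

if __name__ == "__main__":
    rng = random.Random(42)  # seeded so dataset builds are reproducible
    for i in range(3):
        print(i, randomize_scene(rng))
```

Each sampled configuration drives one render, and the resulting images ship with perfect labels because the simulator knows exactly what it placed in the scene.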

Huang and Nedeljković showed a BMW employee monitoring operations in the factory. The operator is able to assign missions to different robots and see a photorealistic digital twin of their progress in Omniverse — all updated by sensors throughout the factory.

With NVIDIA Fleet Command software, workers can securely orchestrate robots, and other devices, in the factory, Huang explained.

They can monitor complex manufacturing cells in real-time, update software over the air, and launch robots in the factory on missions.

Humans can even lend robots a “helping hand.” When an alert is sent to Mission Control, one of BMW’s human associates can teleoperate the robot — looking through its camera to guide it through a 5G connection.

Then, with a push of a button, the operator returns the robot to autonomous control.

Continuous Improvement, Continually Improving

Omniverse will help BMW reduce planning time and improve flexibility and precision, producing 30 percent more efficient planning processes.

“NVIDIA Omniverse and NVIDIA AI give us the chance to simulate the 31 factories in our production network,” Nedeljković said.

All the elements of the complete factory model — including the associates, the robots, the buildings, the assembly parts — can be simulated to support a wide range of AI-enabled use cases such as virtual factory planning, autonomous robots, predictive maintenance and big data analytics, he explained.

“These new innovations will reduce the planning times, improve flexibility and precision, and at the end produce 30 percent more efficient planning processes,” Nedeljković said.

The result: a beautifully crafted new car, an amazing machine that’s the product of an amazing machine — a factory able to capture and replicate every motion in the real world to a digital one, and back.


New Energy Vehicles Power Up with NVIDIA DRIVE

The electric vehicle revolution is about to reach the next level.

Leading startups and EV brands have all announced plans to deliver intelligent vehicles to the mass market beginning in 2022. And these new, clean-energy fleets will achieve AI capabilities for greater safety and efficiency with the high-performance compute of NVIDIA DRIVE.

The car industry has become a technology industry — future cars will be completely programmable with software-driven business models. Companies will offer services and subscriptions over the air for the life of the car.

These new energy vehicles, or NEVs, will kickstart this transition with centralized, software-defined compute that enables continuously improving, cutting-edge AI capabilities.

NEV Newcomers

For some companies, 2022 marks the initial launch of their visionary concepts into production reality.

Canoo unveiled its first vehicle — an all-electric van — in 2019. Now, the startup is on track to deliver an entire line of EVs, including a delivery truck, pickup and sports sedan, to customers starting in 2022.

Canoo’s flagship personal vehicle will leverage NVIDIA DRIVE AGX Xavier for smart driver assistance features. And since the DRIVE AGX platform is open and scalable, Canoo can continue to develop increasingly advanced capabilities through the life of its vehicles.

Also on the horizon is the much anticipated Faraday Future FF91. This premium EV is designed to be an intelligent third living space, with a luxurious interior packed with convenience features powered by NVIDIA DRIVE.

Also charging onto the EV scene is VinFast, a startup planning to launch a fleet of smart vehicles beginning in 2022. These vehicles will provide industry-leading safety and enhanced autonomy, leveraging the AI compute of NVIDIA DRIVE Xavier and, for subsequent generations, NVIDIA DRIVE Orin.

“NVIDIA is a vital partner for our work in autonomous driving,” said Hung Bui, chief executive of VinAI. “NVIDIA DRIVE delivers the core compute for our vehicles, delivering advanced sensing and other expanding capabilities.”

A Leading Legacy

NIO has announced a supercomputer to power its automated and autonomous driving features, with NVIDIA DRIVE Orin at its core.

The computer, known as Adam, will achieve over 1,000 trillion operations per second of performance with the redundancy and diversity necessary for safe autonomous driving. It also enables personalization in the vehicle, learning from individual driving habits and preferences while continuously improving from fleet data.

The Orin-powered supercomputer will debut in the flagship ET7 sedan, scheduled for production in 2022, and will be in every NIO model to follow.

Breakout EV maker Li Auto will also develop its next generation of electric vehicles using NVIDIA DRIVE AGX Orin. These new vehicles are being developed in collaboration with tier 1 supplier Desay SV and will offer advanced autonomous driving features, as well as extended battery range, for truly intelligent mobility.

This high-performance platform will enable Li Auto to deploy an independent, advanced autonomous driving system with its upcoming fleet.

Xpeng is already putting its advanced driving technology on the road. In March, the automaker completed a six-day cross-country autonomous drive with a fleet of intelligent P7 sedans. The vehicles operated without human intervention using the XPilot 3.0 autonomous driving system, powered by NVIDIA DRIVE AGX Xavier.

Finally, one of the world’s largest automakers, SAIC, is evolving to meet the industry’s biggest technological transformations with two new EV brands packed with advanced AI features.

R-Auto is a family of next-generation vehicles featuring the R-Tech advanced intelligent assistant, powered by NVIDIA DRIVE AGX Orin. R-Tech uses the unprecedented level of compute performance of Orin to run perception, sensor fusion and prediction for automated driving features in real time.

The ultra-premium IM brand is the product of a partnership with e-tail giant Alibaba. The long-range electric vehicles will feature AI capabilities powered by the high-performance, energy-efficient NVIDIA DRIVE Orin compute platform.

The first two vehicles in the lineup — a flagship sedan and SUV — will have autonomous parking and other automated driving features, as well as a 93 kWh battery that comes standard. SAIC will begin taking orders for the sedan at the Shanghai Auto Show later this month, with the SUV following in 2022.

EVs are driving the next decade of transportation. And with NVIDIA DRIVE at the core, these vehicles have the intelligence and performance to go the distance.


NVIDIA CEO Introduces Software, Silicon, Supercomputers ‘for the Da Vincis of Our Time’

Buckle up. NVIDIA CEO Jensen Huang just laid out a singular vision filled with autonomous machines, super-intelligent AIs and sprawling virtual worlds – from silicon to supercomputers to AI software – in a single presentation.

“NVIDIA is a computing platform company, helping to advance the work for the Da Vincis of our time – in language understanding, drug discovery, or quantum computing,” Huang said in a talk delivered from behind his kitchen counter to NVIDIA’s GPU Technology Conference. “NVIDIA is the instrument for your life’s work.”

During a presentation punctuated with product announcements, partnerships, and demos that danced up and down the modern technology stack, Huang spoke about how NVIDIA is investing heavily in CPUs, DPUs, and GPUs and weaving them into new data center scale computing solutions for researchers and enterprises.

He talked about NVIDIA as a software company, offering a host of software built on NVIDIA AI as well as NVIDIA Omniverse for simulation, collaboration, and training autonomous machines.

Finally, Huang spoke about how NVIDIA is moving automotive computing forward with a new SoC, NVIDIA Atlan, and new simulation capabilities.

CPUs, DPUs and GPUs

Huang announced NVIDIA’s first data center CPU, Grace, named after Grace Hopper, a U.S. Navy rear admiral and computer programming pioneer.

Grace is a highly specialized processor targeting the largest data-intensive HPC and AI applications, such as the training of next-generation natural-language processing models with more than one trillion parameters.

When tightly coupled with NVIDIA GPUs, a Grace-based system will deliver 10x faster performance than today’s state-of-the-art NVIDIA DGX-based systems, which run on x86 CPUs.

While the vast majority of data centers are expected to be served by existing CPUs, Grace will serve a niche segment of computing. “Grace highlights the beauty of Arm,” Huang said.

Huang also announced that the Swiss National Supercomputing Centre will build a supercomputer, dubbed Alps, that will be powered by Grace and NVIDIA’s next-generation GPU. The U.S. Department of Energy’s Los Alamos National Laboratory will also bring a Grace-powered supercomputer online in 2023, NVIDIA announced.

Accelerating Data Centers with BlueField-3

Further accelerating the infrastructure upon which hyperscale data centers, workstations, and supercomputers are built, Huang announced the NVIDIA BlueField-3 DPU.

The next-generation data processing unit will deliver the most powerful software-defined networking, storage and cybersecurity acceleration capabilities.

Where BlueField-2 offloaded the equivalent of 30 CPU cores, it would take 300 CPU cores to secure, offload and accelerate network traffic at 400 Gbps the way BlueField-3 does — a 10x leap in performance, Huang explained.

‘Three Chips’

Grace and BlueField are key parts of a data center roadmap consisting of three chips: CPU, GPU and DPU, Huang said. Each chip architecture has a two-year rhythm, likely with a kicker in between. One year will focus on x86 platforms, the next on Arm platforms.

“Every year will see new exciting products from us,” Huang said. “Three chips, yearly leaps, one architecture.”

Expanding Arm into the Cloud 

Arm, Huang said, is the most popular CPU in the world. “For good reason – it’s super energy-efficient and its open licensing model inspires a world of innovators,” he said.

In other markets, like cloud, enterprise and edge data centers, supercomputing and PCs, Arm is just getting started. Huang announced key Arm partnerships — Amazon Web Services in cloud computing, Ampere Computing in scientific and cloud computing, Marvell in hyper-converged edge servers, and MediaTek to create a Chrome OS and Linux PC SDK and reference system.

DGX – A Computer for AI

Weaving together NVIDIA silicon and software, Huang announced upgrades to NVIDIA’s DGX Station “AI data center in-a-box” for workgroups, and the NVIDIA DGX SuperPOD, NVIDIA’s AI-data-center-as-a-product for intensive AI research and development.

The new DGX Station 320G harnesses 320 gigabytes of super-fast HBM2e memory connected to four NVIDIA A100 GPUs, with 8 terabytes per second of memory bandwidth. Yet it plugs into a normal wall outlet and consumes just 1,500 watts of power, Huang said.

The DGX SuperPOD gets the new 80GB NVIDIA A100, bringing the SuperPOD to 90 terabytes of HBM2e memory. It’s been upgraded with NVIDIA BlueField-2, and NVIDIA is now offering it with the NVIDIA Base Command DGX management and orchestration tool.

NVIDIA EGX for Enterprise 

Further democratizing AI, Huang introduced a new class of NVIDIA-certified systems, high-volume enterprise servers from top manufacturers. They’re now certified to run the NVIDIA AI Enterprise software suite, exclusively certified for VMware vSphere 7, the world’s most widely used compute virtualization platform.

Expanding the NVIDIA-certified server ecosystem is a new wave of systems, announced today, featuring the NVIDIA A30 GPU for mainstream AI and data analytics and the NVIDIA A10 GPU for AI-enabled graphics, virtual workstations, and mixed compute and graphics workloads.

AI-on-5G

Huang also discussed NVIDIA’s AI-on-5G computing platform, which brings together 5G and AI into a new type of computing platform designed for the edge. It pairs the NVIDIA Aerial software development kit with the NVIDIA BlueField-2 A100, combining GPUs and CPUs into “the most advanced PCIe card ever created.”

Partners Fujitsu, Google Cloud, Mavenir, Radisys and Wind River are all developing solutions for NVIDIA’s AI-on-5G platform.

NVIDIA AI and NVIDIA Omniverse

Virtual, real-time 3D worlds inhabited by people, AIs and robots are no longer science fiction.

NVIDIA Omniverse is cloud-native, scalable to multiple GPUs, physically accurate, takes advantage of RTX real-time path tracing and DLSS, simulates materials with NVIDIA MDL, simulates physics with NVIDIA PhysX, and fully integrates NVIDIA AI, Huang explained.

“Omniverse was made to create shared virtual 3D worlds,” Huang said. “Ones not unlike the science fiction metaverse described by Neal Stephenson in his early 1990s novel ‘Snow Crash.’”

Huang announced that starting this summer, Omniverse will be available for enterprise licensing. Since its release in open beta, partners such as Foster + Partners in architecture, ILM in entertainment, Activision in gaming, and advertising powerhouse WPP have put Omniverse to work.

The Factory of the Future

To show what’s possible with Omniverse, Huang, along with Milan Nedeljković, member of the Board of Management of BMW AG, showed how a photorealistic, real-time digital model — a “digital twin” of one of BMW’s highly automated factories — can accelerate modern manufacturing.

“These new innovations will reduce the planning times, improve flexibility and precision and at the end produce 30 percent more efficient planning,” Nedeljković said.

A Host of AI Software

Huang announced NVIDIA Megatron — a framework for training Transformers, which have led to breakthroughs in natural-language processing. Transformers generate document summaries, complete phrases in email, grade quizzes, generate live sports commentary, even code.

He detailed new models for Clara Discovery — NVIDIA’s acceleration libraries for computational drug discovery — and a partnership with Schrödinger, the leading physics-based and machine learning computational platform for drug discovery and materials science.

To accelerate research into quantum computing — which relies on quantum bits, or qubits, that can be 0, 1, or both — Huang introduced cuQuantum to accelerate quantum circuit simulators so researchers can design better quantum computers.
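For readers new to the idea, a quantum circuit simulator is, at bottom, linear algebra on a state vector. The tiny NumPy sketch below puts one qubit into superposition with a Hadamard gate; it illustrates the math that cuQuantum accelerates at scale, not the cuQuantum API itself.

```python
import numpy as np

# Single-qubit state |0>, and the Hadamard gate, which puts a qubit into an
# equal superposition of |0> and |1>.
ZERO = np.array([1.0, 0.0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ZERO                     # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2   # Born rule: probability = |amplitude|^2
print(probabilities)                 # [0.5 0.5] -- "0, 1, or both"
```

Simulating n qubits requires a state vector of 2^n complex amplitudes, which is why GPU acceleration matters as circuits grow.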

To secure modern data centers, Huang announced NVIDIA Morpheus – a data center security platform for real-time, all-packet inspection built on NVIDIA AI, NVIDIA BlueField, NetQ network telemetry software and EGX.

To accelerate conversational AI, Huang announced the availability of NVIDIA Jarvis – a state-of-the-art deep learning AI for speech recognition, language understanding, translations, and expressive speech.

To accelerate recommender systems — the engine for search, ads, online shopping, music, books, movies, user-generated content, and news — Huang announced NVIDIA Merlin is now available on NGC, NVIDIA’s catalog of deep learning framework containers.

And to help customers turn their expertise into AI, Huang introduced NVIDIA TAO to fine-tune and adapt NVIDIA pre-trained models with data from customers and partners while protecting data privacy.

“There is infinite diversity of application domains, environments, and specializations,” Huang said. “No one has all the data – sometimes it’s rare, sometimes it’s a trade secret.”

The final piece is the inference server, NVIDIA Triton, which gleans insights from the continuous streams of data coming into customers’ EGX servers or cloud instances, Huang said.

“Any AI model that runs on cuDNN, so basically every AI model,” Huang said. “From any framework – TensorFlow, PyTorch, ONNX, OpenVINO, TensorRT, or custom C++/Python backends.”
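As a rough sketch of what querying Triton looks like from Python, the snippet below uses the open-source tritonclient package. The model name and tensor names are hypothetical; they must match the model’s configuration in your Triton model repository, and a Triton server must be running at the given URL.

```python
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

# Connect to a running Triton server (default HTTP port is 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# "my_classifier", "INPUT__0" and "OUTPUT__0" are hypothetical names.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT__0", batch.shape, "FP32")
infer_input.set_data_from_numpy(batch)

result = client.infer(model_name="my_classifier", inputs=[infer_input])
scores = result.as_numpy("OUTPUT__0")
print(scores.shape)
```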

Advancing Automotive with NVIDIA DRIVE

Autonomous vehicles are “one of the most intense machine learning and robotics challenges – one of the hardest but also with the greatest impact,” Huang said.

NVIDIA is building modular, end-to-end solutions for the $10 trillion transportation industry so partners can leverage the parts they need.

Huang said NVIDIA DRIVE Orin, NVIDIA’s AV computing system-on-a-chip, which goes into production in 2022, was designed to be the car’s central computer.

Volvo Cars has been using the high-performance, energy-efficient compute of NVIDIA DRIVE since 2016 and developing AI-assisted driving features for new models on NVIDIA DRIVE Xavier with software developed in-house and by Zenseact, Volvo Cars’ autonomous driving software development company.

And Volvo Cars announced during the GTC keynote today that it will use NVIDIA DRIVE Orin to power the autonomous driving computer in its next-generation cars.

The decision deepens the companies’ collaboration, extending to even more software-defined model lineups, beginning with the next-generation XC90, set to debut next year.

Meanwhile, NVIDIA DRIVE Atlan, NVIDIA’s next-generation automotive system-on-a-chip, and a true data center on wheels, “will be yet another giant leap,” Huang announced.

Atlan will deliver more than 1,000 trillion operations per second, or TOPS, and targets 2025 models.

“Atlan will be a technical marvel – fusing all of NVIDIA’s technologies in AI, auto, robotics, safety, and BlueField secure data centers,” Huang said.

Huang also announced NVIDIA’s eighth-generation Hyperion car platform – including reference sensors, AV and central computers, 3D ground-truth data recorders, networking, and all of the essential software.

Huang also announced that DRIVE Sim will be available for the community this summer.

Just as Omniverse can build a digital twin of the factories that produce cars, DRIVE Sim can be used to create a digital twin of autonomous vehicles to be used throughout AV development.

“The DRIVE digital twin in Omniverse is a virtual space that every engineer and every car in the fleet is connected to,” Huang said.

The ‘Instrument for Your Life’s Work’

Huang wrapped up with four points.

NVIDIA is now a 3-chip company – offering GPUs, CPUs, and DPUs.

NVIDIA is a software platform company and is dedicating enormous investment in NVIDIA AI and NVIDIA Omniverse.

NVIDIA is an AI company with Megatron, Jarvis, Merlin, Maxine, Isaac, Metropolis, Clara, and DRIVE, and pre-trained models you can customize with TAO.

NVIDIA is expanding AI with DGX for researchers, HGX for cloud, EGX for enterprise and 5G edge, and AGX for robotics.

“Mostly,” Huang said. “NVIDIA is the instrument for your life’s work.”


Top Robotaxi Companies Hail Rides on NVIDIA DRIVE

It’s time to hail the new era of transportation.

During his keynote at the GPU Technology Conference today, NVIDIA founder and CEO Jensen Huang outlined the broad ecosystem of companies developing next-generation robotaxis on NVIDIA DRIVE. These forward-looking manufacturers are set to transform the way we move with safer, more efficient vehicles for everyday mobility.

The world moves 2 trillion miles a year. With some of those miles traveled through a mobility service, the opportunity for innovation is tremendous.

Robotaxis are fully autonomous vehicles that can operate without human supervision in geofenced areas, such as cities or residential communities. With a set of high-resolution sensors and a supercomputing platform in place of a driver, they can safely operate 24 hours a day, seven days a week.

Currently, the industry is working to roll out level 2+ AI-assisted driving features in private vehicles, with experts forecasting level 4 vehicles arriving later this decade.

And as a safer alternative to current modes of transit, robotaxis are expected to draw quick adoption once deployed at scale, making up more than 5 percent of vehicle miles traveled worldwide by 2030.

Achieving this mobility revolution requires centralized, high-performance compute. The amount of sensor data a robotaxi needs to process is 100 times greater than today’s most advanced vehicles. The complexity in software also increases exponentially, with an array of redundant and diverse deep neural networks running simultaneously as part of an integrated software stack.

NVIDIA is the only company that enables this level of AI development from end to end, which is why virtually every robotaxi maker and supplier is using its GPU-powered offerings.

Redesigning the Wheel

Some companies are approaching robotaxi development from square one, introducing entirely new vehicles purpose-built for autonomous ride-hailing.

Cruise, the San Francisco self-driving company, unveiled the Cruise Origin robotaxi in early 2020. The all-electric, self-driving, shared vehicle was purpose-built in partnership with GM and Honda to transform what it means to travel and commute in a city.

Cruise leverages the high-performance, energy-efficient compute of NVIDIA DRIVE GPUs to process the massive amounts of data its fleet collects on San Francisco’s chaotic streets in real time. The result is a safer, cleaner and more efficient transportation alternative for city dwellers.

“In a single day we ingest and process what would be equivalent to multiple Netflix libraries,” said Mo ElShenawy, senior vice president of engineering at Cruise, during a GTC session. “With NVIDIA DRIVE GPUs, we’re able to use this data to catch our robotaxis up with human evolution.”

In December, robotaxi maker Zoox took the wraps off its rider-focused autonomous vehicle. It features four-wheel steering, allowing it to pull into tight curb spaces without parallel parking. The vehicle is also bidirectional, so there is no fixed front or back end. It can pull forward into a driveway and forward out onto the road without reversing. In the case of an unexpected road closure, the vehicle can simply flip directions or use four-wheel steering to turn around. No reversing required.

Inside the vehicle, carriage seating facilitates clear visibility of the vehicle’s surroundings as well as socializing. Each seat has the same amount of space and delivers the same experience — there’s no bad seat in the house. Carriage seating also makes room for a wider aisle, allowing passengers to easily pass by each other without getting up or contorting into awkward positions.

Zoox was able to optimize this robotaxi design with centralized, high-performance compute built on NVIDIA DRIVE.

“Working with NVIDIA, that’s allowed us to get a couple of orders more magnitude of computation done with the same amount of power, just over the last decade, and that makes a lot of difference,” said Zoox CTO Jesse Levinson during a GTC session.

Moving the World

These companies are also delivering autonomous innovation worldwide.

This past year, self-driving startup AutoX launched a commercial autonomous ride-hailing service in Shenzhen, China.

The 100-vehicle AutoX fleet uses the NVIDIA DRIVE platform for AI compute, achieving up to 2,000 trillion operations per second to power the numerous redundant and deep neural networks for full self-driving.

Based in the United Kingdom, Oxbotica has developed a “Universal Autonomy” platform that enables companies to build a variety of autonomous vehicles, including robotaxis. Its upcoming Selenium platform leverages NVIDIA DRIVE Orin to achieve level 4 self-driving capabilities.

Additionally, ride-hailing giant Didi Chuxing is developing level 4 autonomous vehicles for its mobility services using NVIDIA DRIVE and AI technology. Delivering 10 billion passenger trips per year, DiDi is working toward the safe, large-scale application of autonomous driving technology.

NVIDIA Inception member Pony.AI is collaborating with global automakers such as Toyota and Hyundai, developing a robotaxi fleet with the NVIDIA DRIVE AGX platform at its core.

With the power of NVIDIA DRIVE, these robotaxi companies can continue to roll out this transformative technology to more consumers, ushering in a new age in mobility.


NVIDIA Brings Powerful Virtualization Performance with NVIDIA A10 and A16

Enterprises rely on GPU virtualization to keep their workforces productive, wherever they work. And NVIDIA virtual GPU (vGPU) performance has become essential to powering a wide range of graphics- and compute-intensive workloads from the cloud and data center.

Now, designers, engineers and knowledge workers across industries can experience accelerated performance with the NVIDIA A10 and A16 GPUs.

Based on the NVIDIA Ampere architecture, A10 and A16 deliver more power, memory and user density to boost any workflow, from graphics and AI to VDI. And when combined with NVIDIA vGPU software, the new GPUs greatly improve user experience, performance and flexibility.

A10 Delivers Powerful, Flexible Virtual Workstation Performance

Professionals are increasingly using advanced technologies like real-time ray tracing, AI, compute, simulation and virtual reality for their work. But running these workflows, with employee mobility crucial today, requires more power and the flexibility to work from anywhere.

The NVIDIA A10 combined with NVIDIA RTX Virtual Workstation software delivers the performance to efficiently power these complex workflows while ensuring employees get the best user experience possible.

With virtual workstations powered by the A10, businesses can deliver enhanced graphics and video with AI-accelerated applications from mainstream enterprise servers.

Since the A10 can support graphics and AI workloads on virtualized infrastructure, data center administrators can flexibly provision resources and take advantage of any underutilized compute power to run AI inference or VDI workloads.

The A10 combines second-generation RT Cores and third-generation Tensor Cores to enrich graphics and video applications with powerful AI. It’s built specifically for graphics, media and game developer workstations, delivering 2.5x faster graphics performance and over 2.5x the inference performance compared to the previous generation NVIDIA T4 Tensor Core GPU.

Users can also run inference workloads on the A10 with NVIDIA AI Enterprise software and achieve bare-metal performance. The A10 includes new streaming multiprocessors and 24GB of GDDR6 memory, enabling versatile graphics, rendering, AI and compute performance. The single-wide, full-height, full-length PCIe form factor enables GPU server density, often five to six GPUs per server.

A16 Enhances VDI User Experience for Remote Workers

With the rising adoption of web conferencing and video collaboration tools, the remote work environment is here to stay. According to an IDC survey, 87 percent of U.S. enterprises expect their employees to continue working from home three or more days per week once mandatory pandemic closures are lifted.(1)

Knowledge workers use multiple devices and monitors to efficiently do their work. They also require easy access to productivity tools and applications and need to collaborate with remote teammates. Everything from email and web browsing to video conferencing and streaming can benefit from GPU acceleration — and NVIDIA A16 provides that powerful performance by delivering the next generation of VDI.

The A16 combined with NVIDIA vPC software is ideal for providing graphics-rich VDI and an enhanced user experience for knowledge workers. It offers improved user density versus the previous-generation M10, with up to 64 concurrent users per board, and reduces the total cost of ownership by up to 20 percent.

Virtual desktops powered by NVIDIA vPC software and the A16 deliver an experience indistinguishable from a physical PC, which allows remote workers to seamlessly transition between working at the office and at home.

GPU-accelerated VDI with A16 and NVIDIA vPC also provides increased frame rates and lower end-user latency, so productivity applications and tools are more responsive, and remote workers achieve the optimal user experience.

Availability

NVIDIA A10 is supported as part of NVIDIA-Certified Systems, in the on-prem data center, in the cloud and at the edge, and will be available starting this month. Learn more about the NVIDIA A10 by watching a replay of NVIDIA CEO Jensen Huang’s GTC keynote address.

NVIDIA A16 will be available later this year.

(1) IDC Press Release, Mobile Workers Will Be 60% of the Total U.S. Workforce by 2024, According to IDC, September 2020


Volvo Cars Extends Collaboration with NVIDIA to Use NVIDIA DRIVE Orin Across Fleet

Volvo Cars is extending its long-held legacy of safety far into the future.

The global automaker announced during the GTC keynote today that it will use NVIDIA DRIVE Orin to power the autonomous driving computer in its next-generation cars. The decision deepens the companies’ collaboration, extending to even more software-defined model lineups, beginning with the next-generation XC90, set to debut next year.

Volvo Cars has been using the high-performance, energy-efficient compute of NVIDIA DRIVE since 2016 and developing AI-assisted driving features for new models on NVIDIA DRIVE Xavier with software developed in-house and by Zenseact, Volvo Cars’ autonomous driving software development company.

Building safe self-driving cars is one of the most complex computing challenges today. Advanced sensors surrounding the car generate enormous amounts of data that must be processed in a fraction of a second. That’s why NVIDIA developed Orin, the industry’s most advanced, functionally safe and secure, software-defined autonomous vehicle computing platform.

Orin is software compatible with Xavier, allowing customers to leverage their existing development investments. It’s also scalable — with a range of configurations and even able to deliver unsupervised driverless operation.

Volvo Cars’ next-generation vehicle architecture will be hardware-ready for autonomous driving from production start. Its unsupervised autonomous driving feature, called Highway Pilot, will be activated when it’s verified to be safe for individual geographic locations and conditions.

Redundancy and Diversity for Any Adversity

The NVIDIA DRIVE platform is architected for redundancy and diversity to deliver the highest level of safety.

Like its predecessors, NVIDIA Orin (pictured below) maintains this safety architecture with the highest possible compute performance. The system-on-a-chip (SoC) achieves up to 254 TOPS and is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while achieving systematic safety standards such as ISO 26262 ASIL-D.

By combining the compute performance of Orin with software developed in-house and by Zenseact, and state-of-the-art sensors such as LiDAR and radar, Volvo Cars’ upcoming generations of intelligent cars will feature safe and robust AI capabilities.

Continuous Improvement

The next generation of vehicles will be state-of-the-art data centers on wheels. They’ll be richly programmable and receive software updates over the air.

These software-defined capabilities will deliver new skills and features that will delight drivers and passengers for the life of the car.

By centralizing the vehicle’s compute on NVIDIA DRIVE Orin, Volvo Cars’ next-generation cars will be safer, more personal and more sustainable, becoming better and smarter every day. Even when these cars aren’t in autonomous driving mode, they can still improve the safety of their occupants by anticipating and reacting to hazards faster than a human driver.

With an architecturally coherent and programmable fleet, Volvo Cars will extend its legacy of safety and quality far into the future, nurturing a growing installed base with its upcoming cars to offer software upgradeable applications for the entire life of the car.


NVIDIA DRIVE Sim Ecosystem Creates Diverse Proving Ground for Self-Driving Vehicles

Developing autonomous vehicles with large-scale simulation requires an ecosystem of partners and tools that’s just as wide-ranging.

NVIDIA DRIVE Sim powered by Omniverse addresses AV development challenges with a scalable, diverse and physically accurate simulation platform. With DRIVE Sim, developers can improve productivity and test coverage, accelerating their time to market while minimizing the need for real-world driving.

The variety and depth of companies that form the DRIVE Sim ecosystem are core components to what makes the platform the foremost solution for autonomous vehicle simulation.

DRIVE Sim enables high-fidelity simulation by tapping into NVIDIA’s core technologies, including NVIDIA RTX, Omniverse and AI, to deliver a powerful, cloud-based simulation platform. It can generate datasets to train the vehicle’s perception system or provide a virtual proving ground to test the vehicle’s decision-making and control logic.

The platform can be connected to the AV stack in software-in-the-loop or hardware-in-the-loop configurations to test the full driving experience.

DRIVE Sim comes with a rich library of configurable models for environments, scenarios, vehicles, sensors and traffic that work right out of the box.

It also includes dedicated application programming interfaces that enable developers to build DRIVE Sim connectors, plugins, and extensions to tailor the simulation experience to specific requirements and workflows. These APIs make it possible to leverage past investment and development by allowing integration into pre-established AV simulation tool-chains.

With a broad ecosystem of simulation partners, DRIVE Sim always features the cutting edge in virtual simulation models, rich environments, and verification and validation tools.

Ever-Changing Environments

Driving behavior varies with the environment the vehicle is driving in. From the dense traffic of urban driving to the sparse, winding roads of highways, self-driving cars must be able to handle different domains, as well as follow the unique laws of different countries.

DRIVE Sim ecosystem partners provide realistic virtual models of the three-dimensional road environment, including tools to create such environments, reference maps to create accurate road networks, and environment assets such as traffic signs and lights, other vehicles, pedestrians, bicyclists, buildings, trees, lamp posts, fire hydrants and road debris.

DRIVE Sim features realistic virtual models of complex road environments, either via out-of-the-box sample environments or via imported environments and assets from ecosystem partners.

NVIDIA is partnering with various 3D model providers to make these assets available for easy download and import via Omniverse into simulated environments and scenarios for DRIVE Sim.

Modeling Vehicle Behavior

In addition to recreating the real-world environment in the virtual world, simulation must accurately reproduce the way the vehicle itself responds to road inputs and controls, such as acceleration, steering and braking.

Vehicle dynamics models respond to vehicle control signals sent by DRIVE Sim with the correct position and orientation of the vehicle given the inputs.

These models simulate the vehicle dynamics to help validate planning and control algorithms with the highest possible fidelity. They can recreate the orientation and motion of sensors as the vehicle turns or brakes suddenly, as well as the sensor reaction to road vibration or other harsh conditions.
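As a simple illustration of what a vehicle dynamics model does, here is a minimal kinematic bicycle model in Python. It is only a sketch; production-grade models used with DRIVE Sim also capture tire forces, suspension, load transfer and sensor mounting dynamics.

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float = 0.0    # position, meters
    y: float = 0.0    # position, meters
    yaw: float = 0.0  # heading, radians
    v: float = 0.0    # speed, meters/second

def step(s: VehicleState, accel: float, steer: float,
         dt: float = 0.01, wheelbase: float = 2.9) -> VehicleState:
    """Advance the vehicle one timestep given acceleration and steering angle."""
    s.x += s.v * math.cos(s.yaw) * dt
    s.y += s.v * math.sin(s.yaw) * dt
    s.yaw += (s.v / wheelbase) * math.tan(steer) * dt
    s.v += accel * dt
    return s

state = VehicleState()
for _ in range(1000):  # 10 seconds at 100 Hz
    state = step(state, accel=1.0, steer=0.05)
print(round(state.x, 1), round(state.y, 1), round(state.yaw, 2))
```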

Vehicle models also help assess the robustness of the autonomous driving system itself. As the vehicle experiences tire and brake wear, varying cargo loads and wheel alignment, it’s critical to see how the system responds to ensure safety.

High-fidelity vehicle dynamics models are necessary to evaluate planning and control algorithms, even for low-speed parking maneuvers.

NVIDIA is collaborating with all major vehicle dynamics model providers to ensure that their models can be integrated into DRIVE Sim.

Sensing Simulation

Just as with autonomous vehicles in the physical world, virtual vehicles also need sensors to perceive their surroundings. DRIVE Sim comes with a library of standard models for camera, radar, lidar and ultrasonic sensors.

Through APIs, it’s also possible for users and ecosystem partners to integrate dedicated models for sensor simulation into DRIVE Sim.

These models typically simulate sensor components such as transmitters, receivers, imagers and lenses, as well as include signal-processing software and transcoders.

Physically accurate light simulation using RTX real-time raytracing, in combination with detailed sensor models, is used to validate perception edge cases, for example at sunrise or sunset when sunlight is directly shining into the camera.

Multiple camera, radar and lidar suppliers already provide models of their sensors for DRIVE Sim. By incorporating sensor models with this level of granularity, DRIVE Sim can accurately recreate the output of what a physical sensor in the real world would create as the vehicle drives.
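To illustrate the principle in miniature, here is a toy 2D lidar model in Python that converts scene geometry into per-beam range returns. It is purely conceptual; real sensor models used with DRIVE Sim also account for beam divergence, reflectance, noise and onboard signal processing.

```python
import math

def lidar_scan(obstacles, num_beams=360, max_range=100.0):
    """Toy 2D lidar at the origin: one range per beam, found by intersecting
    each ray with circular obstacles given as (center_x, center_y, radius)."""
    ranges = []
    for i in range(num_beams):
        theta = 2 * math.pi * i / num_beams
        dx, dy = math.cos(theta), math.sin(theta)
        best = max_range
        for cx, cy, r in obstacles:
            # Solve |t*d - c|^2 = r^2 for the nearest positive hit distance t.
            b = -(dx * cx + dy * cy)
            c = cx * cx + cy * cy - r * r
            disc = b * b - c
            if disc >= 0:
                t = -b - math.sqrt(disc)
                if 0 < t < best:
                    best = t
        ranges.append(best)
    return ranges

scan = lidar_scan([(10.0, 0.0, 1.0), (0.0, 20.0, 2.0)])
print(min(scan), max(scan))  # nearest return, and beams that hit nothing
```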

Finding the Unknowns

Vehicles driving in the real world aren’t the only ones on the road, and the same is true in simulation.

With detailed traffic models, developers can play out specific scenarios with the same variables and unpredictability of the real world. Some DRIVE Sim partners develop naturalistic traffic — or situations where the end result is unknown — to test and validate the autonomous vehicle systems.

Getting realistic (and sometimes unpredictable) events into DRIVE Sim can be achieved via scenario catalogs, traffic simulation models and scenario-based V&V methodologies from ecosystem partners.

Other partners contribute specific scenario catalogs and scenario-based verification and validation methodologies that evaluate whether an autonomous vehicle system meets specific key performance indicators.

These criteria can be regulatory requirements or industry standards. NVIDIA is participating in multiple projects, consortia and standards organizations across the globe aimed at creating standards for autonomous vehicle simulation.

Always in the Loop

Finally, the DRIVE Sim ecosystem makes it possible to use simulation to test and validate the full autonomous vehicle hardware system.

The NVIDIA DRIVE Constellation hardware-in-the-loop platform, which contains the AI compute system that runs in the vehicle, allows for bit-accurate at-scale validation of the AV stack on the target hardware.

System integration partners provide the infrastructure to connect DRIVE Constellation to the rest of the vehicle’s electronic architecture. This full integration with components like the braking, engine and cockpit control units enables developers to evaluate how the full vehicle reacts in specific self-driving scenarios.

With experienced partners contributing diverse and constantly updated models, self-driving systems can be continually developed, tested and validated using the highest quality content.


Carestream Health and Startups Develop AI-Enabled Medical Instruments with NVIDIA Clara AGX Developer Kit

Carestream Health, a leading maker of medical imaging systems, is investigating the use of NVIDIA Clara AGX — an embedded AI platform for medical devices — in the development of AI-powered features for single-frame and streaming x-ray applications.

Startups around the world, too, are adopting Clara AGX for AI solutions in medical imaging, surgery and electron microscopy. Among them is Boston-based Activ Surgical, which recently received FDA clearance for a hardware imaging module to deliver real-time AI insights to the operating room.

Now in general availability, the NVIDIA Clara AGX developer kit advances the development of software-defined instruments, such as microscopes, ultrasounds and endoscopes.

This emerging generation of medical devices is equipped with dozens of real-time AI applications providing support at every step of the clinical experience — from automating patient set-up for scans and improving image quality to analyzing data streams and delivering critical insights to care providers.

NVIDIA Clara AGX is accelerating the development of these new medical instruments by providing a universal platform that can deliver high-bandwidth signal processing, accelerated computing reconstruction, AI processing and advanced 3D visualization.

Helping Clinicians Sense in Real Time 

Medical instruments like endoscopes and surgical robots are mounted with cameras, sending a live video feed to the clinicians operating the devices. Capturing these streams and applying computer vision AI to the video content can give medical professionals tools to improve patient care and bolster the capabilities of hospitals that lack adequate medical imaging resources.
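As a generic illustration of this pattern, the sketch below grabs frames from a live feed and runs a placeholder perception function on each one. It uses OpenCV for capture; it is not Clara AGX’s actual SDK, and the detect() stub stands in for a hypothetical real model.

```python
import cv2  # pip install opencv-python

def detect(frame):
    """Placeholder for a real perception model (e.g. a TensorRT engine);
    here it just reports the frame dimensions."""
    h, w = frame.shape[:2]
    return [{"label": "frame", "width": w, "height": h}]

cap = cv2.VideoCapture(0)  # 0 = first attached camera (or endoscope feed)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for finding in detect(frame):
            print(finding)  # in practice: stream insights to the clinician UI
finally:
    cap.release()
```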

Architected with NVIDIA Jetson AGX Xavier, an NVIDIA RTX 6000 GPU and the NVIDIA Mellanox ConnectX-6 SmartNIC, the Clara AGX developer kit comes with an SDK that makes it easy for developers to get up and running with real-time system software, libraries for input/output and video pipelining, and reference applications to create AI models for ultrasound and endoscopy.

Built into the platform is the NVIDIA EGX stack for cloud-native containerized software and microservices, including NVIDIA Fleet Command to securely deploy fleets of devices in hospitals, which together transform everyday sensors into smart sensors.

These smart sensors will be software-defined, meaning they can be regularly updated with AI algorithms as they improve — an essential capability to continuously connect research breakthroughs with the day-to-day practice of medicine.

Enabling Intelligent Instruments

Carestream Health is creating smart X-ray rooms that will include AI-powered features for an enhanced imaging workflow and faster, more efficient exams. The devices include automated positioning and exposure settings for similar exam types, which helps improve the consistency of X-ray images, boosting diagnostic confidence.

And Activ Surgical, a member of the NVIDIA Inception startup accelerator program, is using NVIDIA GPU-accelerated AI to deliver real-time surgical guidance. The company’s newly FDA-cleared ActivSight module will power its ActivINSIGHT product, which will provide surgeons with previously unavailable visual overlays, including blood flow and perfusion without the need for the injection of dyes.

Carestream Health and Activ Surgical are just two of the pioneering companies worldwide using NVIDIA AGX systems to power intelligent medical devices. Others include:

  • AJA Video Systems, based in California’s Gold Country, develops professional video and audio PCIe cards for high-bandwidth streaming. When combined with the NVIDIA Clara AGX developer kit, which includes two PCIe slots and high-speed network ports, the company’s cards can be used for endoscopy and surgical visualization applications.
  • Kaliber Labs, an NVIDIA Inception member, is building real-time AI-powered software solutions to support surgeons performing arthroscopic and minimally invasive procedures. Kaliber uses NVIDIA Clara AGX to deploy its surgical software suite, which equips surgeons with a first-of-its-kind contextualized and personalized surgical toolkit, helping them perform at the highest level and reduce surgical variability.
  • KAYA Instruments, an NVIDIA Inception member, develops computer vision products that can be used with imaging devices, including electron microscopes, ultrasound machines and MRI equipment. The Israel-based company’s video acquisition cards and cameras transfer medical imaging content to NVIDIA GPUs for real-time processing and AI-accelerated analysis.
  • Subtle Medical, an NVIDIA Inception member, has deployed FDA-cleared and CE-marked deep-learning powered image enhancement software solutions for PET and MRI protocols. The company will leverage NVIDIA Clara AGX for SubtleIR, an AI-powered software under development that improves the speed and quality of interventional imaging procedures.
  • Theator, an NVIDIA Inception member, will use NVIDIA Clara AGX to develop its surgical analytics platform. The Palo Alto-based startup is developing edge GPU-accelerated AI systems to annotate operating room footage, allowing surgeons to conduct post-surgery reviews where they can compare parts of a procedure with previous identical procedures.
  • us4us, a Poland-based maker of ultrasound research systems, is using NVIDIA AGX systems for a portable ultrasound platform that will support real-time digital beamforming — a compute-intensive technique essential to capturing quality ultrasound images. The software-defined system uses embedded GPU modules so medical researchers can develop and deploy custom AI models for image processing during ultrasound scans.

Learn more about Clara AGX for AI-powered medical devices and instruments in the GTC talk, “Using Ethernet to Stream High-Throughput, Low-Latency Medical Sensor Data.” The NVIDIA GPU Technology Conference is free to register. The healthcare track includes 16 live webinars, 18 special events and over 100 recorded sessions.

Registration isn’t required to watch NVIDIA CEO Jensen Huang’s keynote address.

Subscribe to NVIDIA healthcare news, and follow NVIDIA Healthcare on Twitter.


NVIDIA Gives Arm a Second Shot of Acceleration

The Arm ecosystem got a booster shot of advances from NVIDIA at GTC today.

NVIDIA discussed work with Arm-based silicon, software and service providers, showing the potential of energy-efficient, accelerated platforms and applications across client, cloud, HPC and edge computing.

NVIDIA also announced three new processors built around Arm IP, including “Grace,” its first data center CPU which takes AI, cloud and high performance computing to new heights.

Separately, the new BlueField-3 data processing unit (DPU) sports more Arm cores, opening doors to new more powerful applications in data center networking.

And NVIDIA DRIVE Atlan becomes the company’s first processor for autonomous vehicles packing an Arm-enabled DPU, showing the potential for high-performance networks in automakers’ 2025 models.

A Vision of What’s Possible

In his GTC keynote, NVIDIA CEO Jensen Huang shared his vision for AI, HPC, data science, graphics and more. He also reaffirmed his pledge to expand the Arm ecosystem as part of the Arm acquisition deal NVIDIA announced in September 2020.

On the road to making that vision a reality, NVIDIA described a set of efforts to accelerate CPUs from four key Arm partners with NVIDIA GPUs, DPUs and software, enhancing apps from Arm developers.

GPUs Boost AWS Graviton2 Instances

In the cloud, NVIDIA announced it will provide GPU acceleration for Amazon Web Services Graviton2, the cloud-service provider’s own Arm-based processor. The accelerated Graviton2 instances will provide rich game-streaming experiences and lower the cost of powerful AI inference capabilities.

For example, game developers will use the AWS instances to stream Android games and other services that combine the efficiency of Graviton2 with NVIDIA RTX graphics technologies like ray tracing and DLSS.

In high performance computing, the new NVIDIA Arm HPC Developer Kit provides a high-performance, energy-efficient platform for supercomputers that combine Ampere Computing’s Altra — a CPU packing 80 Arm cores running up to 3.3 GHz — with the latest NVIDIA GPUs and DPUs.

The devkit runs a suite of NVIDIA compilers, libraries and tools for AI and HPC so developers can accelerate Arm-based systems for science and technical computing. Leading researchers including Oak Ridge and Los Alamos National Labs in the U.S. as well as national labs in South Korea and Taiwan will be among its first users.

Pumping Up Client, Edge Platforms

In PCs, NVIDIA is working with MediaTek, the world’s largest supplier of smartphone chips, to create a new class of notebooks powered by an Arm-based CPU alongside an NVIDIA RTX GPU.

The notebooks will use Arm cores and NVIDIA graphics to give consumers energy-efficient portables with no-compromise media capabilities based on a reference platform that supports Chromium, Linux and NVIDIA SDKs.

And in edge computing, NVIDIA is working with Marvell Semiconductor to team its OCTEON Arm-based processors with NVIDIA’s GPUs. Together they will speed up AI workloads for network optimization and security.

Top AI Systems Join Arm’s Family

Two powerful AI supercomputers will come online next year.

The Swiss National Supercomputing Centre is building a system with 20 exaflops of AI performance. And in the U.S., the Los Alamos National Laboratory will switch on a new AI supercomputer for its researchers.

Both will be powered by NVIDIA’s first data center CPU, “Grace,” an Arm-based processor that will deliver 10x the performance of today’s fastest servers on the most complex AI and HPC workloads.

Named after pioneering computer scientist Grace Hopper, this CPU has the plumbing needed for the data-driven AI era. It sports coherent connections running at 900 GB/s to NVIDIA GPUs, thanks to a fourth generation NVLink — that’s 14x the bandwidth of today’s servers.

More Arm Cores for Networking

NVIDIA Mellanox networking is more than doubling down on its investment in Arm. The BlueField-3 DPU announced today packs 400-Gbps links and 5x the Arm compute power of the current BlueField-2 DPU.

Simple math shows why bulking up on Arm makes sense: One BlueField-3 DPU delivers the equivalent data center services that could consume up to 300 x86 CPU cores.

The advance gives Arm developers an expanding set of opportunities to build fast, efficient and smart data center networks.

Today DPUs offload communications, storage, security and systems-management tasks. That’s enabling whole new classes of systems such as the cloud-native supercomputer NVIDIA announced today.

NVIDIA and Arm Behind the Wheel

Arm cores will debut in next-generation AI-enabled autonomous vehicles powered by NVIDIA DRIVE Atlan, the next leap on NVIDIA’s roadmap.

DRIVE Atlan will pack quite a punch, kicking out more than 1,000 trillion operations per second. Atlan marks the first time the DRIVE platform integrates a DPU, carrying Arm cores that will help it pack the equivalent of data center networking into autonomous vehicles.

The DPU in Atlan provides a platform for Arm developers to create innovative applications in security, storage, networking and more.

The Best Is Yet to Come 

The expanding products and partnerships mark progress on our intention announced in October to bring the Arm ecosystem four acceleration suites:

  • NVIDIA AI – the industry standard for accelerating AI training and inference
  • RAPIDS – a suite of open-source software libraries maintained by NVIDIA to run data science and analytics on GPUs
  • NVIDIA HPC SDK – compilers, libraries and software tools for high performance computing
  • NVIDIA RTX – graphics drivers that deliver ray tracing and AI capabilities

And we’re just getting started. There’s much more to come and much more to say.

Learn about new opportunities combining NVIDIA and Arm at GTC21. Registration is free.


NVIDIA DRIVE Sim Powered by Omniverse Available for Early Access This Summer

The path to autonomous vehicle deployment is accelerating through the Omniverse.

During his opening keynote at GTC, NVIDIA founder and CEO Jensen Huang announced the next generation of autonomous vehicle simulation, NVIDIA DRIVE Sim, now powered by NVIDIA Omniverse.

DRIVE Sim enables high-fidelity simulation by tapping into NVIDIA’s core technologies to deliver a powerful, cloud-based computing platform. It can generate datasets to train the vehicle’s perception system and provide a virtual proving ground to test the vehicle’s decision-making process while accounting for edge cases. The platform can be connected to the AV stack in software-in-the-loop or hardware-in-the-loop configurations to test the full driving experience.

DRIVE Sim on Omniverse is a major step forward as NVIDIA transitions the foundation for autonomous vehicle simulation from a game engine to a simulation engine.

This shift to simulation architected specifically for self-driving development has required significant effort, but brings an array of new capabilities and opportunities.

Enter the Omniverse

Creating a purpose-built autonomous vehicle simulation platform is not a simple undertaking. Game engines are powerful tools that provide incredible capabilities; however, they’re designed to build games, not scientific, physically accurate, repeatable simulations.

Designing the next generation of DRIVE Sim required a new approach. This new simulator had to be repeatable with precise timing, easily scale across GPUs and server nodes, simulate sensor feeds with physical accuracy and act as a modular and extensible platform.

NVIDIA Omniverse is the confluence of almost every core technology developed by NVIDIA. And DRIVE Sim takes advantage of the company’s expertise in graphics, high performance computing, AI and hardware design. Combining these capabilities provides a technology platform that is perfect for autonomous vehicle simulation.

Specifically, Omniverse provides a platform that was designed from the ground up to support multi-GPU computing. It incorporates a physically accurate, ray-tracing renderer based on NVIDIA RTX technology.

NVIDIA Omniverse also includes “Kit,” a scalable and extensible simulation framework for building interactive 3D applications and microservices. Using Kit over the last year, NVIDIA has implemented the DRIVE Sim core simulation engine in a way that supports repeatable simulation with precise control over all processes.
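For developers curious what a Kit building block looks like, here is a minimal extension skeleton in Python, a sketch assuming the standard omni.ext entry point; the extension and module names are hypothetical, and the code only runs inside a Kit-based application.

```python
# Minimal Omniverse Kit extension skeleton (hypothetical names; runs only
# inside a Kit-based application such as DRIVE Sim).
import omni.ext


class ScenarioLoggerExtension(omni.ext.IExt):
    """Logs extension lifecycle events; a real extension would register UI,
    subscribe to simulation events or expose a microservice here."""

    def on_startup(self, ext_id: str):
        print(f"[scenario.logger] startup: {ext_id}")

    def on_shutdown(self):
        print("[scenario.logger] shutdown")
```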

Timing and Repeatability

Autonomous vehicle simulation can only be an effective development tool if scenarios are repeatable and timing is accurate.

For instance, NVIDIA Omniverse schedules and manages all sensor and environment rendering functions to ensure repeatability without loss of accuracy. It does this across GPUs and across nodes, giving DRIVE Sim the ability to handle detailed environments and test vehicles with complex sensor suites. Additionally, it can manage such workloads at slower or faster than real time, while generating repeatable results.
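The recipe behind that kind of repeatability can be sketched in a few lines of Python: seed every source of randomness and advance the simulation on a fixed timestep, so a scenario replays identically whether it runs faster or slower than real time. This illustrates the principle only, not Omniverse’s actual scheduler.

```python
import random

def run_scenario(seed: int, steps: int = 1000, dt: float = 0.01) -> float:
    """Fixed-timestep loop with a seeded RNG: the same seed always produces
    exactly the same trajectory, independent of wall-clock speed."""
    rng = random.Random(seed)
    position = 0.0
    for _ in range(steps):
        disturbance = rng.gauss(0.0, 0.1)  # e.g. simulated sensor noise
        position += (1.0 + disturbance) * dt
    return position

assert run_scenario(7) == run_scenario(7)  # bit-identical replays
print(run_scenario(7))
```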

Omniverse was designed to scale to many GPUs, providing DRIVE Sim with real-time rendering capabilities and repeatable results for complex sensor sets.

Not only does the platform enable this flexibility and accuracy, it does so in a way that’s scalable, so developers can run fleets of vehicles with various sensor suites at large scale and at the highest levels of fidelity.

Physically Accurate Sensors

In addition to accurately recreating real-world driving conditions, the simulation environment must also render vehicle sensor data in the exact same way cameras, radars and lidars take in data from the physical world.

With NVIDIA RTX technology, DRIVE Sim is able to render physically accurate sensor data in real time. Ray tracing provides realistic lighting by simulating the physical properties of visible and non-visible waveforms. And the NVIDIA Omniverse RTX renderer coupled with NVIDIA RTX GPUs enables ray tracing at real-time frame rates.

This scene of vehicles in a tunnel uses indirect lighting, which is challenging to render accurately in real time, but is enabled in DRIVE Sim by the Omniverse RTX renderer.

The capability to simulate light in real time has significant benefits for autonomous vehicle simulation. It makes it possible to recreate lighting environments that can be virtually impossible to capture using rasterization — from the reflections off a tanker truck to the shadows inside a dim tunnel.

Generating physically accurate sensor data is especially powerful for building datasets to train AI-based perception networks, outputting the ground-truth data with the virtual sensor data. DRIVE Sim includes tools for advanced dataset creation including a powerful Python scripting interface and domain randomization tools.

Using this synthetic data in the DNN training process saves the cost of collecting and labeling real-world data, and speeds up iteration for streamlined autonomous vehicle deployment.
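As a small, hypothetical illustration of what such a dataset pipeline emits, the sketch below writes paired frame references and ground-truth labels as JSON lines. In a real DRIVE Sim workflow the renderer supplies the images and the simulator supplies exact labels; the record format here is invented for the example.

```python
import json
import random

def make_record(frame_id: int, rng: random.Random) -> dict:
    """Emit one synthetic training record: a frame reference plus
    ground-truth bounding boxes (randomized here for illustration)."""
    boxes = [
        {
            "class": rng.choice(["car", "pedestrian", "cyclist"]),
            "bbox_xywh": [rng.randint(0, 1600), rng.randint(0, 800),
                          rng.randint(40, 300), rng.randint(40, 200)],
        }
        for _ in range(rng.randint(1, 5))
    ]
    return {"image": f"frame_{frame_id:06d}.png", "labels": boxes}

rng = random.Random(0)
with open("synthetic_labels.jsonl", "w") as f:
    for i in range(100):
        f.write(json.dumps(make_record(i, rng)) + "\n")
```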

DRIVE Sim provides tools to generate ground-truth data alongside simulation data, enabling rapid generation of complex datasets to train deep neural networks (DNNs) for autonomous vehicle perception.

Modular and Extensible

As a modular, open and extensible platform, DRIVE Sim provides developers the ultimate flexibility and efficiency in simulation testing.

DRIVE Sim on Omniverse allows different components of the simulator to be run to support different use cases. One group of engineers can run just the perception stack in simulation. Another can focus on the planning and control stack by simulating scenarios based on ground-truth object data (thus bypassing the perception stack).

This modularity significantly cuts down on development time by allowing developers to focus on the task at hand, while ensuring that the entire team is using the same tools, scenarios, models and assets in simulation for consistent results.

Using the NVIDIA Omniverse Kit SDK, DRIVE Sim allows developers to build custom models, 3D content and validation tools or to interface with other simulations. Users can create their own plugins or choose from a rich library of vehicle, sensor and traffic plugins provided by DRIVE Sim ecosystem partners. This flexibility enables users to customize DRIVE Sim for their unique use case and tailor the simulation experience to their development and validation needs.

DRIVE Sim on Omniverse will be available to developers via an early access program this summer. Learn more about DRIVE Sim and accelerate the development of safer, more efficient transportation today.
