NVIDIA Hopper, Ampere GPUs Sweep Benchmarks in AI Training

Two months after their debut sweeping MLPerf inference benchmarks, NVIDIA H100 Tensor Core GPUs set world records across enterprise AI workloads in the industry group’s latest tests of AI training.

Together, the results show H100 is the best choice for users who demand utmost performance when creating and deploying advanced AI models.

MLPerf is the industry standard for measuring AI performance. It’s backed by a broad group that includes Amazon, Arm, Baidu, Google, Harvard University, Intel, Meta, Microsoft, Stanford University and the University of Toronto.

In a related MLPerf benchmark also released today, NVIDIA A100 Tensor Core GPUs raised the bar they set last year in high performance computing (HPC).

Hopper sweeps MLPerf for AI Training
NVIDIA H100 GPUs were up to 6.7x faster than A100 GPUs when they were first submitted for MLPerf Training.

H100 GPUs (aka Hopper) raised the bar in per-accelerator performance in MLPerf Training, delivering up to 6.7x more performance than previous-generation GPUs in their first MLPerf Training submission. By the same comparison, today's A100 GPUs pack 2.5x more muscle than they did at their debut, thanks to advances in software.
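Those two figures also imply how H100 stacks up against today's software-tuned A100s. A back-of-the-envelope sketch — the 6.7x and 2.5x numbers are from this article, while the derived ratio is an illustration, not an official MLPerf result:

```python
# Per-accelerator speedup figures quoted in the article (MLPerf Training)
h100_vs_debut_a100 = 6.7   # H100 vs. A100's first-submission results
a100_software_gain = 2.5   # A100 today vs. A100 at debut, software alone

# Implied H100 advantage over today's software-optimized A100
h100_vs_current_a100 = h100_vs_debut_a100 / a100_software_gain
print(f"H100 vs. current A100: ~{h100_vs_current_a100:.1f}x")  # → ~2.7x
```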

Due in part to its Transformer Engine, Hopper excelled in training the popular BERT model for natural language processing. It’s among the largest and most performance-hungry of the MLPerf AI models.

MLPerf gives users the confidence to make informed buying decisions because the benchmarks cover today’s most popular AI workloads — computer vision, natural language processing, recommendation systems, reinforcement learning and more. The tests are peer reviewed, so users can rely on their results.

A100 GPUs Hit New Peak in HPC

In the separate suite of MLPerf HPC benchmarks, A100 GPUs swept all tests of training AI models in demanding scientific workloads run on supercomputers. The results show the NVIDIA AI platform’s ability to scale to the world’s toughest technical challenges.

For example, A100 GPUs trained AI models in the CosmoFlow test for astrophysics 9x faster than the best results two years ago in the first round of MLPerf HPC. In that same workload, the A100 also delivered up to a whopping 66x more throughput per chip than an alternative offering.

The HPC benchmarks train models for work in astrophysics, weather forecasting and molecular dynamics. They are among many technical fields, like drug discovery, adopting AI to advance science.

A100 leads in MLPerf HPC
In tests around the globe, A100 GPUs led in both speed and throughput of training.

Supercomputer centers in Asia, Europe and the U.S. participated in the latest round of the MLPerf HPC tests. In its debut on the DeepCAM benchmarks, Dell Technologies showed strong results using NVIDIA A100 GPUs.

An Unparalleled Ecosystem

In the enterprise AI training benchmarks, a total of 11 companies, including the Microsoft Azure cloud service, made submissions using NVIDIA A100, A30 and A40 GPUs. System makers including ASUS, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro used a total of nine NVIDIA-Certified Systems for their submissions.

In the latest round, at least three companies joined NVIDIA in submitting results on all eight MLPerf training workloads. That versatility is important because real-world applications often require a suite of diverse AI models.

NVIDIA partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI platforms and vendors.

Under the Hood

The NVIDIA AI platform provides a full stack from chips to systems, software and services. That enables continuous performance improvements over time.

For example, submissions in the latest HPC tests applied a suite of software optimizations and techniques described in a technical article. Together they slashed runtime on one benchmark by nearly 5x, from 101 minutes to just 22.
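As a quick sanity check on those runtimes — the minute figures are from the article, the derived numbers are just arithmetic:

```python
# Benchmark runtime before and after the software optimizations
before_min, after_min = 101, 22

speedup = before_min / after_min                      # ≈ 4.6x, i.e. nearly 5x
time_saved_pct = 100 * (1 - after_min / before_min)   # ≈ 78% less wall-clock time
print(f"speedup: {speedup:.1f}x, time saved: {time_saved_pct:.0f}%")
```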

A second article describes how NVIDIA optimized its platform for the enterprise AI benchmarks. For example, we used NVIDIA DALI to efficiently load and pre-process data for a computer vision benchmark.

All the software used in the tests is available from the MLPerf repository, so anyone can get these world-class results. NVIDIA continuously folds these optimizations into containers available on NGC, a software hub for GPU applications.

The post NVIDIA Hopper, Ampere GPUs Sweep Benchmarks in AI Training appeared first on NVIDIA Blog.

New Volvo EX90 SUV Heralds AI Era for Swedish Automaker, Built on NVIDIA DRIVE

It’s a new age for safety.

Volvo Cars unveiled the Volvo EX90 SUV today in Stockholm, marking the beginning of a new era of electrification, technology and safety for the automaker. The flagship vehicle is redesigned from tip to tail — with a new powertrain, branding and software-defined AI compute — powered by the centralized NVIDIA DRIVE Orin platform.

The Volvo EX90 silhouette is in line with Volvo Cars’ design principle of form following function — and looks good at the same time.

Under the hood, it’s filled with cutting-edge technology for new advances in electrification, connectivity, core computing, safety and infotainment. The EX90 is the first Volvo car that is hardware-ready to deliver unsupervised autonomous driving.

These features come together to deliver an SUV that cements Volvo Cars in the next generation of software-defined vehicles.

“We used technology to reimagine the entire car,” said Volvo Cars CEO Jim Rowan. “The Volvo EX90 is the safest that Volvo has ever produced.”

Computer on Wheels

The Volvo EX90 looks smart and has the brains to back it up.

Volvo Cars’ proprietary software runs on NVIDIA DRIVE Orin to operate most of the core functions inside the car, including safety, infotainment and battery management. This intelligent architecture is designed to deliver a highly responsive and enjoyable experience for every passenger in the car.

The DRIVE Orin system-on-a-chip delivers 254 trillion operations per second — ample compute headroom for a software-defined architecture. It’s designed to handle the large number of applications and deep neural networks needed to achieve systematic safety standards such as ISO 26262 ASIL-D.

The Volvo EX90 isn’t just a new car. It’s a highly advanced computer on wheels, designed to improve over time as Volvo Cars adds more software features.

Just Getting Started

The Volvo EX90 is just the beginning of Volvo Cars’ plans for the software-defined future.

The automaker plans to launch a new EV every year through 2025, with the end goal of having a purely electric, software-defined lineup by 2030.

The new flagship SUV is available for preorder in select markets, launching the next phase in Volvo Cars’ leadership in premium design and safety.

The post New Volvo EX90 SUV Heralds AI Era for Swedish Automaker, Built on NVIDIA DRIVE appeared first on NVIDIA Blog.

HORN Free! Roaming Rhinos Could Be Guarded by AI Drones

Call it the ultimate example of a job that’s sometimes best done remotely. Wildlife researchers say rhinos are magnificent beasts, but they like to be left alone, especially when they’re with their young.

In the latest example of how researchers are turning to new technologies to track animals less invasively, a team has proposed harnessing high-flying, AI-equipped drones to follow the endangered black rhino through the wilds of Namibia.

In a paper published earlier this year in the journal PeerJ, the researchers show the potential of drone-based AI to identify animals in even the remotest areas and provide real-time updates on their status from the air.

While drones — and technology of just about every kind — have been harnessed to track African wildlife, the proposal promises to help gamekeepers move faster to protect rhinos and other megafauna from poachers.

AI Podcast host Noah Kravitz spoke to two of the authors of the paper.

Zoey Jewell is co-founder and president of WildTrack.org, a global network of biologists and conservationists dedicated to non-invasive wildlife monitoring techniques. And Alice Hua is a recent graduate of the School of Information at UC Berkeley in California, and an ML platform engineer at CrowdStrike.

And for more, read the full paper at https://peerj.com/articles/13779/.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species With NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

The post HORN Free! Roaming Rhinos Could Be Guarded by AI Drones appeared first on NVIDIA Blog.

3D Illustrator Juliestrator Makes Marvelous Mushroom Magic This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The warm, friendly animation Mushroom Spirit is featured In the NVIDIA Studio this week, modeled by talented 3D illustrator Julie Greenberg, aka Juliestrator.

In addition, NVIDIA Omniverse, an open platform for virtual collaboration and real-time photorealistic simulation, just dropped a beta release for 3D artists.

And with the approaching winter season comes the next NVIDIA Studio community challenge. Join the #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on the NVIDIA Studio social media channels. Be sure to tag #WinterArtChallenge to enter.

New in NVIDIA Omniverse

With new support for GeForce RTX 40 Series GPUs, NVIDIA Omniverse is faster, more accessible and more flexible than ever for collaborative 3D workflows across apps.

An example of what’s possible when talented 3D artists collaborate in Omniverse: a scene from the ‘NVIDIA Racer RTX’ demo.

NVIDIA DLSS 3, powered by the GeForce RTX 40 Series, is now available in Omniverse, enabling complete real-time ray-tracing workflows within the platform. The NVIDIA Ada Lovelace GPU architecture delivers a generational leap in performance and power that enables users to work in large-scale, virtual worlds with true interactivity — so creators can navigate viewports at full fidelity in real time.

The Omniverse Create app has new large world-authoring and animation improvements.

In Omniverse Machinima, creators gain AI superpowers with Audio2Gesture — an AI-powered tool that creates lifelike body movements based on an audio file.

PhysX 5, the technology behind Omniverse’s hyperrealistic physics simulation, features built-in audio for collisions, as well as improved cloth and deformable body simulations. Newly available as open source software, PhysX 5 enables artists and developers to modify, build and distribute custom physics engines.

The Omniverse Connect library has received updates to Omniverse Connectors, including Autodesk 3ds Max, Autodesk Maya, Autodesk Revit, Epic Games Unreal Engine, McNeel Rhino, Trimble SketchUp and Graphisoft Archicad. Connectors for Autodesk Alias and PTC Creo are also now available.

The updated Reallusion iClone 8.1.0 live-sync Connector allows for seamless character interactions between iClone and Omniverse apps. And OTOY’s OctaneRender Hydra render delegate enables Omniverse users to access OctaneRender directly in Omniverse apps.

Learn more about the Omniverse release and tune into the Twitch livestream detailing announcements on Wednesday, Nov. 9. Download Omniverse, which is free for NVIDIA RTX and GeForce RTX GPU owners.

Featuring a Fun-gi 

Juliestrator’s artistic inspiration comes from the examination of the different worlds that people create. “No matter if it’s the latest Netflix show or an artwork I see on Twitter, I love when a piece of art leaves space for my own imagination to fill in the gaps and come up with my own stories,” she said.

Mushroom Spirit was conceived as a sketch for last year’s Inktober challenge, which had the prompt of “spirit.” Rather than creating a ghost like many others, Juliestrator took a different approach. Mushroom Spirit was born as a cute nature spirit lurking in a forest, like the Kodama creatures from the Princess Mononoke film from which she drew inspiration.

Juliestrator gathered reference material using Pinterest. She then used PureRef’s overlay feature to help position reference imagery while modeling in Blender software. Though it’s rare for Juliestrator to sketch in 2D for 3D projects, she said Mushroom Spirit called for a more personal touch, so she generated a quick scribble in Procreate.

The origins of ‘Mushroom Spirit.’

Using Blender, she then entered the block-out phase — creating a rough-draft level built using simple 3D shapes, without details or polished art assets. This helped to keep base meshes clean, eliminating the need to create new meshes in the next round, which required only minor edits.

Getting the basic shapes down by blocking out ‘Mushroom Spirit’ in Blender.

At this point, many artists would typically start to model detailed scene elements, but Juliestrator prioritizes coloring. “I’ve noticed how much color influences the compositions and mood of the artwork, so I try to make this important decision as early as possible,” the artist said.

Color modifications in Adobe Substance 3D Painter.

She used Adobe Substance 3D Painter software to apply a myriad of colors and experimental textures to her models. On her NVIDIA Studio laptop, the Razer Blade 15 Studio equipped with an NVIDIA Quadro RTX 5000 GPU, Juliestrator used RTX-accelerated light and ambient occlusion to bake assets in mere seconds.

She then refined the existing models in Blender. “This is where powerful hardware helps a lot,” she said. “The NVIDIA OptiX AI-accelerated denoiser helps me preview any changes I make in Blender almost instantly, which lets me test more ideas at the same time and as a result get better finished renders.”

Tinkering and tweaking color palettes in Blender.

Though she enjoys the modeling stage, Juliestrator said that the desire to refine an endless number of details can be overwhelming. As such, she deploys an “80/20 rule,” dedicating no more than 20% of the entire project’s timeline to detailed modeling. “That’s the magic of the 80/20 rule: tackle the correct 20%, and the other 80% often falls into place,” she said.

Finally, Juliestrator adjusted the composition in 3D — manipulating the light objects, rotating the camera and adding animations. She completed all of this quickly with an assist from RTX-accelerated OptiX ray tracing in the Blender viewport, using Blender Cycles for the fastest frame renders.

Animations in Blender during the final stage.

Blender is Juliestrator’s preferred 3D modeling app, she said, due to its ease of use and powerful AI features, as well as its accessibility. “I truly appreciate the efforts of the Blender Foundation and all of its partners in keeping Blender free and available to people from all over the world, to enhance anyone’s creativity,” she said.

Juliestrator chose to use an NVIDIA Studio laptop, a “porta-bella” system for efficiency and convenience, she said. “I needed a powerful computer that would let me use both Blender and a game engine like Unity or Unreal Engine 5, while staying mobile and on the go,” the artist added.

Illustrator Julie Greenberg, aka Juliestrator.

Check out Juliestrator’s portfolio and social media links.

For more direction and inspiration for building 3D worlds, check out Juliestrator’s five-part tutorial, Modeling 3D New York Diorama, which covers the critical stages in 3D workflows: sketching composition, modeling details and more. The tutorials can be found on the NVIDIA Studio YouTube channel, which posts new videos every week.

And don’t forget to enter the NVIDIA Studio #WinterArtChallenge on Instagram, Twitter or Facebook.

The post 3D Illustrator Juliestrator Makes Marvelous Mushroom Magic This Week ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.

Tiny Computer, Huge Learnings: Students at SMU Build Baby Supercomputer With NVIDIA Jetson Edge AI Platform

“DIY” and “supercomputer” aren’t words typically used together.

But a do-it-yourself supercomputer is exactly what students built at Southern Methodist University, in Dallas, using 16 NVIDIA Jetson Nano modules, four power supplies, more than 60 handmade wires, a network switch and some cooling fans.

The project, dubbed SMU’s “baby supercomputer,” aims to help educate those who may never get hands-on with a normal-sized supercomputer, which can sometimes fill a warehouse, or be locked in a data center or in the cloud.

Instead, this mini supercomputer fits comfortably on a desk, allowing students to tinker with it and learn about what makes up a cluster. A touch screen displays a dashboard with the status of all of its nodes.

“We started this project to demonstrate the nuts and bolts of what goes into a computer cluster,” said Eric Godat, team lead for research and data science in the internal IT organization at SMU.

Next week, the baby supercomputer will be on display at SC22, a supercomputing conference taking place in Dallas, just down the highway from SMU.

The SMU team will host a booth to talk to researchers, vendors and students about the university’s high-performance computing programs and the recent deployment of its NVIDIA DGX SuperPOD for AI-accelerated research.

Plus, in collaboration with Mark III Systems — a member of the NVIDIA Partner Network — the SMU Office of Information Technology will provide conference attendees with a tour of the campus data center to showcase the DGX SuperPOD in action. Learn details at SMU’s booth #3834.

“We’re bringing the baby supercomputer to the conference to get people to stop by and ask, ‘Oh, what’s that?’” said Godat, who served as a mentor for Conner Ozenne, a senior computer science major at SMU and one of the brains behind the cluster.

“I started studying computer science in high school because programming fulfilled the foreign language requirement,” said Ozenne, who now aims to integrate AI and machine learning with web design for his career. “Doing those first projects as a high school freshman, I immediately knew this is what I wanted to do for the rest of my life.”

Ozenne is a STAR at SMU — a Student Technology Associate in Residence. He first pitched the design and budget for the baby supercomputer to Godat’s team two summers ago. With a grant of a couple thousand dollars and a whole lot of enthusiasm, he got to work.

Birth of a Baby Supercomputer

Ozenne, in collaboration with another student, built the baby supercomputer from scratch.

“They had to learn how to strip wires and not shock themselves — they put together everything from the power supplies to the networking all by themselves,” Godat said. With a smile, he added, “We only started one small fire.”

The first iteration was a mess of wires on a table connecting the NVIDIA Jetson Nano developer kits, with cardboard boxes as heatsinks, Ozenne said.

“We chose to use NVIDIA Jetson modules because no other small compute devices have onboard GPUs, which would let us tackle more AI and machine learning problems,” he added.

Soon Ozenne gave the baby supercomputer case upgrades: from cardboard to foam to acrylic plates, which he laser cut from 3D vector files in SMU’s innovation gym, a makerspace for students.

“It was my first time doing all of this, and it was a great learning experience, with lots of fun nights in the lab,” Ozenne said.

A Work in Progress

In just four months, the project went from nothing to something that resembled a supercomputer, according to Ozenne. But the project is ongoing.

The team is now developing the mini cluster’s software stack, with the help of the NVIDIA JetPack software development kit, and prepping it to accomplish some small-scale machine learning tasks. Plus, the baby supercomputer could level up with the recently announced NVIDIA Jetson Orin Nano modules.

“Our NVIDIA DGX SuperPOD just opened up on campus, so we don’t really need this baby supercomputer to be an actual compute environment,” Godat said. “But the mini cluster is an effective teaching tool for how all this stuff really works — it lets students experiment with stripping the wires, managing a parallel file system, reimaging cards and deploying cluster software.”

SMU’s NVIDIA DGX SuperPOD, which includes 160 NVIDIA A100 Tensor Core GPUs, is in an alpha-rollout phase for faculty, who are using it to train AI models for molecular dynamics, computational chemistry, astrophysics, quantum mechanics and a slew of other research topics.

Godat collaborates with the NVIDIA DGX team to flexibly configure the DGX SuperPOD to support tens of different AI, machine learning, data processing and HPC projects.

“I love it, because every day is different — I could be working on an AI-related project in the school of the arts, and the next day I’m in the law school, and the next I’m in the particle physics department,” said Godat, who himself has a Ph.D. in theoretical particle physics from SMU.

“There are applications for AI everywhere,” Ozenne agreed.

Learn more from Godat and other experts on designing an AI Center of Excellence in this NVIDIA GTC session available on demand.

Join NVIDIA at SC22 to explore partner booths on the show floor and engage with virtual content all week — including a special address, demos and other sessions.

The post Tiny Computer, Huge Learnings: Students at SMU Build Baby Supercomputer With NVIDIA Jetson Edge AI Platform appeared first on NVIDIA Blog.

Meet the Omnivore: Indie Showrunner Transforms Napkin Doodles Into Animated Shorts With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Rafi Nizam

3D artist Rafi Nizam has worn many hats since starting his career as a web designer more than two decades ago, back when “designing for the web was still wild,” as he put it.

He’s now becoming a leader in the next wave of creation — using extended reality and virtual production — with the help of NVIDIA Omniverse, a platform for building and connecting custom 3D pipelines.

The London-based showrunner, creative consultant and entertainment executive previously worked at advertising agencies and led creative teams at Sony Pictures, BBC and NBCUniversal.

In addition to being an award-winning independent animator, director, character designer and storyteller who serves as chief creative officer at Masterpiece Studio, he’s head of story at game developer Opis Group, and showrunner at Lunar-X, a next-gen entertainment company.

Plus, in recent years, he’s taken on what he considers his most important role of all — being a father. And his art is now often inspired by family.

“Being present in the moment with my children and observing the world without preconceptions often sparks ideas for me,” Nizam said.

His animated shorts have so far focused on themes of self care and finding stillness amidst chaos. He’s at work on a new computer-graphics-animated series, ArtSquad, in which fun-loving, vibrant 3D characters form a band, playing instruments made of classroom objects and solving problems through the power of art.

“The myriad of 3D apps in my animation pipeline can sync and come together in Omniverse using the Universal Scene Description framework,” he said. “This interoperability allows me to be 10x more productive when visualizing my show concepts — and I’ve cut my outsourcing costs by 50%, as Omniverse enables me to render, lookdev, lay out scenes and manipulate cameras by myself.”

From Concept to Creation

Nizam said he often starts his projects with “good ol’ pencil and paper on a Post-it note or napkin, whenever inspiration strikes.”

He then takes his ideas to a drawing desk, where he creates a simple sketch before homing in on pre-production using digital content-creation apps like Adobe Illustrator, Adobe Photoshop and Procreate.

Nizam next creates 3D production assets from his 2D sketches, manipulating them in virtual reality using Adobe Substance 3D Modeler software.

“Things start to move pretty rapidly from here,” he said, “because VR is such an intuitive way to make 3D assets. Plus, rigging and texturing in the Masterpiece Studio creative suite and Adobe Substance 3D can be near automatic.”

The artist uses the Omniverse Create XR spatial computing app to lay out his scenes in VR. He blocks out character actions, designs sets and finalizes textures using Unreal Engine 5, Autodesk Maya and Blender software.

Performance capture through Perception Neuron Studio quickly gets Nizam close to final animation. And with the easily extensible USD framework, Nizam brings his 3D assets into the Omniverse Create app for rapid look development. Here he enhances character animation with built-in hyperrealistic physics and renders final shots in real time.

“Omniverse offers me an easy entry point to USD-based workflows, live collaboration across disciplines, rapid visualization, real-time rendering, an accessible physics engine and the easy modification of preset simulations,” Nizam said. “I can’t wait to get back in and try out more ideas.”

At home, Nizam uses an NVIDIA Studio workstation powered by an NVIDIA RTX A6000 GPU. To create on the go, the artist turns to his NVIDIA Studio laptop from ASUS, equipped with a GeForce RTX 3060 GPU.

In addition, his entire workflow is accelerated by NVIDIA Studio, a platform of NVIDIA RTX and AI-accelerated creator apps, Studio Drivers and a suite of exclusive creative tools.

When not creating transmedia projects and franchises for his clients, Nizam can be found mentoring young creators for Sony Talent League, playing make believe with his children or chilling with his two cats, Hamlet and Omelette.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

The post Meet the Omnivore: Indie Showrunner Transforms Napkin Doodles Into Animated Shorts With NVIDIA Omniverse appeared first on NVIDIA Blog.

Take the Green Train: NVIDIA BlueField DPUs Drive Data Center Efficiency

The numbers are in, and they paint a picture of data centers going a deeper shade of green, thanks to energy-efficient networks accelerated with data processing units (DPUs).

A suite of tests run with help from Ericsson, Red Hat and VMware shows power reductions of up to 24% on servers using NVIDIA BlueField-2 DPUs. In one case, they delivered 54x the performance of CPUs.

The work, described in a recent whitepaper, offloaded core networking jobs from power-hungry host processors to DPUs designed to run them more efficiently.

Accelerated computing with DPUs for networking, security and storage jobs is one of the next big steps for making data centers more power efficient. It’s the latest of a handful of optimizations, described in the whitepaper, for data centers moving into the era of green computing.

DPUs Tested on VMware vSphere

Seeing the trend toward energy-efficient networks, VMware enabled DPUs to run its virtualization software, used by thousands of companies worldwide. NVIDIA has run several tests with VMware since its vSphere 8 software release this fall.

For example, on VMware vSphere Distributed Services Engine — software that offloads and accelerates networking and security functions using DPUs — BlueField-2 delivered higher performance while freeing up 20% of the CPU resources that would be required without DPUs.

That means users can deploy fewer servers to run the same workload, or run more applications on the same servers.

Power Costs Cut Nearly $2 Million

Few data centers face a more demanding job than those run by telecoms providers. Their networks shuttle every bit of data that smartphone users generate or request between their cellular networks and the internet.

Researchers at Ericsson tested whether operators could reduce their power consumption on this massive workload using SmartNICs, the network interface cards that handle DPU functions. Their test let CPUs slow down or sleep while an NVIDIA ConnectX SmartNIC handled the networking tasks.

The results, detailed in a recent article, were stunning.

Energy consumption of server CPUs fell 24%, from 190 to 145 watts on a fully loaded network. This single DPU application could cut power costs by nearly $2 million over three years for a large data center.
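A rough model shows how a 45-watt per-server saving can compound at fleet scale. The 190 W and 145 W figures are from the article; the server count, electricity price and PUE below are hypothetical assumptions for illustration only:

```python
# Per-CPU power figures quoted in the article
busy_w, dpu_w = 190, 145
reduction = (busy_w - dpu_w) / busy_w     # ≈ 0.24 — the ~24% drop quoted

# Hypothetical large-data-center assumptions (not from the article):
servers = 10_000          # fleet size
price_per_kwh = 0.10      # USD, illustrative electricity rate
pue = 1.5                 # power usage effectiveness (cooling overhead)
hours = 3 * 365 * 24      # three years of continuous operation

saved_kwh_per_server = (busy_w - dpu_w) / 1000 * hours * pue
total_savings = saved_kwh_per_server * price_per_kwh * servers
print(f"reduction: {reduction:.0%}, 3-year savings: ${total_savings:,.0f}")
```

Under these assumptions the model lands at roughly $1.8 million, in the same ballpark as the article's "nearly $2 million" figure; different rates, fleet sizes or PUE values shift the total accordingly.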

Ericsson tests with BlueField DPUs
Ericsson ran the user-plane function for 5G networks on DPUs in three scenarios.

In the article, Ericsson’s CTO, Erik Ekudden, underscored the importance of the work.

“There’s a growing sense of urgency among communication service providers to find and implement innovative solutions that reduce network energy consumption,” he wrote. And the DPU techniques “save energy across a wide range of traffic conditions.”

70% Less Overhead, 54x More Performance

Results were even more dramatic for tests on Red Hat OpenShift, used by half of all Fortune 500 banks, airlines and telcos to manage software containers.

In the tests, BlueField-2 DPUs handled virtualization, encryption and networking jobs needed to manage these portable packages of applications and code.

The DPUs slashed networking demands on CPUs by 70%, freeing them up to run other applications. What’s more, they accelerated networking jobs by a whopping 54x.

A technical blog provides more detail on the tests.

Speeding the Way to Zero Trust

Across every industry, businesses are embracing a philosophy of zero trust to improve network security. So, NVIDIA tested IPsec, one of the most popular data center encryption protocols, on BlueField DPUs.

The test showed data centers could improve performance and cut power consumption 21% for servers and 34% for clients on networks running IPsec on DPUs. For large data centers, that could translate to nearly $9 million in savings on electric bills over three years.

NVIDIA and its partners continue to put DPUs to the test in an expanding portfolio of use cases, but the big picture is clear.

“In a world facing rising energy costs and rising demand for green IT infrastructure, the use of DPUs will become increasingly popular,” the whitepaper concludes.

It’s good to know the numbers, but seeing is believing. So apply to run your own test of DPUs on VMware’s vSphere.

The post Take the Green Train: NVIDIA BlueField DPUs Drive Data Center Efficiency appeared first on NVIDIA Blog.

Unearthing Data: Vision AI Startup Digs Into Digital Twins for Mining and Construction

Skycatch, a San Francisco-based startup, has been helping companies mine both data and minerals for nearly a decade.

The software-maker is now digging into the creation of digital twins, with an initial focus on the mining and construction industry, using the NVIDIA Omniverse platform for connecting and building custom 3D pipelines.

SkyVerse, part of Skycatch’s vision AI platform, combines computer vision software with custom Omniverse extensions, letting users enrich and animate virtual worlds of mines and other sites with near-real-time geospatial data.

“With Omniverse, we can turn massive amounts of non-visual data into dynamic visual information that’s easy to contextualize and consume,” said Christian Sanz, founder and CEO of Skycatch. “We can truly recreate the physical world.”

SkyVerse can help industrial sites simulate variables such as weather, broken machines and more up to five years into the future — while learning from happenings up to five years in the past, Sanz said.

The platform automates the entire visualization pipeline for mining and construction environments.

First, it processes data from drones, lidar and other sensors across the environment, whether at the edge using the NVIDIA Jetson platform or in the cloud.

It then creates 3D meshes from 2D images, using neural networks built from NVIDIA’s pretrained models to remove unneeded objects like dump trucks and other equipment from the visualizations.

Next, SkyVerse stitches this into a single 3D model that’s converted to the Universal Scene Description (USD) framework. The master model is then brought into Omniverse Enterprise for the creation of a digital twin that’s live-synced with real-world telemetry data.
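USD is an open, text-serializable scene format, which makes the hand-off between pipeline stages easy to illustrate. The sketch below is a simplified illustration, not Skycatch's actual code: it writes a triangulated terrain patch as a minimal ASCII USDA layer, whereas a production pipeline would typically use the OpenUSD (pxr) Python API:

```python
def terrain_to_usda(name, points, face_vertex_counts, face_vertex_indices):
    """Serialize a triangulated terrain mesh as a minimal USDA (text USD) layer.

    A real pipeline would use the OpenUSD (pxr) API; emitting the ASCII .usda
    form by hand keeps this sketch dependency-free.
    """
    pts = ", ".join(f"({x}, {y}, {z})" for x, y, z in points)
    return (
        "#usda 1.0\n"
        f'def Mesh "{name}"\n'
        "{\n"
        f"    int[] faceVertexCounts = {list(face_vertex_counts)}\n"
        f"    int[] faceVertexIndices = {list(face_vertex_indices)}\n"
        f"    point3f[] points = [{pts}]\n"
        "}\n"
    )

# A single triangle standing in for a photogrammetry mesh.
layer = terrain_to_usda("Terrain", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [3], [0, 1, 2])
```

Any USD-aware tool, including Omniverse, can open a layer like this, which is what makes the format a natural interchange point between photogrammetry output and the digital twin.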

“The simulation of machines in the environment, different weather conditions, traffic jams — no other platform has enabled this, but all of it is possible in Omniverse with hyperreal physics and object mass, which is really groundbreaking,” Sanz said.

Skycatch is a Premier partner in NVIDIA Inception, a free, global program that nurtures startups revolutionizing industries with cutting-edge technologies. Premier partners receive additional go-to-market support, exposure to venture capital firms and technical expertise to help them scale faster.

Processing and Visualizing Data

Companies have deployed Skycatch’s fully automated technologies to gather insights from aerial data across tens of thousands of sites at several top mining companies.

The Skycatch team first determines optimal positioning of the data-collection sensors across mine vehicles using the NVIDIA Isaac Sim platform, a robotics simulation and synthetic data generation (SDG) tool for developing, testing and training AI-based robots.

“Isaac Sim has saved us a year’s worth of testing time — going into the field, placing a sensor, testing how it functions and repeating the process,” Sanz said.

The team also plans to integrate the Omniverse Replicator software development kit into SkyVerse to generate physically accurate 3D synthetic data and build SDG tools to accelerate the training of perception networks beyond the robotics domain.

Once data from a site is collected, SkyVerse uses edge devices powered by NVIDIA Jetson Nano and Jetson AGX Xavier modules to automatically process up to terabytes of data per day, distilling it into kilobyte-size analytics that can be easily transferred to frontline users.

This data processing was sped up 3x by the NVIDIA CUDA parallel computing platform, according to Sanz. The team is also looking to deploy the new Jetson Orin modules for next-level performance.

“It’s not humanly possible to go through tens of thousands of images a day and extract critical analytics from them,” Sanz said. “So we’re helping to expand human eyesight with neural networks.”

Using pretrained models from the NVIDIA TAO Toolkit, Skycatch also built neural networks that can remove extraneous objects and vehicles from the visualizations, and texturize over these spots in the 3D mesh.

The digital terrain model, which has sub-five-centimeter precision, can then be brought into Omniverse for the creation of a digital twin using the easily extensible USD framework, custom SkyVerse Omniverse extensions and NVIDIA RTX GPUs.

“It took just around three months to build the Omniverse extensions, despite the complexity of our extensions’ capabilities, thanks to access to technical experts through NVIDIA Inception,” Sanz said.

Skycatch is working with one of Canada’s leading mining companies, Teck Resources, to implement the use of Omniverse-based digital twins for its project sites.

“Teck Resources has been using Skycatch’s compute engine across all of our mine sites globally and is now expanding visualization and simulation capabilities with SkyVerse and our own digital twin strategy,” said Preston Miller, lead of technology and innovation at Teck Resources. “Delivering near-real-time visual data will allow Teck teams to quickly contextualize mine sites and make faster operational decisions on mission-critical, time-sensitive projects.”

The Omniverse extensions built by Skycatch will be available soon — learn more.

Safety and Sustainability

AI-powered data analysis and digital twins can make operational processes for mining and construction companies safer, more sustainable and more efficient.

For example, according to Sanz, mining companies need the ability to quickly locate the toe and crest (or bottom and top) of “benches,” narrow strips of land beside an open-pit mine. When a machine is automated to go in and out of a mine, it must be programmed to stay 10 meters away from the crest at all times to avoid the risk of sliding, Sanz said.

Previously, surveying and analyzing landforms to determine precise toes and crests typically took up to five days. With the help of NVIDIA AI, SkyVerse can now generate this information within minutes, Sanz said.

In addition, SkyVerse eliminates 10,000 open-pit interactions for customers per year, per site, Sanz said. These are situations in which humans and vehicles can intersect within a mine, posing a safety threat.

“At its core, Skycatch’s goal is to provide context and full awareness for what’s going on at a mining or construction site in near-real time — and better environmental context leads to enhanced safety for workers,” Sanz said.

Skycatch aims to boost sustainability efforts for the mining industry, too.

“In addition to mining companies, governmental organizations want visibility into how mines are operating — whether their surrounding environments are properly taken care of — and our platform offers these insights,” Sanz said.

Plus, minerals like cobalt, nickel and lithium are required for electrification and the energy transition. These all come from mine sites, Sanz said, which can become safer and more efficient with the help of SkyVerse’s digital twins and vision AI.

Dive deeper into technology for a sustainable future with Skycatch and other Inception partners in the on-demand webinar, Powering Energy Startup Success With NVIDIA Inception.

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Learn more about and apply to join NVIDIA Inception.

The post Unearthing Data: Vision AI Startup Digs Into Digital Twins for Mining and Construction appeared first on NVIDIA Blog.

Check Out 26 New Games Streaming on GeForce NOW in November

It’s a brand new month, which means this GFN Thursday is all about the new games streaming from the cloud.

In November, 26 titles will join the GeForce NOW library. Kick off with 11 additions this week, like Total War: THREE KINGDOMS and new content updates for Genshin Impact and Apex Legends.

Plus, leading 5G provider Rain has announced it will be introducing “GeForce NOW powered by Rain” to South Africa early next year. Look forward to more updates to come.

And don’t miss out on the 40% discount for GeForce NOW 6-month Priority memberships. This offer is only available for a limited time.

Build Your Empire

Lead the charge this week with Creative Assembly and Sega’s Total War: THREE KINGDOMS, a turn-based, empire-building strategy game and the 13th entry in the award-winning Total War franchise. Become one of many great leaders from history and conquer enemies to build a formidable empire.

Total War Three Kingdoms
Resolve an epic conflict in ancient China to unify the country and rebuild the empire.

The game is set in ancient China, and gamers must save the country from the oppressive rule of a warlord. Choose from a cast of a dozen legendary heroic characters to unify the nation and dominate enemies. Each has their own agenda, and there are plenty of different tactics for players to employ.

Extend your campaign with gaming sessions of up to six hours at 1080p and 60 frames per second for Priority members. With an RTX 3080 membership, gain support for 1440p 120 FPS streaming and sessions of up to eight hours, with performance that will bring foes to their knees.

Sega Row on GeForce NOW
Members can find ‘Total War: THREE KINGDOMS’ and other Sega games in a dedicated row in the GeForce NOW app.

More to Explore

Alongside the 11 new games streaming this week, members can jump into updates for the hottest free-to-play titles on GeForce NOW.

Genshin Impact Version 3.2, “Akasha Pulses, the Kalpa Flame Rises,” is available to stream from the cloud. This latest update introduces the last chapter of the Sumeru Archon Quest, two new playable characters — Nahida and Layla — as well as new events and gameplay. Stream it now on PC, Mac, Chromebook or mobile devices with enhanced touch controls.

Genshin Impact 3.2 on GeForce NOW
Catch the conclusion of the main storyline for Sumeru, the newest region added to ‘Genshin Impact.’

Or squad up in Apex Legends: Eclipse, available to stream now on the cloud. Season 15 brings with it the new Broken Moon map, the newest defensive Legend — Catalyst — and much more.

Apex Legends on GeForce NOW
Don’t mess-a with Tressa.

Also, after working closely with Square Enix, we’re happy to share that members can stream STAR OCEAN THE DIVINE FORCE on GeForce NOW beginning this week.

Here’s the full list of games joining this week:

  • Against the Storm (Epic Games, and new release on Steam)
  • Horse Tales: Emerald Valley Ranch (New release on Steam, Nov. 3)
  • Space Tail: Every Journey Leads Home (New release on Steam, Nov. 3)
  • The Chant (New release on Steam, Nov. 3)
  • The Entropy Centre (New release on Steam, Nov. 3)
  • WRC Generations — The FIA WRC Official Game (New release on Steam, Nov. 3)
  • Filament (Free on Epic Games, Nov. 3-10)
  • STAR OCEAN THE DIVINE FORCE (Steam)
  • PAGUI (Steam)
  • RISK: Global Domination (Steam)
  • Total War: THREE KINGDOMS (Steam)

Arriving in November

But wait, there’s more! Among the 26 games joining GeForce NOW in November is the highly anticipated Warhammer 40,000: Darktide, with support for NVIDIA RTX and DLSS.

Here’s a sneak peek:

  • The Unliving (New release on Steam, Nov. 7)
  • TERRACOTTA (New release on Steam and Epic Games, Nov. 7)
  • A Little to the Left (New release on Steam, Nov. 8)
  • Yum Yum Cookstar (New release on Steam, Nov. 11)
  • Nobody — The Turnaround (New release on Steam, Nov. 17)
  • Goat Simulator 3 (New release on Epic Games, Nov. 17)
  • Evil West (New release on Steam, Nov. 22)
  • Colortone: Remixed (New release on Steam, Nov. 30)
  • Warhammer 40,000: Darktide (New release on Steam, Nov. 30)
  • Heads Will Roll: Downfall (Steam)
  • Guns, Gore and Cannoli 2 (Steam)
  • Hidden Through Time (Steam)
  • Caveblazers (Steam)
  • Railgrade (Epic Games)
  • The Legend of Tianding (Steam)

While The Unliving was originally announced in October, the release date of the game shifted to Monday, Nov. 7.

Howlin’ for More

October brought more treats for members. Don’t miss the 14 extra titles added last month. 

With all of these sweet new titles coming to the cloud, getting your game on is as easy as pie. Speaking of pie, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.

The post Check Out 26 New Games Streaming on GeForce NOW in November appeared first on NVIDIA Blog.

Stormy Weather? Scientist Sharpens Forecasts With AI

Editor’s note: This is the first in a series of blogs on researchers advancing science in the expanding universe of high performance computing.

A perpetual shower of random raindrops falls inside a three-foot metal ring Dale Durran erected outside his front door (shown above). It’s a symbol of his passion for finding order in the seeming chaos of the planet’s weather.

A part-time sculptor and full-time professor of atmospheric science at the University of Washington, Durran has co-authored dozens of papers describing patterns in Earth’s ever-changing skies. It’s a field for those who crave the confounding challenge of expressing, in math, the endless dance of air and water.

meteorologist Dale Durran
Dale Durran

In 2019, Durran acquired a new tool, AI. He teamed up with a grad student and a Microsoft researcher to build the first model to demonstrate deep learning’s potential to predict the weather.

Though crude, the model outperformed the complex equations used for the first computer-based forecasts. The descendants of those equations now run on the world’s biggest supercomputers. In contrast, AI slashes the traditional load of required calculations and works faster on much smaller systems.

“It was a dramatic revelation that said we better jump into this with both feet,” Durran recalled.

Sunny Outlook for AI

Last year, the team took their work to the next level. Their latest neural network can process 320 six-week forecasts in less than a minute on the four NVIDIA A100 Tensor Core GPUs in an NVIDIA DGX Station. That’s more than 6x the 51 forecasts today’s supercomputers synthesize to make weather predictions.

In a show of how rapidly the technology is evolving, the model forecast the path Hurricane Irma took through the Caribbean in 2017 almost as well as traditional methods. The same model could also crank out a week’s forecast in a tenth of a second on a single NVIDIA V100 Tensor Core GPU.

AI forecasts Hurricane Irma's path
Durran’s latest work used AI to forecast Hurricane Irma’s path in Florida more efficiently and nearly as accurately as traditional methods.

Durran foresees AI crunching thousands of forecasts simultaneously to deliver a clearer statistical picture with radically fewer resources than conventional equations. Some suggest the performance advances will be measured in as many as five orders of magnitude and use a fraction of the power.

AI Ingests Satellite Data

The next big step could radically widen the lens for weather watchers.

The complex equations today’s predictions use can’t readily handle the growing wealth of satellite data on details like cloud patterns, soil moisture and drought stress in plants. Durran believes AI models can.

One of his graduate students hopes to demonstrate this winter an AI model that directly incorporates satellite data on global cloud cover. If successful, it could point the way for AI to improve forecasts using the deluge of data types now being collected from space.

In a separate effort, researchers at the University of Washington are using deep learning to apply a grid that astronomers use to track stars to the study of the atmosphere. The novel mesh could help map out a whole new style of weather forecasting, Durran said.

Harvest of a Good Season

In nearly 40 years as an educator, Durran has mentored dozens of students and written two highly rated textbooks on fluid dynamics, the math used to understand weather and climate.

One of his students, Gretchen Mullendore, now heads a lab at the U.S. National Center for Atmospheric Research, working with top researchers to improve weather forecasting models.

“I was lucky to work with Dale in the late 1990s and early 2000s on adapting numerical weather prediction to the latest hardware at the time,” said Mullendore. “I am so thankful to have had an advisor that showed me it’s cool to be excited by science and computers.”

Carrying on a Legacy

In January, Durran is slated to receive the American Meteorological Society’s most prestigious honor, the Jule G. Charney Medal. It’s named for the scientist who worked with John von Neumann in the 1950s to develop the algorithms weather forecasters still use today.

In 1979, Charney also wrote one of the earliest scientific papers on global warming. Following in his footsteps, Durran wrote two editorials last year for The Washington Post to help a broad audience understand the impacts of climate change and rising CO2 emissions.

The editorials articulate a passion he discovered at his first job in 1976, creating computer models of air pollution trends. “I decided I’d rather work on the front end of that problem,” he said of his career shift to meteorology.

Meteorology is a field notoriously bedeviled by effects as subtle as a butterfly’s wings, a challenge that motivates his passion to advance science.

The post Stormy Weather? Scientist Sharpens Forecasts With AI appeared first on NVIDIA Blog.
