Happy Thanksgiving, members. It’s a very special GFN Thursday.
As the official kickoff to what’s sure to be a busy holiday season for our members around the globe, this week’s GFN Thursday brings a few reminders of the joys of PC gaming in the cloud.
Plus, kick back for the holiday with four new games coming to the GeForce NOW library this week.
Game Away the Holiday
The holidays are often spent celebrating with extended family — which is great, until Aunt Petunia starts trying to teach you cross-stitch or Grandpa Harold begins another one of his fishing trip stories. If you need a break from the relatives, get your gaming in, powered by the cloud.
With GeForce NOW, nearly any device can become a GeForce gaming rig. Grab Uncle Buck’s Chromebook and get a few rounds of Apex Legends in, or check in with Star-Lord and the crew from your mobile device in Marvel’s Guardians of the Galaxy. You can even squad up on some MacBooks with your cousins for a few Destiny 2 raids at the kids’ table, where we know the real fun is.
How about escaping for a bit to a tropical jungle? For a limited time, get a copy of Crysis Remastered free with the purchase of a six-month Priority membership or the new GeForce NOW RTX 3080 membership. Terms and conditions apply.
GeForce NOW members can experience the first game in the Crysis series — or 1,000+ more games — across nearly all of their devices, turning even a Mac or a mobile device into the ultimate gaming rig. It’s the perfect way to keep the gaming going after pumpkin pie is served.
The Gift of Gaming
The easiest upgrade in PC gaming makes a perfect gift for gamers.
GeForce NOW Priority Membership digital gift cards are now available in 2-month, 6-month or 12-month options. Give the gift of powerful PC gaming to a special someone who uses a low-powered device, a budding gamer using a Mac, or a squadmate who’s gaming on the go.
Gift cards can be redeemed on an existing GeForce NOW account or added to a new one. Existing Founders and Priority members will have the number of months added to their accounts.
Eat, Play and Be Merry
Between bites of stuffing and mashed potatoes, members can look for the following games joining the GeForce NOW library:
Fate Seeker II (day-and-date release on Steam, Nov. 23)
theHunter: Call of the Wild (day-and-date release on Epic Games Store, Nov. 25)
We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.
We initially planned to add Farming Simulator 22 to GeForce NOW in November, but discovered an issue during our onboarding process. We hope to add the game in the coming weeks.
Whether you’re celebrating Thanksgiving or just looking forward to a gaming-filled weekend, tell us what you’re thankful for on Twitter or in the comments below.
A picture worth a thousand words now takes just three or four words to create, thanks to GauGAN2, the latest version of NVIDIA Research’s wildly popular AI painting demo.
The deep learning model behind GauGAN allows anyone to channel their imagination into photorealistic masterpieces — and it’s easier than ever. Simply type a phrase like “sunset at a beach” and AI generates the scene in real time. Add an additional adjective like “sunset at a rocky beach,” or swap “sunset” to “afternoon” or “rainy day” and the model, based on generative adversarial networks, instantly modifies the picture.
With the press of a button, users can generate a segmentation map, a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like sky, tree, rock and river, allowing the smart paintbrush to incorporate these doodles into stunning images.
The new GauGAN2 text-to-image feature can now be tried on NVIDIA AI Demos, where visitors can explore the latest demos from NVIDIA Research. With the versatility of text prompts and sketches, GauGAN2 lets users create and customize scenes more quickly and with finer control.
An AI of Few Words
GauGAN2 combines segmentation mapping, inpainting and text-to-image generation in a single model, making it a powerful tool to create photorealistic art with a mix of words and drawings.
The demo is one of the first to combine multiple modalities — text, semantic segmentation, sketch and style — within a single GAN framework. This makes it faster and easier to turn an artist’s vision into a high-quality AI-generated image.
Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller, add a couple of trees in the foreground or put clouds in the sky.
It doesn’t just create realistic images — artists can also use the demo to depict otherworldly landscapes.
Imagine, for instance, recreating a landscape from the iconic planet of Tatooine in the Star Wars franchise, which has two suns. All that’s needed is the text “desert hills sun” to create a starting point, after which users can quickly sketch in a second sun.
It’s an iterative process, where every word the user types into the text box adds more to the AI-created image.
The AI model behind GauGAN2 was trained on 10 million high-quality landscape images using the NVIDIA Selene supercomputer, an NVIDIA DGX SuperPOD system that’s among the world’s 10 most powerful supercomputers. The researchers used a neural network that learns the connection between words and the visuals they correspond to, such as “winter,” “foggy” or “rainbow.”
Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher quality of images.
The GauGAN2 research demo illustrates the future possibilities for powerful image-generation tools for artists. One example is the NVIDIA Canvas app, which is based on GauGAN technology and available to download for anyone with an NVIDIA RTX GPU.
NVIDIA Research has more than 200 scientists around the globe, focused on areas including AI, computer vision, self-driving cars, robotics and graphics. Learn more about their work.
You don’t need a private plane to be at the forefront of personal travel.
Electric automaker Xpeng took the wraps off the G9 SUV this week at the international Auto Guangzhou show in China. The intelligent, software-defined vehicle is built on the high-performance compute of NVIDIA DRIVE Orin and delivers AI capabilities that are continuously upgraded with each over-the-air update.
The new flagship SUV debuts Xpeng’s centralized electronic and electrical architecture and Xpilot 4.0 advanced driver assistance system for a seamless driving experience. The G9 is also compatible with the next-generation “X-Power” superchargers, which can add up to 124 miles of range in 5 minutes.
The Xpeng G9 and its fellow EVs are elevating the driving experience with intelligent features that are always at the cutting edge.
Intelligence at the Edge
The G9 is intelligently designed from the inside out.
The SUV is the first to be equipped with Xpilot 4.0, an AI-assisted driving system capable of address-to-address automated driving, including valet parking.
Xpilot 4.0 is built on two NVIDIA DRIVE Orin systems-on-a-chip (SoCs), achieving 508 trillion operations per second (TOPS). It uses an 8-million-pixel front-view camera and 2.9-million-pixel side-view cameras that cover front, rear, left and right views, as well as a highly integrated and expandable domain controller.
This technology is incorporated into a centralized compute architecture for a streamlined design, powerful performance and seamless upgrades.
Charging Ahead
The G9 is designed for the international market, bringing software-defined innovation to roads around the world.
It incorporates new signature details, such as daytime running lights designed to make a sharp-eyed impression. Four daytime running lights at the top and bottom of the headlights form the Xpeng logo. These headlights also include discrete lidar sensors, merging cutting-edge technology with an elegant exterior.
In addition to fast charging, the electric SUV meets global sustainability requirements as well as NCAP five-star safety standards. The G9 is scheduled to officially launch in China in the third quarter of 2022, with plans to expand to global markets soon after.
The intelligent EV joined a growing lineup of software-defined vehicles powered by NVIDIA DRIVE that are transforming the way the world moves. Also on the Auto Guangzhou show floor until the event closes on Nov. 28 are the Human Horizons HiPhi Z Digital-GT, NIO ET7 and SAIC’s IM Motors all-electric lineup, displaying the depth and diversity of the NVIDIA DRIVE Orin ecosystem.
When Julien Trombini and Guillaume Cazenave founded video-analytics startup Two-i four years ago, they had an ambitious goal: improving the quality of urban life by one day being able to monitor a city’s roads, garbage collection and other public services.
Along the way, the pair found a wholly different niche. Today, the company’s technology — which combines computer vision, data science and deep learning — is helping to prevent deadly accidents in the oil and gas industry, one of the world’s most dangerous sectors.
Initially, Trombini and Cazenave envisioned a system that would enable civic leaders to see what improvements were needed across a municipality.
“It would be like having a weather map of the city, only one that measures efficiency,” said Trombini, who serves as chairman of Two-i, an NVIDIA Metropolis partner based in Metz, a historic city in northeast France.
That proved a tall order, so the two refocused on specific facilities, such as stadiums, retirement homes and transit stations, where the company’s tech helps with security and incident detection. For instance, it can alert the right people when a retirement home resident falls in a corridor. Or when a transit rider using a wheelchair can’t get on a train because of a broken lift.
More recently, the company was approached by ExxonMobil to help with a potentially deadly issue: improving worker safety around open oil tanks.
Together with the energy giant, Two-i has created an AI-enabled video analytics application that detects when individuals approach a danger zone and risk falling, and immediately alerts others so they can take quick action. In its initial months of operation, the vision AI system prevented two accidents from occurring.
While this use case is highly specific, the company’s AI architecture is designed to flexibly support many different algorithms and functions.
“The algorithms are exactly the same as what we’re using for different clients,” said Trombini. “It’s the same technology, but it’s packaged in a different way.”
Making the Most of Vision AI
Two-i’s flexibility stems from its use of the NVIDIA Metropolis platform for AI-enabled video analytics applications, leveraging advanced tools and a full-stack approach.
To do so, it relies on a variety of NVIDIA-Certified Systems, using the latest workstation and data center GPUs based on the high-performance NVIDIA Ampere architecture, for both training and inference. To shorten training times further, Two-i is looking to test its huge image dataset on the powerful NVIDIA A100 GPU.
The company looks to frequently upgrade its GPUs to ensure it’s offering customers the fastest possible solution, no matter how many cameras are feeding data into its system.
“The time we can save there is crucial, and the better the hardware, the more accurate the results and faster we get to market,” said Trombini.
Two-i taps the CUDA 11.1 toolkit and cuDNN 8.1 library to support its deep learning process, and NVIDIA TensorRT to accelerate inference throughput.
Trombini says one of the most compelling pieces of NVIDIA tech is the NVIDIA TAO Toolkit, which helps the company keep costs down as it tinkers with its algorithms.
“The heavier the algorithm, the more expensive,” he said. “We use the TAO toolkit to prune algorithms and make them more tailored to the task.”
For example, training that initially took up to two weeks has been slashed to three days using the NVIDIA TAO Toolkit, a CLI- and Jupyter Notebook-based version of the NVIDIA Train, Adapt and Optimize framework.
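The pruning idea Trombini describes can be sketched in a few lines. The snippet below is a toy, weight-level magnitude pruner in NumPy, not the TAO Toolkit's actual channel-level pruning workflow; the function name, matrix size and sparsity figure are illustrative assumptions:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A toy illustration of pruning: drop the least important
    parameters so the network is lighter and cheaper to run.
    (TAO prunes whole channels and then retrains; this sketch
    simply zeroes individual weights.)
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # stand-in for one layer's weights
pruned = magnitude_prune(w, sparsity=0.5)
print("nonzero fraction:", np.count_nonzero(pruned) / w.size)
```

A lighter model means fewer operations per frame, which is where the training- and inference-cost savings the company cites come from.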
Two-i has also started benchmarking NVIDIA’s pretrained models against its algorithms and begun using the NVIDIA DeepStream SDK to enhance its video analytics pipeline.
Building on Success
Two-i sees its ability to solve complicated problems in a variety of settings, such as for ExxonMobil, as a springboard to swinging back around to its original smart city aspirations.
Already, it’s monitoring all roads in eight European cities, analyzing traffic flows and understanding where cars are coming from and going to.
Trombini recognizes that Two-i has to keep its focus on delivering one benefit after another to achieve the company’s long-term goals.
“It’s coming slowly,” he said, “but we are starting to implement our vision.”
By the time the night was over, it felt like Jensen Huang had given everyone in the ballroom a good laugh and a few things to think about.
The annual dinner of the Semiconductor Industry Association — a group of companies that together employ a quarter-million workers in the U.S. and racked up U.S. sales of over $200 billion last year — attracted the governors of Indiana and Michigan and some 200 industry executives, including more than two dozen chief executives.
They came to network, get an update on the SIA’s work in Washington, D.C., and bestow the 2021 Robert N. Noyce award, their highest honor, on the founder and CEO of NVIDIA.
“Before we begin, I want to say it’s so nice to be back in person,” said John Neuffer, SIA president and CEO, to applause from a socially distanced audience.
The group heard comments on video from U.S. Senator Chuck Schumer, of New York, and U.S. Commerce Secretary Gina Raimondo about pending legislation supporting the industry.
Recognizing ‘an Icon’
Turning to the Noyce award, Neuffer introduced Huang as “an icon in our industry. From starting NVIDIA in a rented townhouse in Fremont, California, in 1993, he has become one of the industry’s longest-serving and most successful CEOs of what is today, by market cap, the world’s eighth most valuable company,” he said.
“I accept this on behalf of all NVIDIA’s employees because it reflects their body of work,” Huang said. “However, I’d like to keep this at my house,” he quipped.
Since 1991, the annual Noyce award has recognized tech and business leaders including Jack Kilby (1995), an inventor of the integrated circuit that paved the way for today’s chips.
Two of Huang’s mentors have won Noyce awards: Morris Chang, the founder and former CEO of TSMC, the world’s first and largest chip foundry, in 2008; and John Hennessy, the Alphabet chairman and former Stanford president, in 2018. Huang, his former student, interviewed Hennessy on stage at the 2018 event.
Programming on an Apple II
In an on-stage interview with John Markoff, author and former senior technology writer for The New York Times, Huang shared some of his story and his observations on technology and the industry.
He recalled high school days programming on an Apple II computer, getting his first job as a microprocessor designer at AMD and starting NVIDIA with Chris Malachowsky and Curtis Priem.
“Chris and Curtis are the two brightest engineers I have met … and all of us loved building computers. Success has a lot to do with luck, and part of my luck was meeting them,” he said.
Making Million-x Leaps
Fast-forwarding to today, he shared his vision for accelerated computing with AI in projects like Earth-2, a supercomputer for climate science.
“We will build a digital twin of Earth and put some of the brightest computer scientists on the planet to work on it” to explore and mitigate impacts of climate change, he said. “We could solve some of the problems in climate science in our generation.”
He also expressed optimism about Silicon Valley’s culture of innovation.
“The concept of Silicon Valley doesn’t have to be geographic, we can carry this sensibility all over the world, but we have to be mindful of being humble and recognize we’re not here alone, so we need to be in service to others,” he said.
A Pivotal Role in AI
The Noyce award came two months after TIME Magazine named Huang one of the 100 most influential people of 2021. He was one of seven honored on the iconic weekly magazine’s cover along with U.S. President Joe Biden, Tesla CEO Elon Musk and singer Billie Eilish.
A who’s who of tech luminaries including executives from Adobe, IBM and Zoom shared stories of Huang and NVIDIA’s impact in a video, included below, screened at the event. In it, Andrew Ng, a machine-learning pioneer and entrepreneur, described the pivotal role NVIDIA’s CEO has played in AI.
“A lot of the progress in AI over the last decade would not have been possible if not for Jensen’s visionary leadership,” said Ng, founder and CEO of DeepLearning.AI and Landing AI. “His impact on the semiconductor industry, AI and the world is almost incalculable.”
Manufacturers are bringing product designs to life in a newly immersive world.
Rendermedia, based in the U.K., specializes in immersive solutions for commerce and industries. The company provides clients with tools and applications for photorealistic virtual, augmented and extended reality (collectively known as XR) in areas like product design, training and collaboration.
With NVIDIA RTX graphics and NVIDIA CloudXR, Rendermedia helps businesses get their products in the hands of customers and audiences, allowing them to interact and engage collaboratively on any device, from any location.
Expanding XR Spaces With CloudXR
Previously, Rendermedia could only deliver realistic rendered products to customers through a CG-rendered film, which was often time-consuming to create. It also didn’t allow consumers to interact dynamically with the product.
With NVIDIA CloudXR, Rendermedia and its product manufacturing clients can quickly render and create fully interactive simulated products in photographic detail, while also reducing their time to market.
This can be achieved by transforming raw product computer-aided design (CAD) into a realistic digital twin of the product. The digital twin can then be used across the entire organization, from sales and marketing to health and safety teams.
Rendermedia can also use CloudXR to offer organizations the ability to design, market, sell and train different teams and customers around their products in different languages worldwide.
“With both the range of 3D data evolving and devices enabling us to interact with products and environments in scale, this ultimately drives the demands around the complexity and sophistication across products and environments within an organization,” said Rendermedia founder Mark Miles.
Rendermedia customers Airbus and National Grid are using VR experiences to showcase future products and designs in realistic scenarios.
Airbus, which designs, manufactures and sells aerospace products worldwide, has worked with Rendermedia on over 35 virtual experiences. Recently, Rendermedia helped bring Airbus’ vision to life by creating VR experiences that allowed users to experience its newest products in complete context and at scale.
National Grid is an electricity and gas utility company headquartered in the U.K. With the help of Rendermedia, National Grid used photorealistic digital twins of real-life industrial sites for virtual training for employees.
The power of NVIDIA CloudXR and RTX technology allows product manufacturers to visualize designs and 3D models using Rendermedia’s platform with more realism. And they can easily make changes to designs in real time, helping users iterate more often and get to final product designs quicker. CloudXR is cost-efficient and provides common standards for training across every learner.
“CloudXR combined with RTX means that our customers can virtualize any part of their business and access it on any device at scale,” said Miles. “This is especially important in training, where the abundance of platforms and devices that people consume can vary widely. CloudXR means that any training content can be consumed at the same level of detail, so content does not have to be readapted for different devices.”
With NVIDIA CloudXR, Rendermedia can further push the boundaries of photorealistic graphics in immersive environments, all without worrying about delivering to different devices and audiences.
Learn more about NVIDIA CloudXR and how it can enhance workflows.
You’ve reached your weekly gaming checkpoint. Welcome to a positively packed GFN Thursday.
This week delivers a sweet deal for gamers ready to upgrade their PC gaming from the cloud. With any new, paid six-month Priority or GeForce NOW RTX 3080 subscription, members will receive Crysis Remastered for free for a limited time.
Gamers and music lovers alike can get hyped for an awesome entertainment experience playing Core this week and visiting the digital world of Oberhasli. There, they’ll enjoy the anticipated deadmau5 “Encore” concert and event.
And what kind of GFN Thursday would it be without new games? We’ve got eight new titles joining the GeForce NOW library this week.
For a limited time, get a copy of Crysis Remastered free with select GeForce NOW memberships. Purchase a six-month Priority membership, or the new GeForce NOW RTX 3080 membership, and get a free redeemable code for Crysis Remastered on the Epic Games Store.
Current monthly Founders and Priority members are eligible by upgrading to a six-month membership. Founders, exclusively, can upgrade to a GeForce NOW RTX 3080 membership and receive 10 percent off the subscription price with no risk to their current Founders benefits. They can revert to their original Founders plan and retain “Founders for Life” pricing, as long as they remain in consistent good standing on any paid membership plan.
This special bonus also applies to existing GeForce NOW RTX 3080 members and preorders, as a thank you for being among the first to upgrade to the next generation in cloud gaming. Current members on the GeForce NOW RTX 3080 plan will receive game codes in the days ahead, while members who have preordered but haven’t yet been activated will receive their game code when their GeForce NOW RTX 3080 service is activated. Please note, terms and conditions apply.
Stream Crytek’s classic first-person shooter, remastered with graphics optimized for a new generation of hardware and complete with stunning support for RTX ON and DLSS. GeForce NOW members can experience the first game in the Crysis series — or 1,000+ more games — across nearly all of their devices, turning even a Mac or a mobile device into the ultimate gaming rig.
This GFN Thursday brings Core and deadmau5 to the cloud. From shooters, survival and action adventure to MMORPGs, platformers and party games, Core is a multiverse of exciting gaming entertainment with over 40,000 free-to-play, Unreal-powered games and worlds.
This week, members can visit the fully immersive digital world of Oberhasli — designed with the vision of the legendary producer, musician and DJ, deadmau5 — and enjoy an epic “Encore” concert and event. Catch the deadmau5 performance, with six showings running from Friday, Nov. 19, to Saturday, Nov. 20. The concert becomes available every hour, on the hour, the following week.
The fun continues with three games inspired by deadmau5’s music — Hop ‘Til You Drop, Mau5 Hau5 and Ballon Royale — set throughout 19 dystopian worlds featured in the official When The Summer Dies music video. Party on with exclusive deadmau5 skins, emotes and mounts, and interact with other fans while streaming the exclusive, interactive and must-experience deadmau5 performance celebrating the launch of Core on GeForce NOW with the “Encore” concert this week.
A New Challenge Calls
It wouldn’t be GFN Thursday without a new set of games coming to the cloud. Get ready to grind through the newest titles joining the GeForce NOW library this week:
Combat Mission Cold War (New release on Steam, Nov. 16)
The Last Stand: Aftermath (New release on Steam, Nov. 16)
We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.
Update on ‘Bright Memory: Infinite’
Bright Memory: Infinite was added last week, but during onboarding it was discovered that enabling RTX in the game requires an upcoming operating system upgrade to GeForce NOW servers. We expect the update to be complete in December and will provide more information here when it happens.
GeForce NOW Coming to LG Smart TVs
We’re working with LG Electronics to add support for GeForce NOW to LG TVs, starting with a beta release of the app in the LG Content Store for select 2021 LG OLED, QNED MiniLED and NanoCell models. If you have one of the supported TVs, check it out and share feedback to help us improve the experience.
NVIDIA-powered systems won four of five tests in MLPerf HPC 1.0, an industry benchmark for AI performance on scientific applications in high performance computing.
They’re the latest results from MLPerf, a set of industry benchmarks for deep learning first released in May 2018. MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI.
MLPerf HPC 1.0 measured training of AI models in three typical workloads for HPC centers.
CosmoFlow estimates details of objects in images from telescopes.
DeepCAM tests detection of hurricanes and atmospheric rivers in climate data.
OpenCatalyst tracks how well systems predict forces among atoms in molecules.
Each test has two parts. A measure of how fast a system trains a model is called strong scaling. Its counterpart, weak scaling, is a measure of maximum system throughput, that is, how many models a system can train in a given time.
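The two metrics can be illustrated with a quick back-of-the-envelope sketch; all node counts and timings below are hypothetical stand-ins, not actual benchmark results:

```python
# Strong scaling: time to train ONE model as more nodes are added.
train_minutes = {16: 120.0, 64: 35.0, 256: 11.0}   # nodes -> minutes (made up)
for nodes, minutes in train_minutes.items():
    speedup = train_minutes[16] / minutes
    print(f"{nodes:>4} nodes: {minutes:6.1f} min  (speedup {speedup:.1f}x)")

# Weak scaling: how many models the full system trains in a fixed window,
# running many modest-sized jobs side by side.
nodes_per_job = 16
concurrent_jobs = 256
minutes_per_job = 120.0
window_minutes = 8 * 60                             # an 8-hour window
models_trained = concurrent_jobs * (window_minutes // minutes_per_job)
print(f"throughput: {models_trained:.0f} models trained in 8 hours "
      f"({concurrent_jobs} jobs x {nodes_per_job} nodes each)")
```

Strong scaling rewards making a single job as fast as possible; weak scaling rewards keeping the whole machine busy with many jobs at once.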
Compared to the best results in strong scaling from last year’s MLPerf 0.7 round, NVIDIA delivered 5x better results for CosmoFlow. In DeepCAM, we delivered nearly 7x more performance.
The Perlmutter Phase 1 system at Lawrence Berkeley National Lab led in strong scaling in the OpenCatalyst benchmark using 512 of its 6,144 NVIDIA A100 Tensor Core GPUs.
In the weak-scaling category, we led DeepCAM using 16 nodes per job and 256 simultaneous jobs. All our tests ran on NVIDIA Selene (pictured above), our in-house system and the world’s largest industrial supercomputer.
The latest results demonstrate another dimension of the NVIDIA AI platform and its performance leadership. They mark the eighth straight time NVIDIA delivered top scores in MLPerf benchmarks that span AI training and inference in the data center, the cloud and the network’s edge.
A Broad Ecosystem
Seven of the eight participants in this round submitted results using NVIDIA GPUs.
They include the Jülich Supercomputing Centre in Germany, the Swiss National Supercomputing Centre and, in the U.S., the Argonne and Lawrence Berkeley National Laboratories, the National Center for Supercomputing Applications and the Texas Advanced Computing Center.
“With the benchmark test, we have shown that our machine can unfold its potential in practice and contribute to keeping Europe on the ball when it comes to AI,” said Thomas Lippert, director of the Jülich Supercomputing Centre, in a blog.
The MLPerf benchmarks are backed by MLCommons, an industry group led by Alibaba, Google, Intel, Meta, NVIDIA and others.
How We Did It
The strong showing is the result of a mature NVIDIA AI platform that includes a full stack of software.
In this round, we tuned our code with tools available to everyone, such as NVIDIA DALI to accelerate data processing and CUDA Graphs to reduce small-batch latency for efficiently scaling up to 1,024 or more GPUs.
For a deeper dive into how we used these tools, see our developer blog.
All the software we used for our submissions is available from the MLPerf repository. We regularly add such code to the NGC catalog, our software hub for pretrained AI models, industry application frameworks, GPU applications and other software resources.
A partial differential equation is “the most powerful tool humanity has ever created,” Cornell University mathematician Steven Strogatz wrote in a 2009 New York Times opinion piece.
This quote opened last week’s GTC talk AI4Science: The Convergence of AI and Scientific Computing, presented by Anima Anandkumar, director of machine learning research at NVIDIA and professor of computing at the California Institute of Technology.
Anandkumar explained that partial differential equations are the foundation for most scientific simulations. And she showcased how this historic tool is now being made all the more powerful with AI.
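As a concrete, if tiny, example of the kind of equation such simulations solve, here is a minimal finite-difference solver for the 1D heat equation; it is illustrative only, with made-up grid and diffusivity values, while production simulations run in 3D at vastly larger scale:

```python
import numpy as np

# Solve du/dt = alpha * d2u/dx2 on [0, 1] with fixed (zero) boundaries,
# stepping an explicit finite-difference scheme forward in time.
nx, alpha = 101, 0.01
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # step size inside the stability limit
u = np.zeros(nx)
u[nx // 2] = 1.0                  # initial heat spike in the middle

for _ in range(500):
    # Discrete Laplacian: second derivative from neighboring grid points.
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + alpha * dt * lap
    u[0] = u[-1] = 0.0            # enforce Dirichlet boundaries

print(f"peak temperature after diffusion: {u.max():.4f}")
```

Each time step touches every grid point, which is why classical solvers become enormously expensive at the resolutions weather and climate models need, and why AI surrogates that skip the stepping are so attractive.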
“The convergence of AI and scientific computing is a revolution in the making,” she said.
Using new neural operator-based frameworks to learn and solve partial differential equations, AI can help us model weather forecasting 100,000x quicker — and carbon dioxide sequestration 60,000x quicker — than traditional models.
Speeding Up the Calculations
Anandkumar and her team developed the Fourier Neural Operator (FNO), a framework that allows AI to learn and solve an entire family of partial differential equations, rather than a single instance.
It’s the first machine learning method to successfully model turbulent flows with zero-shot super-resolution, meaning FNOs enable AI to make high-resolution inferences without the high-resolution training data that standard neural networks would require.
FNO-based machine learning greatly reduces the costs of obtaining information for AI models, improves their accuracy and speeds up inference by three orders of magnitude compared with traditional methods.
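The core trick can be sketched in a few lines of NumPy: a spectral-convolution layer applies learned weights in Fourier space, so the same weights work at any grid resolution. This is a simplified single-channel, real-weight sketch under stated assumptions, not the published FNO architecture, which uses complex weights per channel, several stacked layers and nonlinearities:

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """One spectral-convolution layer in the spirit of the FNO.

    Transform the input to Fourier space, scale the lowest n_modes
    frequencies by learned weights, discard the rest, and transform
    back. Because the layer acts on frequencies rather than grid
    points, identical weights apply at any resolution, which is the
    property behind zero-shot super-resolution.
    """
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # stand-in "learned" weights for 8 modes

coarse = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
fine = np.sin(np.linspace(0, 2 * np.pi, 256, endpoint=False))

# The very same weights run on a 64-point or a 256-point grid.
print(fourier_layer(coarse, w, 8).shape, fourier_layer(fine, w, 8).shape)
```

Train on coarse grids, infer on fine ones: that resolution independence is what lets the model skip the expensive high-resolution training data.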
Mitigating Climate Change
FNOs can be applied to make real-world impact in countless ways.
For one, they offer a 100,000x speedup over numerical methods and unprecedented fine-scale resolution for weather prediction models. By accurately simulating and predicting extreme weather events, the AI models can allow planning to mitigate the effects of such disasters.
The FNO model, for example, was able to accurately predict the trajectory and magnitude of Hurricane Matthew from 2016.
In the video below, the red line represents the observed track of the hurricane. The white cones show the National Oceanic and Atmospheric Administration’s hurricane forecasts based on traditional models. The purple contours mark the FNO-based AI forecasts.
As shown, the FNO model follows the trajectory of the hurricane with improved accuracy compared with the traditional method — and the high-resolution simulation of this weather event took just a quarter of a second to process on NVIDIA GPUs.
In addition, Anandkumar’s talk covered how FNO-based AI can be used to model carbon dioxide sequestration — capturing carbon dioxide from the atmosphere and storing it underground, which scientists have said can help mitigate climate change.
Researchers can model and study how carbon dioxide would interact with materials underground using FNOs 60,000x faster than with traditional methods.
Anandkumar said the FNO model is also a significant step toward building a digital twin of Earth.
The new NVIDIA Modulus framework for training physics-informed machine learning models and NVIDIA Quantum-2 InfiniBand networking platform equip researchers and developers with the tools to combine the powers of AI, physics and supercomputing — to help solve the world’s toughest problems.
“I strongly believe this is the future of science,” Anandkumar said.
She’ll delve into these topics further at a SC21 plenary talk, taking place on Nov. 18 at 10:30 a.m. Central time.
Modern computing workloads — including scientific simulations, visualization, data analytics, and machine learning — are pushing supercomputing centers, cloud providers and enterprises to rethink their computing architecture.
No processor, network or software optimization alone can address the latest needs of researchers, engineers and data scientists. Instead, the data center is the new unit of computing, and organizations have to look at the full technology stack.
The latest rankings of the world’s most powerful systems show continued momentum for this full-stack approach in the latest generation of supercomputers.
NVIDIA technologies accelerate over 70 percent, or 355, of the systems on the TOP500 list released at the SC21 high performance computing conference this week, including over 90 percent of all new systems. That’s up from 342 systems, or 68 percent, of the machines on the TOP500 list released in June.
NVIDIA also continues to have a strong presence on the Green500 list of the most energy-efficient systems, powering 23 of the top 25 systems on the list, unchanged from June. On average, NVIDIA GPU-powered systems deliver 3.5x higher power efficiency than non-GPU systems on the list.
Highlighting the emergence of a new generation of cloud-native systems, Microsoft’s GPU-accelerated Azure supercomputer ranked 10th on the list, the first top 10 showing for a cloud-based system.
AI is revolutionizing scientific computing. The number of research papers leveraging HPC and machine learning has skyrocketed in recent years, growing from roughly 600 ML + HPC papers submitted in 2018 to nearly 5,000 in 2020.
The ongoing convergence of HPC and AI workloads is also underscored by new benchmarks such as HPL-AI and MLPerf HPC.
HPL-AI is an emerging benchmark of converged HPC and AI workloads that uses mixed-precision math — the basis of deep learning and many scientific and commercial jobs — while still delivering the full accuracy of double-precision math, which is the standard measuring stick for traditional HPC benchmarks.
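The core trick HPL-AI measures — factor and solve fast in low precision, then recover full double-precision accuracy with iterative refinement — can be sketched in a few lines. This is a toy NumPy illustration of the technique, not the benchmark code; a real run would use an FP16/FP32 factorization on GPUs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned test system
b = rng.standard_normal(n)

# Fast, low-precision solve (stands in for the mixed-precision factorization).
A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)

# Iterative refinement: compute the residual in FP64, correct in low precision.
for _ in range(3):
    r = b - A @ x
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

rel_residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(rel_residual)  # close to double-precision machine epsilon
```

The expensive O(n³) work happens in the cheap precision; only the O(n²) residual updates run in FP64, which is why mixed-precision hardware pays off so dramatically here.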
And MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI, with the benchmark measuring performance on three key workloads for HPC centers: astrophysics (CosmoFlow), weather (DeepCAM) and molecular dynamics (OpenCatalyst).
NVIDIA addresses the full stack with GPU-accelerated processing, smart networking, GPU-optimized applications, and libraries that support the convergence of AI and HPC. This approach has supercharged workloads and enabled scientific breakthroughs.
Let’s look more closely at how NVIDIA is supercharging supercomputers.
Accelerated Computing
The combined power of the GPU’s parallel processing capabilities and over 2,500 GPU-optimized applications allows users to speed up their HPC jobs, in many cases from weeks to hours.
We’re constantly optimizing the CUDA-X libraries and the GPU-accelerated applications, so it’s not unusual for users to see multi-fold performance gains on the same GPU architecture.
As a result, the performance of the most widely used scientific applications — which we call the “golden suite” — has improved 16x over the past six years, with more advances on the way.
And to help users quickly take advantage of higher performance, we offer the latest versions of the AI and HPC software through containers from the NGC catalog. Users simply pull and run the application on their supercomputer, in the data center or the cloud.
Convergence of HPC and AI
The infusion of AI in HPC helps researchers speed up their simulations while achieving the accuracy they’d get with the traditional simulation approach.
That’s why an increasing number of researchers are taking advantage of AI to speed up their discoveries.
That includes four of the finalists for this year’s Gordon Bell prize, the most prestigious award in supercomputing. Organizations are racing to build exascale AI computers to support this new model, which combines HPC and AI.
Graphs — a key data structure in modern data science — can now be projected into deep-neural network frameworks with Deep Graph Library, or DGL, a new Python package.
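The core operation graph frameworks like DGL provide is message passing: each node aggregates features from its neighbors, and those aggregates feed a neural network layer. Here is a toy NumPy version of one aggregation step — DGL wraps this pattern for deep-learning frameworks, so the code below is only an illustration of the concept, not the DGL API:

```python
import numpy as np

# Tiny directed graph: four nodes with 2-dim features, edges src -> dst.
src = np.array([0, 1, 2, 0])
dst = np.array([1, 2, 3, 3])
feat = np.arange(8, dtype=np.float64).reshape(4, 2)

# One message-passing step: each node sums the features of its in-neighbors.
agg = np.zeros_like(feat)
np.add.at(agg, dst, feat[src])   # unbuffered scatter-add over destination nodes
print(agg)
```

The scatter-add over edge lists is exactly the kind of irregular, memory-bound operation that benefits from GPU acceleration as graphs grow to millions of edges.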
NVIDIA Modulus builds and trains physics-informed machine learning models that can learn and obey the laws of physics.
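“Physics-informed” means the training loss penalizes violations of the governing equations, not just mismatch against data. A hand-rolled sketch of such a residual loss for the 1D Poisson equation u″ = f, using finite differences in NumPy (Modulus itself automates this with automatic differentiation; the problem and names below are illustrative):

```python
import numpy as np

def physics_loss(u, f, dx):
    """Mean squared residual of u'' = f, via a finite-difference Laplacian."""
    d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    return np.mean((d2u - f[1:-1]) ** 2)

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)     # forcing term
u_true = np.sin(np.pi * x)            # exact solution of u'' = f
u_bad = u_true + 0.1 * x * (1 - x)    # a candidate that violates the PDE

print(physics_loss(u_true, f, dx))    # near zero: obeys the physics
print(physics_loss(u_bad, f, dx))     # much larger: penalized by the loss
```

Minimizing this residual alongside any data-fitting terms is what steers the trained model toward solutions that obey the laws of physics.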
And NVIDIA introduced three new libraries:
ReOpt – to increase operational efficiency for the $10 trillion logistics industry.
cuQuantum – to accelerate quantum computing research.
cuNumeric – to accelerate NumPy for scientists, data scientists, and machine learning and AI researchers in the Python community.
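cuNumeric’s pitch is that existing NumPy code can run accelerated and distributed with essentially only the import changed. The snippet below uses plain NumPy so it runs anywhere; on a system with cuNumeric installed, swapping the import is the intended migration path:

```python
import numpy as np  # with cuNumeric installed: `import cunumeric as np`

# Unmodified NumPy-style array code; cuNumeric aims to execute these
# same lines across one or many GPUs.
a = np.linspace(0.0, 2.0 * np.pi, 1_000_000)
result = np.sum(np.sin(a) ** 2 + np.cos(a) ** 2) / a.size
print(result)  # ~1.0 by the Pythagorean identity
```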
Weaving it all together is NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows.
Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, of the 5G edge, robots, self-driving cars and even avatars.
Using Omniverse, NVIDIA announced last week that it will build a supercomputer, called Earth-2, devoted to predicting climate change by creating a digital twin of the planet.
Cloud-Native Supercomputing
As supercomputers take on more workloads across data analytics, AI, simulation and visualization, CPUs are stretched to support a growing number of communication tasks needed to operate large and complex systems.
As a fully integrated data-center-on-a-chip platform, NVIDIA BlueField DPUs can offload and manage data center infrastructure tasks instead of making the host processor do the work, enabling stronger security and more efficient orchestration of the supercomputer.
Combined with the NVIDIA Quantum InfiniBand platform, this architecture delivers optimal bare-metal performance while natively supporting multinode tenant isolation.
BlueField DPUs isolate applications from infrastructure. NVIDIA DOCA 1.2 — the latest BlueField software platform — enables next-generation distributed firewalls and wider use of line-rate data encryption. And NVIDIA Morpheus, which assumes an interloper is already inside the data center, uses deep learning-powered data science to detect intruder activity in real time.
And all of the trends outlined above will be accelerated by new networking technology.
NVIDIA Quantum-2, also announced last week, is a 400Gbps InfiniBand platform consisting of the Quantum-2 switch, the ConnectX-7 NIC and the BlueField-3 DPU, along with software for the new networking architecture.
NVIDIA Quantum-2 offers the benefits of bare-metal high performance and secure multi-tenancy, allowing the next generation of supercomputers to be secure, cloud-native and better utilized.
** Benchmark applications: Amber, Chroma, GROMACS, MILC, NAMD, PyTorch, Quantum Espresso, Random Forest FP32, TensorFlow, VASP | GPU node: dual-socket CPUs with 4x P100, V100 or A100 GPUs.