In Pursuit of Smart City Vision, Startup Two-i Keeps an AI on Worker Safety

When Julien Trombini and Guillaume Cazenave founded video-analytics startup Two-i four years ago, they had an ambitious goal: improving the quality of urban life by one day being able to monitor a city’s roads, garbage collection and other public services.

Along the way, the pair found a wholly different niche. Today, the company’s technology — which combines computer vision, data science and deep learning — is helping to prevent deadly accidents in the oil and gas industry, one of the world’s most dangerous sectors.

Initially, Trombini and Cazenave envisioned a system that would enable civic leaders to see what improvements were needed across a municipality.

“It would be like having a weather map of the city, only one that measures efficiency,” said Trombini, who serves as chairman of Two-i, an NVIDIA Metropolis partner based in Metz, a historic city in northeast France.

That proved a tall order, so the two refocused on specific facilities, such as stadiums, retirement homes and transit stations, where the company’s tech helps with security and incident detection. For instance, it can alert the right people when a retirement home resident falls in a corridor. Or when a transit rider using a wheelchair can’t get on a train because of a broken lift.

Two-i founders Julien Trombini (left) and Guillaume Cazenave.

More recently, the company was approached by ExxonMobil to help with a potentially deadly issue: improving worker safety around open oil tanks.

Together with the energy giant, Two-i has created an AI-enabled video analytics application that detects when individuals approach a danger zone and risk falling, and immediately alerts others so they can act quickly. In its initial months of operation, the vision AI system prevented two accidents.

While this use case is highly specific, the company’s AI architecture is designed to flexibly support many different algorithms and functions.

“The algorithms are exactly the same as what we’re using for different clients,” said Trombini. “It’s the same technology, but it’s packaged in a different way.”

Making the Most of Vision AI

Two-i’s flexibility stems from its reliance on the NVIDIA Metropolis platform for AI-enabled video analytics applications, leveraging advanced tools and taking a full-stack approach.

To do so, it relies on a variety of NVIDIA-Certified Systems, using the latest workstation and data center GPUs based on the high-performance NVIDIA Ampere architecture, for both training and inference. To shorten training times further, Two-i is looking to test its huge image dataset on the powerful NVIDIA A100 GPU.

The company looks to frequently upgrade its GPUs to ensure it’s offering customers the fastest possible solution, no matter how many cameras are feeding data into its system.

“The time we can save there is crucial, and the better the hardware, the more accurate the results and faster we get to market,” said Trombini.

Two-i taps the CUDA 11.1 toolkit and cuDNN 8.1 library to support its deep learning process, and NVIDIA TensorRT to accelerate inference throughput.
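
As a rough illustration of that last step, here is a minimal sketch of building a TensorRT engine from an ONNX model with the TensorRT Python API. The model file name, input format and FP16 setting are assumptions for the example, not details of Two-i’s production pipeline.

```python
# Minimal sketch: building a TensorRT engine from an ONNX model.
# "detector.onnx" and the FP16 choice are illustrative placeholders.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("detector.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # mixed precision for higher throughput

engine_bytes = builder.build_serialized_network(network, config)
with open("detector.engine", "wb") as f:
    f.write(engine_bytes)
```

Once serialized, an engine like this can be loaded at startup and reused for every incoming camera stream.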

Trombini says one of the most compelling pieces of NVIDIA tech is the NVIDIA TAO Toolkit, which helps the company keep costs down as it tinkers with its algorithms.

“The heavier the algorithm, the more expensive,” he said. “We use the TAO toolkit to prune algorithms and make them more tailored to the task.”

For example, training that initially took up to two weeks has been slashed to three days using the NVIDIA TAO Toolkit, a CLI- and Jupyter Notebook-based version of the NVIDIA train, adapt and optimize framework.
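
TAO performs structured channel pruning that actually shrinks the network, and its exact commands aren’t reproduced here. But the basic idea of pruning — removing low-importance weights to cut a model’s cost — can be sketched with PyTorch’s generic pruning utilities:

```python
# Conceptual illustration of pruning using PyTorch's built-in utilities.
# Note: this masks individual weights; TAO's pruning is structured
# (it removes whole channels) and uses TAO's own tooling.
import torch
import torch.nn.utils.prune as prune

conv = torch.nn.Conv2d(16, 32, kernel_size=3)

# Zero out the 50 percent of weights with the smallest L1 magnitude.
prune.l1_unstructured(conv, name="weight", amount=0.5)

sparsity = (conv.weight == 0).float().mean().item()
print(f"fraction of weights pruned: {sparsity:.2f}")  # ~0.50

# Make the pruning permanent by folding the mask into the weight tensor.
prune.remove(conv, "weight")
```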

Two-i has also started benchmarking NVIDIA’s pretrained models against its algorithms and begun using the NVIDIA DeepStream SDK to enhance its video analytics pipeline.
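
DeepStream pipelines are assembled from GStreamer elements that keep decoded frames and inference on the GPU. The sketch below is loosely modeled on the public DeepStream reference apps — the file name, stream resolution and detector config path are placeholders, not Two-i’s setup.

```python
# Minimal DeepStream-style pipeline sketch. Element names follow the public
# DeepStream reference apps; all file paths and sizes are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "filesrc location=camera.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=detector_config.txt ! "  # TensorRT-backed detector
    "nvvideoconvert ! nvdsosd ! fakesink"               # draw boxes, then discard
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```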

Two-i Video Analytics

Building on Success

Two-i sees its ability to solve complicated problems in a variety of settings, such as for ExxonMobil, as a springboard to swinging back around to its original smart city aspirations.

Already, it’s monitoring all roads in eight European cities, analyzing traffic flows and understanding where cars are coming from and going to.

Trombini recognizes that Two-i has to keep its focus on delivering one benefit after another to achieve the company’s long-term goals.

“It’s coming slowly,” he said, “but we are starting to implement our vision.”


NVIDIA CEO Receives Semiconductor Industry’s Top Honor

By the time the night was over, it felt like Jensen Huang had given everyone in the ballroom a good laugh and a few things to think about.

The annual dinner of the Semiconductor Industry Association — a group of companies that together employ a quarter-million workers in the U.S. and racked up over $200 billion in U.S. sales last year — attracted the governors of Indiana and Michigan and some 200 industry executives, including more than two dozen chief executives.

They came to network, get an update on the SIA’s work in Washington, D.C., and bestow the 2021 Robert N. Noyce award, their highest honor, on the founder and CEO of NVIDIA.

“Before we begin, I want to say it’s so nice to be back in person,” said John Neuffer, SIA president and CEO, to applause from a socially distanced audience.

The group heard comments on video from U.S. Senator Chuck Schumer, of New York, and U.S. Commerce Secretary Gina Raimondo about pending legislation supporting the industry.

Recognizing ‘an Icon’

Turning to the Noyce award, Neuffer introduced Huang as “an icon in our industry. From starting NVIDIA in a rented townhouse in Fremont, California, in 1993, he has become one of the industry’s longest-serving and most successful CEOs of what is today by market cap the world’s eighth most valuable company,” he said.

“I accept this on behalf of all NVIDIA’s employees because it reflects their body of work,” Huang said. “However, I’d like to keep this at my house,” he quipped.

Since 1991, the annual Noyce award has recognized tech and business leaders including Jack Kilby (1995), an inventor of the integrated circuit that paved the way for today’s chips.

Two of Huang’s mentors have won Noyce awards: Morris Chang, founder and former CEO of TSMC, the world’s first and largest chip foundry, in 2008; and John Hennessy, the Alphabet chairman and former Stanford president, in 2018. Huang, Hennessy’s former student, interviewed him on stage at the 2018 event.

Programming on an Apple II

In an on-stage interview with John Markoff, author and former senior technology writer for The New York Times, Huang shared some of his story and his observations on technology and the industry.

He recalled high school days programming on an Apple II computer, getting his first job as a microprocessor designer at AMD and starting NVIDIA with Chris Malachowsky and Curtis Priem.

“Chris and Curtis are the two brightest engineers I have met … and all of us loved building computers. Success has a lot to do with luck, and part of my luck was meeting them,” he said.

Making Million-x Leaps

Fast-forwarding to today, he shared his vision for accelerated computing with AI in projects like Earth-2, a supercomputer for climate science.

“We will build a digital twin of Earth and put some of the brightest computer scientists on the planet to work on it” to explore and mitigate impacts of climate change, he said. “We could solve some of the problems in climate science in our generation.”

He also expressed optimism about Silicon Valley’s culture of innovation.

“The concept of Silicon Valley doesn’t have to be geographic, we can carry this sensibility all over the world, but we have to be mindful of being humble and recognize we’re not here alone, so we need to be in service to others,” he said.

A Pivotal Role in AI

The Noyce award came two months after TIME Magazine named Huang one of the 100 most influential people of 2021. He was one of seven honored on the iconic weekly magazine’s cover along with U.S. President Joe Biden, Tesla CEO Elon Musk and singer Billie Eilish.

A who’s who of tech luminaries, including executives from Adobe, IBM and Zoom, shared stories of Huang and NVIDIA’s impact in a video, included below, screened at the event. In it, Andrew Ng, a machine-learning pioneer and entrepreneur, described the pivotal role NVIDIA’s CEO has played in AI.

“A lot of the progress in AI over the last decade would not have been possible if not for Jensen’s visionary leadership,” said Ng, founder and CEO of DeepLearning.AI and Landing AI. “His impact on the semiconductor industry, AI and the world is almost incalculable.”

Feature image credit: Nora Stratton/SFFoto


From Process to Product Design: How Rendermedia Elevates Manufacturing Workflows With XR Experiences

Manufacturers are bringing product designs to life in a newly immersive world.

Rendermedia, based in the U.K., specializes in immersive solutions for commerce and industries. The company provides clients with tools and applications for photorealistic virtual, augmented and extended reality (collectively known as XR) in areas like product design, training and collaboration.

With NVIDIA RTX graphics and NVIDIA CloudXR, Rendermedia helps businesses get their products in the hands of customers and audiences, allowing them to interact and engage collaboratively on any device, from any location.

Expanding XR Spaces With CloudXR

Previously, Rendermedia could only deliver realistic rendered products to customers through a CG rendered film, which was often time-consuming to create. It also didn’t allow consumers to interact dynamically with the product.

With NVIDIA CloudXR, Rendermedia and its product manufacturing clients can quickly render and create fully interactive simulated products in photographic detail, while also reducing their time to market.

This can be achieved by transforming raw product computer-aided design (CAD) into a realistic digital twin of the product. The digital twin can then be used across the entire organization, from sales and marketing to health and safety teams.

Rendermedia can also use CloudXR to offer organizations the ability to design, market, sell and train different teams and customers around their products in different languages worldwide.

“With both the range of 3D data evolving and devices enabling us to interact with products and environments in scale, this ultimately drives the demands around the complexity and sophistication across products and environments within an organization,” said Rendermedia founder Mark Miles.

Rendermedia customers Airbus and National Grid are using VR experiences to showcase future products and designs in realistic scenarios.

Airbus, which designs, manufactures and sells aerospace products worldwide, has worked with Rendermedia on over 35 virtual experiences. Recently, Rendermedia helped bring Airbus’ vision to life by creating VR experiences that allowed users to experience its newest products in complete context and at scale.

National Grid is an electricity and gas utility company headquartered in the U.K. With the help of Rendermedia, National Grid used photorealistic digital twins of real-life industrial sites for virtual training for employees.

The power of NVIDIA CloudXR and RTX technology allows product manufacturers to visualize designs and 3D models using Rendermedia’s platform with more realism. And they can easily make changes to designs in real time, helping users iterate more often and get to final product designs quicker. CloudXR is cost-efficient and provides common standards for training across every learner.

“CloudXR combined with RTX means that our customers can virtualize any part of their business and access it on any device at scale,” said Miles. “This is especially important in training, where the abundance of platforms and devices that people consume can vary widely. CloudXR means that any training content can be consumed at the same level of detail, so content does not have to be readapted for different devices.”

With NVIDIA CloudXR, Rendermedia can further push the boundaries of photorealistic graphics in immersive environments, all without worrying about delivering to different devices and audiences.

Learn more about NVIDIA CloudXR and how it can enhance workflows.

And catch up on a few NVIDIA GTC sessions to see how other companies are using CloudXR.


A GFN Thursday Deal: Get ‘Crysis Remastered’ Free With Any Six-Month GeForce NOW Membership

You’ve reached your weekly gaming checkpoint. Welcome to a positively packed GFN Thursday.

This week delivers a sweet deal for gamers ready to upgrade their PC gaming from the cloud. With any new, paid six-month Priority or GeForce NOW RTX 3080 subscription, members will receive Crysis Remastered for free for a limited time.

Gamers and music lovers alike can get hyped for an awesome entertainment experience playing Core this week and visiting the digital world of Oberhasli. There, they’ll enjoy the anticipated deadmau5 “Encore” concert and event.

And what kind of GFN Thursday would it be without new games? We’ve got eight new titles joining the GeForce NOW library this week.

GeForce NOW Can Run Crysis … And So Can You

Crysis Remastered with RTX ON on GeForce NOW
When your reflection looks this good, you can’t help but stop and admire it. We won’t judge.

But can it run Crysis? GeForce NOW sure can.

For a limited time, get a copy of Crysis Remastered free with select GeForce NOW memberships. Purchase a six-month Priority membership, or the new GeForce NOW RTX 3080 membership, and get a free redeemable code for Crysis Remastered on the Epic Games Store.

Current monthly Founders and Priority members are eligible by upgrading to a six-month membership. Founders, exclusively, can upgrade to a GeForce NOW RTX 3080 membership and receive 10 percent off the subscription price with no risk to their current Founders benefits. They can revert to their original Founders plan and retain “Founders for Life” pricing, as long as they remain in consistent good standing on any paid membership plan.

This special bonus also applies to existing GeForce NOW RTX 3080 members and preorders, as a thank you for being among the first to upgrade to the next generation in cloud gaming. Current members on the GeForce NOW RTX 3080 plan will receive game codes in the days ahead, while members who have preordered but haven’t yet been activated will receive their game code when their GeForce NOW RTX 3080 service is activated. Please note, terms and conditions apply.

Stream Crytek’s classic first-person shooter, remastered with graphics optimized for a new generation of hardware and complete with stunning support for RTX ON and DLSS. GeForce NOW members can experience the first game in the Crysis series — or 1,000+ more games — across nearly all of their devices, turning even a Mac or a mobile device into the ultimate gaming rig.

The mission starts here.

Experience the deadmau5 Encore in Core

This GFN Thursday brings Core and deadmau5 to the cloud. From shooters, survival and action adventure to MMORPGs, platformers and party games, Core is a multiverse of exciting gaming entertainment with over 40,000 free-to-play, Unreal-powered games and worlds.

This week, members can visit the fully immersive digital world of Oberhasli — designed with the vision of the legendary producer, musician and DJ, deadmau5 — and enjoy an epic “Encore” concert and event. Catch the deadmau5 performance, with six showings running from Friday, Nov. 19, to Saturday, Nov. 20. The concert becomes available every hour, on the hour, the following week.

Tomorrow, come to the world of Oberhasli, designed by deadmau5, and experience the ‘Encore’ concert in Core.

The fun continues with three games inspired by deadmau5’s music — Hop ‘Til You Drop, Mau5 Hau5 and Ballon Royale — set throughout 19 dystopian worlds featured in the official When The Summer Dies music video. Party on with exclusive deadmau5 skins, emotes and mounts, and interact with other fans while streaming the exclusive, interactive and must-experience deadmau5 performance celebrating the launch of Core on GeForce NOW with the “Encore” concert this week.

A New Challenge Calls

Icarus Beta on GeForce NOW
Explore a savage alien wilderness in the aftermath of terraforming gone wrong — even on a low-powered laptop.

It wouldn’t be GFN Thursday without a new set of games coming to the cloud. Get ready to grind the newest titles joining the GeForce NOW library this week:

  • Combat Mission Cold War (New release on Steam, Nov. 16)
  • The Last Stand: Aftermath (New release on Steam, Nov. 16)
  • Myth of Empires (New release on Steam, Nov. 18)
  • Icarus (Beta weekend on Steam, Nov. 19)
  • Assassin’s Creed: Syndicate Gold Edition (Ubisoft Connect)
  • Core (Epic Games Store)
  • Lost Words: Beyond the Page (Steam)
  • World of Tanks (Steam)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Update on ‘Bright Memory: Infinite’

Bright Memory: Infinite was added last week, but during onboarding it was discovered that enabling RTX in the game requires an upcoming operating system upgrade to GeForce NOW servers. We expect the update to be complete in December and will provide more information here when it happens.

GeForce NOW Coming to LG Smart TVs

We’re working with LG Electronics to add support for GeForce NOW to LG TVs, starting with a beta release of the app in the LG Content Store for select 2021 LG OLED, QNED MiniLED and NanoCell models. If you have one of the supported TVs, check it out and share feedback to help us improve the experience.

And finally, here’s our question for the week:

let’s settle this once and for all:

who’s the tougher enemy?

👽 aliens or zombies 🧟‍♂️

🌩 NVIDIA GeForce NOW (@NVIDIAGFN) November 17, 2021

Let us know on Twitter or in the comments below.


MLPerf HPC Benchmarks Show the Power of HPC+AI 

NVIDIA-powered systems won four of five tests in MLPerf HPC 1.0, an industry benchmark for AI performance on scientific applications in high performance computing.

They’re the latest results from MLPerf, a set of industry benchmarks for deep learning first released in May 2018. MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI.

Recent advances in molecular dynamics, astronomy and climate simulation all used HPC+AI to make scientific breakthroughs. It’s a trend driving the adoption of exascale AI for users in both science and industry.

What the Benchmarks Measure

MLPerf HPC 1.0 measured training of AI models in three typical workloads for HPC centers.

  • CosmoFlow estimates details of objects in images from telescopes.
  • DeepCAM tests detection of hurricanes and atmospheric rivers in climate data.
  • OpenCatalyst tracks how well systems predict forces among atoms in molecules.

Each test has two parts. A measure of how fast a system trains a model is called strong scaling. Its counterpart, weak scaling, is a measure of maximum system throughput, that is, how many models a system can train in a given time.
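
As a hypothetical illustration of the difference, imagine a machine that trains one model in 30 minutes when most of the system works on it, and can alternatively run 256 smaller training jobs at once, each taking two hours:

```python
# Illustrative numbers only -- not actual MLPerf HPC results.
# Strong scaling: wall-clock time to train a single model end to end.
strong_scaling_minutes = 30.0          # one model, many GPUs working together

# Weak scaling: total throughput of the whole machine.
concurrent_jobs = 256                  # models trained simultaneously
minutes_per_job = 120.0                # each smaller job's training time
models_per_hour = concurrent_jobs * 60.0 / minutes_per_job

print(f"Strong scaling: {strong_scaling_minutes:.0f} min per model")
print(f"Weak scaling:   {models_per_hour:.0f} models trained per hour")
```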

Compared to the best results in strong scaling from last year’s MLPerf 0.7 round, NVIDIA delivered 5x better results for CosmoFlow. In DeepCAM, we delivered nearly 7x more performance.

The Perlmutter Phase 1 system at Lawrence Berkeley National Lab led in strong scaling in the OpenCatalyst benchmark using 512 of its 6,144 NVIDIA A100 Tensor Core GPUs.

In the weak-scaling category, we led DeepCAM using 16 nodes per job and 256 simultaneous jobs. All our tests ran on NVIDIA Selene (pictured above), our in-house system and the world’s largest industrial supercomputer.

NVIDIA wins MLPerf HPC, Nov 2021
NVIDIA delivered leadership results in both the speed of training a model and per-chip efficiency.

The latest results demonstrate another dimension of the NVIDIA AI platform and its performance leadership. They mark the eighth straight time NVIDIA has delivered top scores in MLPerf benchmarks spanning AI training and inference in the data center, the cloud and at the network’s edge.

A Broad Ecosystem

Seven of the eight participants in this round submitted results using NVIDIA GPUs.

They include the Jülich Supercomputing Centre in Germany, the Swiss National Supercomputing Centre and, in the U.S., the Argonne and Lawrence Berkeley National Laboratories, the National Center for Supercomputing Applications and the Texas Advanced Computing Center.

“With the benchmark test, we have shown that our machine can unfold its potential in practice and contribute to keeping Europe on the ball when it comes to AI,” said Thomas Lippert, director of the Jülich Supercomputing Centre, in a blog.

The MLPerf benchmarks are backed by MLCommons, an industry group led by Alibaba, Google, Intel, Meta, NVIDIA and others.

How We Did It

The strong showing is the result of a mature NVIDIA AI platform that includes a full stack of software.

In this round, we tuned our code with tools available to everyone, such as NVIDIA DALI to accelerate data processing and CUDA Graphs to reduce small-batch latency for efficiently scaling up to 1,024 or more GPUs.
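
CUDA Graphs cut that small-batch latency by capturing a whole training step once and replaying it, so launch overhead is paid only once. The PyTorch sketch below shows the general capture-and-replay pattern for a toy model with static shapes; it is illustrative only, and the actual submission code lives in the MLPerf repository.

```python
# Generic CUDA Graphs capture/replay pattern in PyTorch (assumes static shapes).
# An illustrative sketch, not the MLPerf HPC submission code.
import torch

model = torch.nn.Linear(1024, 1024).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
static_x = torch.randn(64, 1024, device="cuda")
static_y = torch.randn(64, 1024, device="cuda")

# Warm up on a side stream before capture, as the PyTorch docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=True)
        loss = torch.nn.functional.mse_loss(model(static_x), static_y)
        loss.backward()
        opt.step()
torch.cuda.current_stream().wait_stream(s)

# Capture one training step into a graph, then replay it with new data copied
# into the static input tensors -- kernel launch overhead is paid only once.
graph = torch.cuda.CUDAGraph()
opt.zero_grad(set_to_none=True)
with torch.cuda.graph(graph):
    static_loss = torch.nn.functional.mse_loss(model(static_x), static_y)
    static_loss.backward()
    opt.step()

for step in range(10):
    static_x.copy_(torch.randn(64, 1024, device="cuda"))
    static_y.copy_(torch.randn(64, 1024, device="cuda"))
    graph.replay()
```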

We also applied NVIDIA SHARP, a key component within NVIDIA MagnumIO. It provides in-network computing to accelerate communications and offload data operations to the NVIDIA Quantum InfiniBand switch.

For a deeper dive into how we used these tools, see our developer blog.

All the software we used for our submissions is available from the MLPerf repository. We regularly add such code to the NGC catalog, our software hub for pretrained AI models, industry application frameworks, GPU applications and other software resources.


A Revolution in the Making: How AI and Science Can Mitigate Climate Change

A partial differential equation is “the most powerful tool humanity has ever created,” Cornell University mathematician Steven Strogatz wrote in a 2009 New York Times opinion piece.

This quote opened last week’s GTC talk AI4Science: The Convergence of AI and Scientific Computing, presented by Anima Anandkumar, director of machine learning research at NVIDIA and professor of computing at the California Institute of Technology.

Anandkumar explained that partial differential equations are the foundation for most scientific simulations. And she showcased how this historic tool is now being made all the more powerful with AI.

“The convergence of AI and scientific computing is a revolution in the making,” she said.

Using new neural operator-based frameworks to learn and solve partial differential equations, AI can help us model weather forecasting 100,000x quicker — and carbon dioxide sequestration 60,000x quicker — than traditional models.

Speeding Up the Calculations

Anandkumar and her team developed the Fourier Neural Operator (FNO), a framework that allows AI to learn and solve an entire family of partial differential equations, rather than a single instance.

It’s the first machine learning method to successfully model turbulent flows with zero-shot super-resolution — meaning FNOs enable AI to make high-resolution inferences without the high-resolution training data that standard neural networks would require.

FNO-based machine learning greatly reduces the costs of obtaining information for AI models, improves their accuracy and speeds up inference by three orders of magnitude compared with traditional methods.
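
At the heart of an FNO is a spectral convolution: transform the input to Fourier space, apply learned complex weights to a truncated set of low-frequency modes, and transform back. The PyTorch sketch below shows that idea in 1D; it is a simplified illustration of the published FNO design, not the research code.

```python
# Simplified 1D spectral convolution layer in the spirit of the Fourier
# Neural Operator; an illustration, not the authors' reference implementation.
import torch


class SpectralConv1d(torch.nn.Module):
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes kept
        scale = 1.0 / (in_channels * out_channels)
        self.weights = torch.nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                       # x: (batch, in_channels, grid)
        x_ft = torch.fft.rfft(x)                # to Fourier space
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        # Multiply only the lowest `modes` frequencies by learned weights.
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


# Because the weights live in Fourier space, the layer can be evaluated on a
# finer grid than it was trained on -- the basis of zero-shot super-resolution.
layer = SpectralConv1d(in_channels=1, out_channels=1, modes=16)
y = layer(torch.randn(8, 1, 256))
print(y.shape)  # torch.Size([8, 1, 256])
```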

Mitigating Climate Change

FNOs can be applied to make real-world impact in countless ways.

For one, they offer a 100,000x speedup over numerical methods and unprecedented fine-scale resolution for weather prediction models. By accurately simulating and predicting extreme weather events, the AI models can enable planning that mitigates the effects of such disasters.

The FNO model, for example, was able to accurately predict the trajectory and magnitude of Hurricane Matthew from 2016.

In the video below, the red line represents the observed track of the hurricane. The white cones show the National Oceanic and Atmospheric Administration’s hurricane forecasts based on traditional models. The purple contours mark the FNO-based AI forecasts.

As shown, the FNO model follows the trajectory of the hurricane with improved accuracy compared with the traditional method — and the high-resolution simulation of this weather event took just a quarter of a second to process on NVIDIA GPUs.

In addition, Anandkumar’s talk covered how FNO-based AI can be used to model carbon dioxide sequestration — capturing carbon dioxide from the atmosphere and storing it underground, which scientists have said can help mitigate climate change.

Researchers can model and study how carbon dioxide would interact with materials underground using FNOs 60,000x faster than with traditional methods.

Anandkumar said the FNO model is also a significant step toward building a digital twin of Earth.

The new NVIDIA Modulus framework for training physics-informed machine learning models and NVIDIA Quantum-2 InfiniBand networking platform equip researchers and developers with the tools to combine the powers of AI, physics and supercomputing — to help solve the world’s toughest problems.

“I strongly believe this is the future of science,” Anandkumar said.

She’ll delve into these topics further in an SC21 plenary talk, taking place on Nov. 18 at 10:30 a.m. Central time.

Watch her full GTC session on demand here.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote below.


World’s Fastest Supercomputers Changing Fast

Modern computing workloads — including scientific simulations, visualization, data analytics, and machine learning — are pushing supercomputing centers, cloud providers and enterprises to rethink their computing architecture.

No processor, network or software optimization alone can address the latest needs of researchers, engineers and data scientists. Instead, the data center is the new unit of computing, and organizations have to look at the full technology stack.

The latest rankings of the world’s most powerful systems show continued momentum for this full-stack approach in the latest generation of supercomputers.

NVIDIA technologies accelerate over 70 percent, or 355, of the systems on the TOP500 list released at the SC21 high performance computing conference this week, including over 90 percent of all new systems. That’s up from 342 systems, or 68 percent, of the machines on the TOP500 list released in June.

NVIDIA also continues to have a strong presence on the Green500 list of the most energy-efficient systems, powering 23 of the top 25 systems on the list, unchanged from June. On average, NVIDIA GPU-powered systems deliver 3.5x higher power efficiency than non-GPU systems on the list.

Highlighting the emergence of a new generation of cloud-native systems, Microsoft’s GPU-accelerated Azure supercomputer ranked 10th on the list, the first top 10 showing for a cloud-based system.

AI is revolutionizing scientific computing. The number of research papers leveraging HPC and machine learning has skyrocketed in recent years, growing from roughly 600 ML+HPC papers submitted in 2018 to nearly 5,000 in 2020.

The ongoing convergence of HPC and AI workloads is also underscored by new benchmarks such as HPL-AI and MLPerf HPC.

HPL-AI is an emerging benchmark of converged HPC and AI workloads that uses mixed-precision math — the basis of deep learning and many scientific and commercial jobs — while still delivering the full accuracy of double-precision math, which is the standard measuring stick for traditional HPC benchmarks.
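
The general recipe behind such mixed-precision solvers is iterative refinement: do the expensive solve in low precision, then correct it with residuals computed in double precision. Here is a small NumPy sketch of that technique — an illustration of the principle, not the HPL-AI benchmark code:

```python
# Iterative refinement sketch: a low-precision solve corrected with
# double-precision residuals. Illustrates the idea behind mixed-precision
# benchmarks such as HPL-AI; not the benchmark implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

# "Fast" solve in single precision. (A real solver would keep the low-precision
# LU factors and reuse them; re-solving here keeps the sketch short.)
A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)

# Refine in double precision until the residual is tiny.
for it in range(10):
    r = b - A @ x                                  # residual in float64
    if np.linalg.norm(r) / np.linalg.norm(b) < 1e-14:
        break
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```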

And MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI, with the benchmark measuring performance on three key workloads for HPC centers: astrophysics (CosmoFlow), weather (DeepCAM) and molecular dynamics (OpenCatalyst).

NVIDIA addresses the full stack with GPU-accelerated processing, smart networking, GPU-optimized applications, and libraries that support the convergence of AI and HPC. This approach has supercharged workloads and enabled scientific breakthroughs.

Let’s look more closely at how NVIDIA is supercharging supercomputers.

Accelerated Computing

The combined power of the GPU’s parallel processing capabilities and over 2,500 GPU-optimized applications allows users to speed up their HPC jobs, in many cases from weeks to hours.

We’re constantly optimizing the CUDA-X libraries and the GPU-accelerated applications, so it’s not unusual for users to see an x-factor performance gain on the same GPU architecture.

As a result, the performance of the most widely used scientific applications — which we call the “golden suite” — has improved 16x over the past six years, with more advances on the way.

16x performance on top HPC, AI and ML apps from full-stack innovation.**

And to help users quickly take advantage of higher performance, we offer the latest versions of the AI and HPC software through containers from the NGC catalog. Users simply pull and run the application on their supercomputer, in the data center or the cloud.

Convergence of HPC and AI 

The infusion of AI in HPC helps researchers speed up their simulations while achieving the accuracy they’d get with the traditional simulation approach.

That’s why an increasing number of researchers are taking advantage of AI to speed up their discoveries.

That includes four of the finalists for this year’s Gordon Bell prize, the most prestigious award in supercomputing. Organizations are racing to build exascale AI computers to support this new model, which combines HPC and AI.

That strength is underscored by relatively new benchmarks, such as HPL-AI and MLPerf HPC, highlighting the ongoing convergence of HPC and AI workloads.

To fuel this trend, last week NVIDIA announced a broad range of advanced new libraries and software development kits for HPC.

Graphs — a key data structure in modern data science — can now be projected into deep-neural network frameworks with Deep Graph Library, or DGL, a new Python package.
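
As a toy, hypothetical example of what that projection looks like, DGL builds a graph from edge lists and applies neural network layers directly to it:

```python
# Toy example of projecting a graph into a deep learning framework with DGL.
# The graph, feature sizes and layer choice are illustrative.
import dgl
import torch
from dgl.nn import GraphConv

# A tiny directed graph with edges 0->1, 1->2, 2->0, 0->2.
g = dgl.graph((torch.tensor([0, 1, 2, 0]), torch.tensor([1, 2, 0, 2])))
g = dgl.add_self_loop(g)                 # GraphConv expects no 0-in-degree nodes

feats = torch.randn(g.num_nodes(), 8)    # 8-dimensional node features
conv = GraphConv(8, 4)                   # learnable graph convolution layer
h = conv(g, feats)
print(h.shape)                           # torch.Size([3, 4])
```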

NVIDIA Modulus builds and trains physics-informed machine learning models that can learn and obey the laws of physics.

And NVIDIA introduced three new libraries:

  • ReOpt – to increase operational efficiency for the $10 trillion logistics industry.
  • cuQuantum – to accelerate quantum computing research.
  • cuNumeric – to accelerate NumPy for scientists, data scientists, and machine learning and AI researchers in the Python community.

Weaving it all together is NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows.

Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, of the 5G edge, robots, self-driving cars and even avatars.

Using Omniverse, NVIDIA announced last week that it will build a supercomputer, called Earth-2, devoted to predicting climate change by creating a digital twin of the planet.

Cloud-Native Supercomputing

As supercomputers take on more workloads across data analytics, AI, simulation and visualization, CPUs are stretched to support a growing number of communication tasks needed to operate large and complex systems.

Data processing units alleviate this stress by offloading some of these processes.

As a fully integrated data-center-on-a-chip platform, NVIDIA BlueField DPUs can offload and manage data center infrastructure tasks instead of making the host processor do the work, enabling stronger security and more efficient orchestration of the supercomputer.

Combined with the NVIDIA Quantum InfiniBand platform, this architecture delivers optimal bare-metal performance while natively supporting multinode tenant isolation.

NVIDIA’s Quantum InfiniBand platform provides predictive, bare-metal performance isolation.

Thanks to a zero-trust approach, these new systems are also more secure.

BlueField DPUs isolate applications from infrastructure. NVIDIA DOCA 1.2 — the latest BlueField software platform — enables next-generation distributed firewalls and wider use of line-rate data encryption. And NVIDIA Morpheus, assuming an interloper is already inside the data center, uses deep learning-powered data science to detect intruder activities in real time.

And all of the trends outlined above will be accelerated by new networking technology.

NVIDIA Quantum-2, also announced last week, is a 400Gbps InfiniBand platform and consists of the Quantum-2 switch, the ConnectX-7 NIC, the BlueField-3 DPU, as well as new software for the new networking architecture.

NVIDIA Quantum-2 offers the benefits of bare-metal high performance and secure multi-tenancy, allowing the next generation of supercomputers to be secure, cloud-native and better utilized.

 

** Benchmark applications: Amber, Chroma, GROMACS, MILC, NAMD, PyTorch, Quantum Espresso; Random Forest FP32, TensorFlow, VASP | GPU node: dual-socket CPUs with 4x P100, V100 or A100 GPUs.


Siemens Energy Taps NVIDIA to Develop Industrial Digital Twin of Power Plant in Omniverse

Siemens Energy, a leading supplier of power plant technology in the trillion-dollar worldwide energy market, is relying on the NVIDIA Omniverse platform to create digital twins to support predictive maintenance of power plants.

In doing so, Siemens Energy joins a wave of companies across various industries that are using digital twins to enhance their operations. Among them, BMW Group, which has 31 factories around the world, is building multiple industrial digital twins of its operations; and Ericsson is adopting Omniverse to build digital twins of urban areas to help determine how to construct 5G networks.

Indeed, the worldwide market for digital twin platforms is forecast to reach $86 billion by 2028, according to Grand View Research.

“NVIDIA’s open platforms along with physics-infused neural networks bring great value to Siemens Energy,” said Stefan Lichtenberger, technical portfolio manager at Siemens Energy.

Siemens Energy builds and services combined cycle power plants, which include large gas turbines and steam turbines. Heat recovery steam generators (HRSGs) use the exhaust heat from the gas turbine to create steam used to drive the steam turbine. This improves the thermodynamic efficiency of the power plant by more than 60 percent, according to Siemens Energy.

At some sections of an HRSG, a steam and water mixture can cause corrosion that might impact the lifetime of the HRSG’s parts. Downtime for maintenance and repairs leads to lost revenue opportunities for utility companies.

Siemens Energy estimates that a 10 percent reduction in the industry’s average planned downtime of 5.5 days for HRSGs — required, among other things, to check pipe wall thickness loss due to corrosion — would save $1.7 billion a year.

Simulations for Industrial Applications

Siemens Energy is enlisting NVIDIA technology to develop a new workflow to reduce the frequency of planned shutdowns while maintaining safety. Real-time data — water inlet temperature, pressure, pH, gas turbine power and temperature — is preprocessed to compute pressure, temperature and velocity of both water and steam. The pressure, temperature and velocity are fed into a physics-ML model created with the NVIDIA Modulus framework to simulate precisely how steam and water flow through the pipes in real time.
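
Physics-informed models of this kind are trained on the governing equations as well as on data. As a stand-in for the far richer flow physics in the Modulus model, the sketch below trains a small network to satisfy a toy differential equation in plain PyTorch; it is a generic physics-informed-learning illustration, not Siemens Energy’s or Modulus’s actual code.

```python
# Generic physics-informed neural network sketch (not Modulus or Siemens
# Energy code): fit u(x) so that du/dx + u = 0 and u(0) = 1, i.e. u = exp(-x).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)           # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]

    physics_loss = ((du_dx + u) ** 2).mean()              # residual of du/dx + u = 0
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1

    loss = physics_loss + boundary_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1) ~= 0.3679
```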

The flow conditions in the pipes are then visualized with NVIDIA Omniverse, a virtual world simulation and collaboration platform for 3D workflows. Omniverse scales across multi-GPUs to help Siemens Energy understand and predict the aggregated effects of corrosion in real time.

Accelerating Digital Twin Development

Using NVIDIA software frameworks, running on NVIDIA A100 Tensor Core GPUs, Siemens Energy is simulating the corrosive effects of heat, water and other conditions on metal over time to fine-tune maintenance needs. Predicting maintenance more accurately with machine learning models can help reduce the frequency of maintenance checks without running the risk of failure. The scaled Modulus PINN model was run on AWS Elastic Kubernetes Service (EKS) backed by P4d EC2 instances with A100 GPUs.

Building a computational fluid dynamics model to estimate corrosion within the pipes of a single HRSG takes as long as eight weeks, and the process is required for a portfolio of more than 600 units. A faster workflow using NVIDIA technologies can enable Siemens Energy to accelerate corrosion estimation from weeks to hours.

NVIDIA Omniverse provides a highly scalable platform that lets Siemens Energy replicate and deploy digital twins worldwide, accessing potentially thousands of NVIDIA GPUs as needed.

“NVIDIA’s work as the pioneer in accelerated computing, AI software platforms and simulation offers the scale and flexibility needed for industrial digital twins at Siemens Energy,” said Lichtenberger.

Learn more about Omniverse for virtual simulations and digital twins.


Gordon Bell Finalists Fight COVID, Advance Science With NVIDIA Technologies

Two simulations of a billion atoms, two fresh insights into how the SARS-CoV-2 virus works, and a new AI model to speed drug discovery.

Those are results from finalists for Gordon Bell awards, considered a Nobel prize in high performance computing. They used AI, accelerated computing or both to advance science with NVIDIA’s technologies.

A finalist for the special prize for COVID-19 research used AI to link multiple simulations, showing at a new level of clarity how the virus replicates inside a host.

The research — led by Arvind Ramanathan, a computational biologist at the Argonne National Laboratory — provides a way to improve the resolution of traditional tools used to explore protein structures. That could provide fresh insights into ways to arrest the spread of a virus.

The team, drawn from a dozen organizations in the U.S. and the U.K., designed a workflow that ran across systems including Perlmutter, an NVIDIA A100-powered system, built by Hewlett Packard Enterprise, and Argonne’s NVIDIA DGX A100 systems.

“The capability to perform multisite data analysis and simulations for integrative biology will be invaluable for making use of large experimental data that are difficult to transfer,” the paper said.

As part of its work, the team developed a technique to speed molecular dynamics research using the popular NAMD program on GPUs. They also leveraged NVIDIA NVLink to speed data “far beyond what is currently possible with a conventional HPC network interconnect, or … PCIe transfers.”

A Billion Atoms in High Fidelity

Ivan Oleynik, a professor of physics at the University of South Florida, led a team named a finalist for the standard Gordon Bell award for producing the first highly accurate simulation of a billion atoms. The simulation broke a record set by last year’s Gordon Bell winner by 23x.

“It’s a joy to uncover phenomena never seen before, it’s a really big achievement we’re proud of,” said Oleynik.

The simulation of carbon atoms under extreme temperature and pressure could open doors to new energy sources and help describe the makeup of distant planets. It’s especially stunning because the simulation has quantum-level accuracy, faithfully reflecting the forces among the atoms.

“It’s accuracy we could only achieve by applying machine learning techniques on a powerful GPU supercomputer — AI is creating a revolution in how science is done,” said Oleynik.

The team exercised 4,608 IBM Power AC922 servers and 27,900 NVIDIA GPUs on the U.S. Department of Energy’s Summit supercomputer, built by IBM and one of the world’s most powerful systems. The run demonstrated that their code could scale with almost 100 percent efficiency to simulations of 20 billion atoms or more.

That code is available to any researcher who wants to push the boundaries of materials science.

Inside a Deadly Droplet

In another billion-atom simulation, a second finalist for the COVID-19 prize showed the Delta variant in an airborne droplet (below). It reveals biological forces that spread COVID and other diseases, providing a first atomic-level look at aerosols.

The work has “far reaching … implications for viral binding in the deep lung, and for the study of other airborne pathogens,” according to the paper from a team led by last year’s winner of the special prize, researcher Rommie Amaro from the University of California San Diego.

Gordon Bell finalist COVID droplet simulation
The team led by Amaro simulated the Delta SARS-CoV-2 virus in a respiratory droplet with more than a billion atoms.

“We demonstrate how AI coupled to HPC at multiple levels can result in significantly improved effective performance, enabling new ways to understand and interrogate complex biological systems,” Amaro said.

Researchers used NVIDIA GPUs on Summit, the Longhorn supercomputer built by Dell Technologies for the Texas Advanced Computing Center and commercial systems in Oracle Cloud Infrastructure (OCI).

“HPC and cloud resources can be used to significantly drive down time-to-solution for major scientific efforts as well as connect researchers and greatly enable complex collaborative interactions,” the team concluded.

The Language of Drug Discovery

Finalists for the COVID prize at Oak Ridge National Laboratory (ORNL) applied natural language processing (NLP) to the problem of screening chemical compounds for new drugs.

They used a dataset containing 9.6 billion molecules — the largest dataset applied to this task to date — to train in two hours a BERT NLP model that can speed discovery of new drugs. Previous best efforts took four days to train a model using a dataset with 1.1 billion molecules.

The work exercised more than 24,000 NVIDIA GPUs on the Summit supercomputer to deliver a whopping 603 petaflops. Now that the training is done, the model can run on a single GPU to help researchers find chemical compounds that could inhibit COVID and other diseases.

“We have collaborators here who want to apply the model to cancer signaling pathways,” said Jens Glaser, a computational scientist at ORNL.

“We’re just scratching the surface of training data sizes — we hope to use a trillion molecules soon,” said Andrew Blanchard, a research scientist who led the team.

Relying on a Full-Stack Solution

NVIDIA software libraries for AI and accelerated computing helped the team complete its work in what one observer called a surprisingly short time.

“We didn’t need to fully optimize our work for the GPU’s tensor cores because you don’t need specialized code, you can just use the standard stack,” said Glaser.

He summed up what many finalists felt: “Having a chance to be part of meaningful research with potential impact on people’s lives is something that’s very satisfying for a scientist.”

Tune in to our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.


Universities Expand Research Horizons with NVIDIA Systems, Networks

Just as the Dallas/Fort Worth airport became a hub for travelers crisscrossing America, the north Texas region will be a gateway to AI if folks at Southern Methodist University have their way.

SMU is installing an NVIDIA DGX SuperPOD, an accelerated supercomputer it expects will power projects in machine learning for its sprawling metro community with more than 12,000 students and 2,400 faculty and staff.

It’s one of three universities in the south-central U.S. announcing plans to use NVIDIA technologies to shift research into high gear.

Texas A&M and Mississippi State University are adopting NVIDIA Quantum-2, our 400 Gbit/second InfiniBand networking platform, as the backbone for their latest high-performance computers. In addition, a supercomputer in the U.K. has upgraded its InfiniBand network.

Texas Lassos a SuperPOD

“We’re the second university in America to get a DGX SuperPOD and that will put this community ahead in AI capabilities to fuel our degree programs and corporate partnerships,” said Michael Hites, chief information officer of SMU, referring to a system installed earlier this year at the University of Florida.

A September report called the Dallas area “hobbled” by a lack of major AI research. Ironically, the story hit the local newspaper just as SMU was buttoning up its plans for its DGX SuperPOD.

Previewing its initiative, an SMU report in March said AI is “at the heart of digital transformation … and no sector of society will remain untouched” by the technology. “The potential for dramatic improvements in K-12 education and workforce development is enormous and will contribute to the sustained economic growth of the region,” it added.

SMU Ignite, a $1.5 billion fundraiser kicked off in September, will fuel the AI initiative, helping propel Southern Methodist into the top ranks of university research nationally. The university is hiring a chief innovation officer to help guide the effort.

Crafting a Computational Crucible

It’s all about the people, says Jason Warner, who manages the IT teams that support SMU’s researchers. So, he hired a seminal group of data science specialists to staff a new center at SMU’s Ford Hall for Research and Innovation, a hub Warner calls SMU’s “computational crucible.”

Eric Godat leads that team. He earned his Ph.D. in particle physics at SMU modeling nuclear structure using data from the Large Hadron Collider.

Now he’s helping fire up SMU’s students about opportunities on the DGX SuperPOD. As a first step, he asked two SMU students to build a miniature model of a DGX SuperPOD using NVIDIA Jetson modules.

“We wanted to give people — especially those in nontechnical fields who haven’t done AI — a sense of what’s coming,” Godat said.

SMU's Jetson SuperPOD
SMU undergrad Connor Ozenne helped build a miniature DGX SuperPOD that was featured in SMU’s annual report. It uses 16 Jetson modules in a cluster students will benchmark as if it were a TOP500 system.

The full-sized supercomputer, made up of 20 NVIDIA DGX A100 systems on an NVIDIA Quantum InfiniBand network, could be up and running as early as January thanks to its Lego-like, modular architecture. It will deliver a whopping 100 petaflops of computing power, enough to give it a respectable slot on the TOP500 list of the world’s fastest supercomputers.

Aggies Tap NVIDIA Quantum-2 InfiniBand for ACES

About 200 miles south, the high performance computing center at Texas A&M will be among the first to plug into the NVIDIA Quantum-2 InfiniBand platform. Its ACES supercomputer, built by Dell Technologies, will use the 400G InfiniBand network to connect researchers to a mix of five accelerators from four vendors.

NVIDIA Quantum-2 ensures “that a single job on ACES can scale up using all the computing cores and accelerators. Besides the obvious 2x jump in throughput from NVIDIA Quantum-1 InfiniBand at 200G, it will provide improved total cost of ownership, beefed-up in-network computing features and increased scaling,” said Honggao Liu, ACES’s principal investigator and project director.

Texas A&M already gives researchers access to accelerated computing in four systems that include more than 600 NVIDIA A100 Tensor Core and prior-generation GPUs. Two of the four systems use an earlier version of NVIDIA’s InfiniBand technology.

MSU Rides a 400G Train

Mississippi State University will also tap the NVIDIA Quantum-2 InfiniBand platform. It’s the network of choice for a new system that supplements Orion, the largest of four clusters MSU manages, all using earlier versions of InfiniBand.

Both Orion and the new system are funded by the U.S. National Oceanic and Atmospheric Administration (NOAA) and built by Dell. They conduct work for NOAA’s missions as well as research for MSU.

Orion was listed as the fourth largest academic supercomputer in America when it debuted on the TOP500 list in June 2019.

“We’re using InfiniBand in four generations of supercomputers here at MSU so we know it’s both powerful and mature to run our big jobs reliably,” said Trey Breckenridge, director of high performance computing at MSU.

“We’re adding a new system with NVIDIA Quantum-2 to stay at the leading edge in HPC,” he added.

Quantum Nets Cover the UK

Across the pond in the U.K., the Data Intensive supercomputer at the University of Leicester, known as the DIaL system, has upgraded to NVIDIA Quantum, the 200G version of InfiniBand.

“DIaL is specifically designed to tackle the complex, data-intensive questions which must be answered to evolve our understanding of the universe around us,” said Mark Wilkinson, professor of theoretical astrophysics at the University of Leicester and director of its HPC center.

“The intense requirements of these specialist workloads rely on the unparalleled bandwidth and latency that only InfiniBand can provide to make the research possible,” he said.

DIaL is one of four supercomputers in the U.K.’s DiRAC facility using InfiniBand, including the Tursa system at the University of Edinburgh.

InfiniBand Shines in Evaluation

In a technical evaluation, researchers found Tursa with NVIDIA GPU accelerators on a Quantum network delivered 5x the performance of their CPU-only Tesseract system using an alternative interconnect.

Application benchmarks show 16 nodes of Tursa have twice the performance of 512 nodes of Tesseract. Tursa delivers 10 teraflops/node using 90 percent of the network’s bandwidth at a significant improvement in performance per kilowatt over Tesseract.

It’s another example of why most of the world’s TOP500 systems are using NVIDIA technologies.

For more, watch our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.
