A GFN Thursday Deal: Get ‘Crysis Remastered’ Free With Any Six-Month GeForce NOW Membership

You’ve reached your weekly gaming checkpoint. Welcome to a positively packed GFN Thursday.

This week delivers a sweet deal for gamers ready to upgrade their PC gaming from the cloud. With any new, paid six-month Priority or GeForce NOW RTX 3080 subscription, members will receive Crysis Remastered for free for a limited time.

Gamers and music lovers alike can get hyped for an awesome entertainment experience playing Core this week and visiting the digital world of Oberhasli. There, they’ll enjoy the anticipated deadmau5 “Encore” concert and event.

And what kind of GFN Thursday would it be without new games? We’ve got eight new titles joining the GeForce NOW library this week.

GeForce NOW Can Run Crysis … And So Can You

Crysis Remastered with RTX ON on GeForce NOW
When your reflection looks this good, you can’t help but stop and admire it. We won’t judge.

But can it run Crysis? GeForce NOW sure can.

For a limited time, get a copy of Crysis Remastered free with select GeForce NOW memberships. Purchase a six-month Priority membership, or the new GeForce NOW RTX 3080 membership, and get a free redeemable code for Crysis Remastered on the Epic Games Store.

Current monthly Founders and Priority members are eligible by upgrading to a six-month membership. Founders, exclusively, can upgrade to a GeForce NOW RTX 3080 membership and receive 10 percent off the subscription price, with no risk to their current Founders benefits. They can revert to their original Founders plan and retain “Founders for Life” pricing, as long as they remain in consistent good standing on any paid membership plan.

This special bonus also applies to existing GeForce NOW RTX 3080 members and preorders, as a thank you for being among the first to upgrade to the next generation in cloud gaming. Current members on the GeForce NOW RTX 3080 plan will receive game codes in the days ahead, while members who have preordered but haven’t yet been activated will receive their game code when their GeForce NOW RTX 3080 service is activated. Please note, terms and conditions apply.

Stream Crytek’s classic first-person shooter, remastered with graphics optimized for a new generation of hardware and complete with stunning support for RTX ON and DLSS. GeForce NOW members can experience the first game in the Crysis series — or 1,000+ more games — across nearly all of their devices, turning even a Mac or a mobile device into the ultimate gaming rig.

The mission starts here.

Experience the deadmau5 Encore in Core

This GFN Thursday brings Core and deadmau5 to the cloud. From shooters, survival and action adventure to MMORPGs, platformers and party games, Core is a multiverse of exciting gaming entertainment with over 40,000 free-to-play, Unreal-powered games and worlds.

This week, members can visit the fully immersive digital world of Oberhasli — designed with the vision of the legendary producer, musician and DJ, deadmau5 — and enjoy an epic “Encore” concert and event. Catch the deadmau5 performance, with six showings running from Friday, Nov. 19, to Saturday, Nov. 20. The concert becomes available every hour, on the hour, the following week.

Tomorrow, come to the world of Oberhasli, designed by deadmau5, and experience the ‘Encore’ concert in Core.

The fun continues with three games inspired by deadmau5’s music — Hop ‘Til You Drop, Mau5 Hau5 and Ballon Royale — set throughout 19 dystopian worlds featured in the official When The Summer Dies music video. Party on with exclusive deadmau5 skins, emotes and mounts, and interact with other fans while streaming the exclusive, interactive deadmau5 performance celebrating the launch of Core on GeForce NOW with this week’s “Encore” concert.

A New Challenge Calls

Icarus Beta on GeForce NOW
Explore a savage alien wilderness in the aftermath of terraforming gone wrong — even on a low-powered laptop.

It wouldn’t be GFN Thursday without a new set of games coming to the cloud. Get ready to grind through the newest titles joining the GeForce NOW library this week:

  • Combat Mission Cold War (New release on Steam, Nov. 16)
  • The Last Stand: Aftermath (New release on Steam, Nov. 16)
  • Myth of Empires (New release on Steam, Nov. 18)
  • Icarus (Beta weekend on Steam, Nov. 19)
  • Assassin’s Creed: Syndicate Gold Edition (Ubisoft Connect)
  • Core (Epic Games Store)
  • Lost Words: Beyond the Page (Steam)
  • World of Tanks (Steam)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Update on ‘Bright Memory: Infinite’

Bright Memory: Infinite was added last week, but during onboarding it was discovered that enabling RTX in the game requires an upcoming operating system upgrade to GeForce NOW servers. We expect the update to be complete in December and will provide more information here when it happens.

GeForce NOW Coming to LG Smart TVs

We’re working with LG Electronics to add support for GeForce NOW to LG TVs, starting with a beta release of the app in the LG Content Store for select 2021 LG OLED, QNED MiniLED and NanoCell models. If you have one of the supported TVs, check it out and share feedback to help us improve the experience.

And finally, here’s our question for the week:

let’s settle this once and for all:

who’s the tougher enemy?

👽 aliens or zombies 🧟‍♂️

🌩 NVIDIA GeForce NOW (@NVIDIAGFN) November 17, 2021

Let us know on Twitter or in the comments below.


MLPerf HPC Benchmarks Show the Power of HPC+AI 

NVIDIA-powered systems won four of five tests in MLPerf HPC 1.0, an industry benchmark for AI performance on scientific applications in high performance computing.

They’re the latest results from MLPerf, a set of industry benchmarks for deep learning first released in May 2018. MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI.

Recent advances in molecular dynamics, astronomy and climate simulation all used HPC+AI to make scientific breakthroughs. It’s a trend driving the adoption of exascale AI for users in both science and industry.

What the Benchmarks Measure

MLPerf HPC 1.0 measured training of AI models in three typical workloads for HPC centers.

  • CosmoFlow estimates details of objects in images from telescopes.
  • DeepCAM tests detection of hurricanes and atmospheric rivers in climate data.
  • OpenCatalyst tracks how well systems predict forces among atoms in molecules.

Each test has two parts: strong scaling measures how fast a system trains a single model, while weak scaling measures maximum system throughput, that is, how many models a system can train in a given time.
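In code terms, the two metrics boil down to a time measurement and a rate measurement. The snippet below only illustrates the definitions with made-up numbers; it is not MLPerf’s official scoring code.

```python
# Illustration of the two MLPerf HPC metrics, not the official scoring code.

def strong_scaling(train_seconds: float) -> float:
    """Strong scaling: wall-clock time to train one model to the target quality."""
    return train_seconds

def weak_scaling(models_trained: int, window_seconds: float) -> float:
    """Weak scaling: throughput, i.e. models trained per unit time on the full system."""
    return models_trained / window_seconds

print(strong_scaling(8.5 * 60))      # e.g. one model trained in 8.5 minutes
print(weak_scaling(256, 3600.0))     # e.g. 256 models trained in an hour
```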

Compared to the best results in strong scaling from last year’s MLPerf 0.7 round, NVIDIA delivered 5x better results for CosmoFlow. In DeepCAM, we delivered nearly 7x more performance.

The Perlmutter Phase 1 system at Lawrence Berkeley National Lab led in strong scaling in the OpenCatalyst benchmark using 512 of its 6,144 NVIDIA A100 Tensor Core GPUs.

In the weak-scaling category, we led DeepCAM using 16 nodes per job and 256 simultaneous jobs. All our tests ran on NVIDIA Selene (pictured above), our in-house system and the world’s largest industrial supercomputer.

NVIDIA wins MLPerf HPC, Nov 2021
NVIDIA delivered leadership results in both the speed of training a model and per-chip efficiency.

The latest results demonstrate another dimension of the NVIDIA AI platform and its performance leadership. They mark the eighth straight time NVIDIA has delivered top scores in MLPerf benchmarks spanning AI training and inference in the data center, the cloud and at the network’s edge.

A Broad Ecosystem

Seven of the eight participants in this round submitted results using NVIDIA GPUs.

They include the Jülich Supercomputing Centre in Germany, the Swiss National Supercomputing Centre and, in the U.S., the Argonne and Lawrence Berkeley National Laboratories, the National Center for Supercomputing Applications and the Texas Advanced Computing Center.

“With the benchmark test, we have shown that our machine can unfold its potential in practice and contribute to keeping Europe on the ball when it comes to AI,” said Thomas Lippert, director of the Jülich Supercomputing Centre, in a blog.

The MLPerf benchmarks are backed by MLCommons, an industry group led by Alibaba, Google, Intel, Meta, NVIDIA and others.

How We Did It

The strong showing is the result of a mature NVIDIA AI platform that includes a full stack of software.

In this round, we tuned our code with tools available to everyone, such as NVIDIA DALI to accelerate data processing and CUDA Graphs to reduce small-batch latency for efficiently scaling up to 1,024 or more GPUs.
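To give a flavor of the CUDA Graphs piece, here is a minimal PyTorch capture-and-replay sketch. It shows the general pattern (capture one iteration, then replay it to avoid per-kernel launch overhead) and is not the actual MLPerf submission code, which is available in the MLPerf repository.

```python
import torch

# Minimal sketch of the CUDA Graphs capture/replay pattern in PyTorch (requires a
# CUDA GPU); the actual MLPerf HPC submission code lives in the MLPerf repository.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU()).cuda().eval()
static_input = torch.randn(16, 1024, device="cuda")   # small batch: launch overhead dominates

# Warm up on a side stream so one-time initialization isn't captured in the graph.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s), torch.no_grad():
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass, then replay it with near-zero kernel-launch cost.
g = torch.cuda.CUDAGraph()
with torch.no_grad(), torch.cuda.graph(g):
    static_output = model(static_input)

static_input.copy_(torch.randn(16, 1024, device="cuda"))  # refill the static buffer
g.replay()                                                 # re-runs the captured kernels
print(static_output.shape)
```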

We also applied NVIDIA SHARP, a key component within NVIDIA Magnum IO. It provides in-network computing to accelerate communications and offload data operations to the NVIDIA Quantum InfiniBand switch.

For a deeper dive into how we used these tools, see our developer blog.

All the software we used for our submissions is available from the MLPerf repository. We regularly add such code to the NGC catalog, our software hub for pretrained AI models, industry application frameworks, GPU applications and other software resources.


A Revolution in the Making: How AI and Science Can Mitigate Climate Change

A partial differential equation is “the most powerful tool humanity has ever created,” Cornell University mathematician Steven Strogatz wrote in a 2009 New York Times opinion piece.

This quote opened last week’s GTC talk AI4Science: The Convergence of AI and Scientific Computing, presented by Anima Anandkumar, director of machine learning research at NVIDIA and professor of computing at the California Institute of Technology.

Anandkumar explained that partial differential equations are the foundation for most scientific simulations. And she showcased how this historic tool is now being made all the more powerful with AI.

“The convergence of AI and scientific computing is a revolution in the making,” she said.

Using new neural operator-based frameworks to learn and solve partial differential equations, AI can help us model weather forecasts 100,000x faster — and carbon dioxide sequestration 60,000x faster — than traditional models.

Speeding Up the Calculations

Anandkumar and her team developed the Fourier Neural Operator (FNO), a framework that allows AI to learn and solve an entire family of partial differential equations, rather than a single instance.

It’s the first machine learning method to successfully model turbulent flows with zero-shot super-resolution, meaning FNOs can make high-resolution inferences without the high-resolution training data that standard neural networks would require.

FNO-based machine learning greatly reduces the costs of obtaining information for AI models, improves their accuracy and speeds up inference by three orders of magnitude compared with traditional methods.
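To make the idea concrete, here is a minimal, single-layer sketch of the spectral convolution at the heart of an FNO, written in PyTorch. The channel count, mode count and grid size are illustrative choices, not the published model’s configuration.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """One Fourier layer in the spirit of the FNO: transform to frequency space,
    apply a learned linear operator to the lowest `modes` frequencies, transform back.
    Sizes here are illustrative, not the published model's."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weights = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                       # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                # to frequency space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

layer = SpectralConv1d(channels=8, modes=16)
u = torch.randn(4, 8, 256)                      # a batch of 1D fields on a 256-point grid
print(layer(u).shape)                           # torch.Size([4, 8, 256])
```

Because the learned weights act only on the lowest Fourier modes, the same layer can be evaluated on a finer grid than it was trained on — for example, `layer(torch.randn(4, 8, 1024))` also works — which is the mechanism behind zero-shot super-resolution.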

Mitigating Climate Change

FNOs can be applied to make real-world impact in countless ways.

For one, they offer a 100,000x speedup over numerical methods and unprecedented fine-scale resolution for weather prediction models. By accurately simulating and predicting extreme weather events, these AI models make it possible to plan for and mitigate the effects of such disasters.

The FNO model, for example, was able to accurately predict the trajectory and magnitude of Hurricane Matthew from 2016.

In the video below, the red line represents the observed track of the hurricane. The white cones show the National Oceanic and Atmospheric Administration’s hurricane forecasts based on traditional models. The purple contours mark the FNO-based AI forecasts.

As shown, the FNO model follows the trajectory of the hurricane with improved accuracy compared with the traditional method — and the high-resolution simulation of this weather event took just a quarter of a second to process on NVIDIA GPUs.

In addition, Anandkumar’s talk covered how FNO-based AI can be used to model carbon dioxide sequestration — capturing carbon dioxide from the atmosphere and storing it underground, which scientists have said can help mitigate climate change.

Researchers can model and study how carbon dioxide would interact with materials underground using FNOs 60,000x faster than with traditional methods.

Anandkumar said the FNO model is also a significant step toward building a digital twin of Earth.

The new NVIDIA Modulus framework for training physics-informed machine learning models and NVIDIA Quantum-2 InfiniBand networking platform equip researchers and developers with the tools to combine the powers of AI, physics and supercomputing — to help solve the world’s toughest problems.

“I strongly believe this is the future of science,” Anandkumar said.

She’ll delve into these topics further at an SC21 plenary talk, taking place on Nov. 18 at 10:30 a.m. Central time.

Watch her full GTC session on demand, here.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote below.


World’s Fastest Supercomputers Changing Fast

Modern computing workloads — including scientific simulations, visualization, data analytics, and machine learning — are pushing supercomputing centers, cloud providers and enterprises to rethink their computing architecture.

Processors, networks or software optimizations alone can’t address the latest needs of researchers, engineers and data scientists. Instead, the data center is the new unit of computing, and organizations have to look at the full technology stack.

The latest rankings of the world’s most powerful systems show continued momentum for this full-stack approach in the latest generation of supercomputers.

NVIDIA technologies accelerate over 70 percent, or 355, of the systems on the TOP500 list released at the SC21 high performance computing conference this week, including over 90 percent of all new systems. That’s up from 342 systems, or 68 percent, of the machines on the TOP500 list released in June.

NVIDIA also continues to have a strong presence on the Green500 list of the most energy-efficient systems, powering 23 of the top 25 systems on the list, unchanged from June. On average, NVIDIA GPU-powered systems deliver 3.5x higher power efficiency than non-GPU systems on the list.

Highlighting the emergence of a new generation of cloud-native systems, Microsoft’s GPU-accelerated Azure supercomputer ranked 10th on the list, the first top 10 showing for a cloud-based system.

AI is revolutionizing scientific computing. The number of research papers leveraging both HPC and machine learning has skyrocketed in recent years, growing from roughly 600 ML + HPC papers submitted in 2018 to nearly 5,000 in 2020.

The ongoing convergence of HPC and AI workloads is also underscored by new benchmarks such as HPL-AI and MLPerf HPC.

HPL-AI is an emerging benchmark of converged HPC and AI workloads. It uses mixed-precision math — the basis of deep learning and many scientific and commercial jobs — while still delivering the full accuracy of double-precision math, the standard measuring stick for traditional HPC benchmarks.
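The core trick behind that claim is classical mixed-precision iterative refinement: do the expensive solve in fast, low precision, then recover double-precision accuracy with cheap high-precision residual corrections. Below is a toy NumPy sketch of the idea; HPL-AI itself works on an LU factorization at supercomputer scale, and the matrix here is just a well-conditioned stand-in.

```python
import numpy as np

# Toy sketch of mixed-precision iterative refinement: solve in FP32, correct in FP64.
# A real HPL-AI run factors the matrix once and reuses the factors for each correction.
rng = np.random.default_rng(0)
n = 512
A64 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b64 = rng.standard_normal(n)

A32 = A64.astype(np.float32)                         # "fast" low-precision copy
x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)

for _ in range(5):                                   # refinement steps in FP64
    r = b64 - A64 @ x                                # residual in full precision
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

print(np.linalg.norm(b64 - A64 @ x) / np.linalg.norm(b64))  # ~FP64-level residual
```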

And MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI, with the benchmark measuring performance on three key workloads for HPC centers: astrophysics (CosmoFlow), weather (DeepCAM) and molecular dynamics (OpenCatalyst).

NVIDIA addresses the full stack with GPU-accelerated processing, smart networking, GPU-optimized applications, and libraries that support the convergence of AI and HPC. This approach has supercharged workloads and enabled scientific breakthroughs.

Let’s look more closely at how NVIDIA is supercharging supercomputers.

Accelerated Computing

The combined power of the GPU’s parallel processing capabilities and over 2,500 GPU-optimized applications allows users to speed up their HPC jobs, in many cases from weeks to hours.

We’re constantly optimizing the CUDA-X libraries and the GPU-accelerated applications, so it’s not unusual for users to see an x-factor performance gain on the same GPU architecture.

As a result, the performance of the most widely used scientific applications — which we call the “golden suite” — has improved 16x over the past six years, with more advances on the way.

16x performance on top HPC, AI and ML apps from full-stack innovation.**

And to help users quickly take advantage of higher performance, we offer the latest versions of the AI and HPC software through containers from the NGC catalog. Users simply pull and run the application on their supercomputer, in the data center or the cloud.

Convergence of HPC and AI 

The infusion of AI in HPC helps researchers speed up their simulations while achieving the accuracy they’d get with the traditional simulation approach.

That’s why an increasing number of researchers are taking advantage of AI to speed up their discoveries.

That includes four of the finalists for this year’s Gordon Bell prize, the most prestigious award in supercomputing. Organizations are racing to build exascale AI computers to support this new model, which combines HPC and AI.

That strength is underscored by relatively new benchmarks, such as HPL-AI and MLPerf HPC, highlighting the ongoing convergence of HPC and AI workloads.

To fuel this trend, last week NVIDIA announced a broad range of advanced new libraries and software development kits for HPC.

Graphs — a key data structure in modern data science — can now be projected into deep-neural network frameworks with Deep Graph Library, or DGL, a new Python package.
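Here is a minimal sketch of what that looks like in practice, assuming the PyTorch backend for DGL; the graph and feature sizes are purely illustrative.

```python
import torch
import dgl
from dgl.nn import GraphConv

# Build a tiny graph and run one graph-convolution layer over its node features.
src = torch.tensor([0, 1, 2, 3])
dst = torch.tensor([1, 2, 3, 0])
g = dgl.add_self_loop(dgl.graph((src, dst)))   # 4-node ring, self-loops added

feats = torch.randn(g.num_nodes(), 8)          # one 8-dim feature vector per node
conv = GraphConv(in_feats=8, out_feats=4)
print(conv(g, feats).shape)                    # torch.Size([4, 4])
```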

NVIDIA Modulus builds and trains physics-informed machine learning models that can learn and obey the laws of physics.

And NVIDIA introduced three new libraries:

  • ReOpt – to increase operational efficiency for the $10 trillion logistics industry.
  • cuQuantum – to accelerate quantum computing research.
  • cuNumeric – to accelerate NumPy for scientists, data scientists, and machine learning and AI researchers in the Python community (see the sketch just after this list).
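The cuNumeric sketch below shows the drop-in idea: existing NumPy-style code runs on GPUs, and across nodes, after swapping a single import. Exact API coverage varies by release, so treat this as a sketch rather than a guarantee.

```python
# Sketch of the cuNumeric "drop-in" idea: swap the import and existing NumPy-style
# code runs on GPUs (and can scale across nodes) via the Legate runtime.
import cunumeric as np   # instead of: import numpy as np

a = np.ones((4096, 4096))
b = np.ones((4096, 4096)) * 2.0
c = a @ b                # unchanged NumPy-style code, now GPU-accelerated
print(c.sum())
```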

Weaving it all together is NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows.

Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, of the 5G edge, robots, self-driving cars and even avatars.

Using Omniverse, NVIDIA announced last week that it will build a supercomputer, called Earth-2, devoted to predicting climate change by creating a digital twin of the planet.

Cloud-Native Supercomputing

As supercomputers take on more workloads across data analytics, AI, simulation and visualization, CPUs are stretched to support a growing number of communication tasks needed to operate large and complex systems.

Data processing units alleviate this stress by offloading some of these processes.

As a fully integrated data-center-on-a-chip platform, NVIDIA BlueField DPUs can offload and manage data center infrastructure tasks instead of making the host processor do the work, enabling stronger security and more efficient orchestration of the supercomputer.

Combined with the NVIDIA Quantum InfiniBand platform, this architecture delivers optimal bare-metal performance while natively supporting multinode tenant isolation.

NVIDIA’s Quantum InfiniBand platform provides predictable, bare-metal performance isolation.

Thanks to a zero-trust approach, these new systems are also more secure.

BlueField DPUs isolate applications from infrastructure. NVIDIA DOCA 1.2 — the latest BlueField software platform — enables next-generation distributed firewalls and wider use of line-rate data encryption. And NVIDIA Morpheus, assuming an interloper is already inside the data center, uses deep learning-powered data science to detect intruder activities in real time.

And all of the trends outlined above will be accelerated by new networking technology.

NVIDIA Quantum-2, also announced last week, is a 400Gbps InfiniBand platform consisting of the Quantum-2 switch, the ConnectX-7 NIC and the BlueField-3 DPU, along with new software for the new networking architecture.

NVIDIA Quantum-2 offers the benefits of bare-metal high performance and secure multi-tenancy, allowing the next generation of supercomputers to be secure, cloud-native and better utilized.

 

** Benchmark applications: Amber, Chroma, GROMACS, MILC, NAMD, PyTorch, Quantum Espresso; Random Forest FP32, TensorFlow, VASP | GPU node: dual-socket CPUs with 4x P100, V100, or A100 GPUs.


Siemens Energy Taps NVIDIA to Develop Industrial Digital Twin of Power Plant in Omniverse

Siemens Energy, a leading supplier of power plant technology in the trillion-dollar worldwide energy market, is relying on the NVIDIA Omniverse platform to create digital twins to support predictive maintenance of power plants.

In doing so, Siemens Energy joins a wave of companies across various industries that are using digital twins to enhance their operations. Among them, BMW Group, which has 31 factories around the world, is building multiple industrial digital twins of its operations; and Ericsson is adopting Omniverse to build digital twins of urban areas to help determine how to construct 5G networks.

Indeed, the worldwide market for digital twin platforms is forecast to reach $86 billion by 2028, according to Grand View Research.

“NVIDIA’s open platforms along with physics-infused neural networks bring great value to Siemens Energy,” said Stefan Lichtenberger, technical portfolio manager at Siemens Energy.

Siemens Energy builds and services combined cycle power plants, which include large gas turbines and steam turbines. Heat recovery steam generators (HRSGs) use the exhaust heat from the gas turbine to create steam used to drive the steam turbine. This improves the thermodynamic efficiency of the power plant to more than 60 percent, according to Siemens Energy.

At some sections of an HRSG, a steam and water mixture can cause corrosion that might impact the lifetime of the HRSG’s parts. Downtime for maintenance and repairs leads to lost revenue opportunities for utility companies.

Siemens Energy estimates that a 10 percent reduction in the industry’s average planned downtime of 5.5 days for HRSGs — required, among other things, to check pipe wall thickness loss due to corrosion — would save $1.7 billion a year.

Simulations for Industrial Applications

Siemens Energy is enlisting NVIDIA technology to develop a new workflow to reduce the frequency of planned shutdowns while maintaining safety. Real-time data — water inlet temperature, pressure, pH, gas turbine power and temperature — is preprocessed to compute pressure, temperature and velocity of both water and steam. The pressure, temperature and velocity are fed into a physics-ML model created with the NVIDIA Modulus framework to simulate precisely how steam and water flow through the pipes in real time.

The flow conditions in the pipes are then visualized with NVIDIA Omniverse, a virtual world simulation and collaboration platform for 3D workflows. Omniverse scales across multi-GPUs to help Siemens Energy understand and predict the aggregated effects of corrosion in real time.
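To give a flavor of what a physics-informed model looks like, here is a generic, hypothetical PyTorch sketch — it is not Siemens Energy’s actual Modulus model, and the cooling equation, constants and network sizes are invented for illustration. A small network predicts temperature along a pipe, and the training loss combines an inlet boundary condition with the residual of a toy cooling equation enforced at interior points.

```python
import torch

# Hypothetical PINN-style sketch (not the Siemens Energy / Modulus model): a network
# learns T(x) along a pipe subject to dT/dx + k*(T - T_wall) = 0 and T(0) = T_inlet.
k, T_wall, T_inlet = 0.8, 300.0, 550.0
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)          # points along the pipe [0, 1]
    T = net(x)
    dTdx = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    pde_residual = dTdx + k * (T - T_wall)               # physics term
    inlet_err = net(torch.zeros(1, 1)) - T_inlet         # boundary/data term
    loss = (pde_residual ** 2).mean() + (inlet_err ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(net(torch.tensor([[0.5]]))))                 # predicted mid-pipe temperature
```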

Accelerating Digital Twin Development

Using NVIDIA software frameworks, running on NVIDIA A100 Tensor Core GPUs, Siemens Energy is simulating the corrosive effects of heat, water and other conditions on metal over time to fine-tune maintenance needs. Predicting maintenance more accurately with machine learning models can help reduce the frequency of maintenance checks without running the risk of failure. The scaled Modulus PINN model was run on AWS Elastic Kubernetes Service (EKS) backed by P4d EC2 instances with A100 GPUs.

Building a computational fluid dynamics model to estimate corrosion within the pipes of a single HRSG takes as long as eight weeks, and this process is required across a portfolio of more than 600 units. A faster workflow using NVIDIA technologies can enable Siemens Energy to accelerate corrosion estimation from weeks to hours.

NVIDIA Omniverse provides a highly scalable platform that lets Siemens Energy replicate and deploy digital twins worldwide, accessing potentially thousands of NVIDIA GPUs as needed.

“NVIDIA’s work as the pioneer in accelerated computing, AI software platforms and simulation offers the scale and flexibility needed for industrial digital twins at Siemens Energy,” said Lichtenberger.

Learn more about Omniverse for virtual simulations and digital twins.


Gordon Bell Finalists Fight COVID, Advance Science With NVIDIA Technologies

Two simulations of a billion atoms, two fresh insights into how the SARS-CoV-2 virus works, and a new AI model to speed drug discovery.

Those are results from finalists for the Gordon Bell awards, considered the Nobel Prize of high performance computing. They used AI, accelerated computing or both to advance science with NVIDIA’s technologies.

A finalist for the special prize for COVID-19 research used AI to link multiple simulations, showing at a new level of clarity how the virus replicates inside a host.

The research — led by Arvind Ramanathan, a computational biologist at the Argonne National Laboratory — provides a way to improve the resolution of traditional tools used to explore protein structures. That could provide fresh insights into ways to arrest the spread of a virus.

The team, drawn from a dozen organizations in the U.S. and the U.K., designed a workflow that ran across systems including Perlmutter, an NVIDIA A100-powered system, built by Hewlett Packard Enterprise, and Argonne’s NVIDIA DGX A100 systems.

“The capability to perform multisite data analysis and simulations for integrative biology will be invaluable for making use of large experimental data that are difficult to transfer,” the paper said.

As part of its work, the team developed a technique to speed molecular dynamics research using the popular NAMD program on GPUs. They also leveraged NVIDIA NVLink to speed data transfers “far beyond what is currently possible with a conventional HPC network interconnect, or … PCIe transfers.”

A Billion Atoms in High Fidelity

Ivan Oleynik, a professor of physics at the University of South Florida, led a team named a finalist for the standard Gordon Bell award for their work producing the first highly accurate simulation of a billion atoms. It broke the record set by last year’s Gordon Bell winner by 23x.

“It’s a joy to uncover phenomena never seen before, it’s a really big achievement we’re proud of,” said Oleynik.

The simulation of carbon atoms under extreme temperature and pressure could open doors to new energy sources and help describe the makeup of distant planets. It’s especially stunning because the simulation has quantum-level accuracy, faithfully reflecting the forces among the atoms.

“It’s accuracy we could only achieve by applying machine learning techniques on a powerful GPU supercomputer — AI is creating a revolution in how science is done,” said Oleynik.

The team exercised 4,608 IBM Power AC922 servers and 27,900 NVIDIA GPUs on the U.S. Department of Energy’s Summit supercomputer, built by IBM, one of the world’s most powerful supercomputers. It demonstrated their code could scale with almost 100-percent efficiency to simulations of 20 billion atoms or more.

That code is available to any researcher who wants to push the boundaries of materials science.

Inside a Deadly Droplet

In another billion-atom simulation, a second finalist for the COVID-19 prize showed the Delta variant in an airborne droplet (below). It reveals biological forces that spread COVID and other diseases, providing a first atomic-level look at aerosols.

The work has “far reaching … implications for viral binding in the deep lung, and for the study of other airborne pathogens,” according to the paper from a team led by last year’s winner of the special prize, researcher Rommie Amaro from the University of California San Diego.

Gordon Bell finalist COVID droplet simulation
The team led by Amaro simulated the Delta SARS-CoV-2 virus in a respiratory droplet with more than a billion atoms.

“We demonstrate how AI coupled to HPC at multiple levels can result in significantly improved effective performance, enabling new ways to understand and interrogate complex biological systems,” Amaro said.

Researchers used NVIDIA GPUs on Summit, the Longhorn supercomputer built by Dell Technologies for the Texas Advanced Computing Center and commercial systems in Oracle Cloud Infrastructure (OCI).

“HPC and cloud resources can be used to significantly drive down time-to-solution for major scientific efforts as well as connect researchers and greatly enable complex collaborative interactions,” the team concluded.

The Language of Drug Discovery

Finalists for the COVID prize at Oak Ridge National Laboratory (ORNL) applied natural language processing (NLP) to the problem of screening chemical compounds for new drugs.

They used a dataset containing 9.6 billion molecules — the largest dataset applied to this task to date — to train in two hours a BERT NLP model that can speed discovery of new drugs. Previous best efforts took four days to train a model using a dataset with 1.1 billion molecules.

The work exercised more than 24,000 NVIDIA GPUs on the Summit supercomputer to deliver a whopping 603 petaflops. Now that the training is done, the model can run on a single GPU to help researchers find chemical compounds that could inhibit COVID and other diseases.
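For a sense of the general approach, here is a small, hypothetical sketch using the Hugging Face transformers library: a BERT-style masked language model applied to a SMILES string. The configuration, character-level encoding and molecule are illustrative only; the ORNL team’s actual model, tokenizer and hyperparameters are not reproduced here.

```python
import torch
from transformers import BertConfig, BertForMaskedLM

# Hypothetical sketch of a BERT-style masked language model over SMILES strings,
# small enough to run on a single GPU or CPU (not the ORNL team's actual model).
config = BertConfig(vocab_size=512, hidden_size=256, num_hidden_layers=4,
                    num_attention_heads=4, intermediate_size=1024)
model = BertForMaskedLM(config).eval()

smiles = "CC(=O)Oc1ccccc1C(=O)O"                           # aspirin, as a SMILES string
ids = torch.tensor([[ord(c) for c in smiles]])             # toy character-level encoding

with torch.no_grad():
    logits = model(input_ids=ids).logits                   # per-position token scores
print(logits.shape)                                        # (1, len(smiles), vocab_size)
```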

“We have collaborators here who want to apply the model to cancer signaling pathways,” said Jens Glaser, a computational scientist at ORNL.

“We’re just scratching the surface of training data sizes — we hope to use a trillion molecules soon,” said Andrew Blanchard, a research scientist who led the team.

Relying on a Full-Stack Solution

NVIDIA software libraries for AI and accelerated computing helped the team complete its work in what one observer called a surprisingly short time.

“We didn’t need to fully optimize our work for the GPU’s tensor cores because you don’t need specialized code, you can just use the standard stack,” said Glaser.

He summed up what many finalists felt: “Having a chance to be part of meaningful research with potential impact on people’s lives is something that’s very satisfying for a scientist.”

Tune in to our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.


Universities Expand Research Horizons with NVIDIA Systems, Networks

Just as the Dallas/Fort Worth airport became a hub for travelers crisscrossing America, the north Texas region will be a gateway to AI if folks at Southern Methodist University have their way.

SMU is installing an NVIDIA DGX SuperPOD, an accelerated supercomputer it expects will power projects in machine learning for its sprawling metro community with more than 12,000 students and 2,400 faculty and staff.

It’s one of three universities in the south-central U.S. announcing plans to use NVIDIA technologies to shift research into high gear.

Texas A&M and Mississippi State University are adopting NVIDIA Quantum-2, our 400 Gbit/second InfiniBand networking platform, as the backbone for their latest high-performance computers. In addition, a supercomputer in the U.K. has upgraded its InfiniBand network.

Texas Lassos a SuperPOD

“We’re the second university in America to get a DGX SuperPOD and that will put this community ahead in AI capabilities to fuel our degree programs and corporate partnerships,” said Michael Hites, chief information officer of SMU, referring to a system installed earlier this year at the University of Florida.

A September report called the Dallas area “hobbled” by a lack of major AI research. Ironically, the story hit the local newspaper just as SMU was buttoning up its plans for its DGX SuperPOD.

Previewing its initiative, an SMU report in March said AI is “at the heart of digital transformation … and no sector of society will remain untouched” by the technology. “The potential for dramatic improvements in K-12 education and workforce development is enormous and will contribute to the sustained economic growth of the region,” it added.

SMU Ignite, a $1.5 billion fundraiser kicked off in September, will fuel the AI initiative, helping propel Southern Methodist into the top ranks of university research nationally. The university is hiring a chief innovation officer to help guide the effort.

Crafting a Computational Crucible

It’s all about the people, says Jason Warner, who manages the IT teams that support SMU’s researchers. So, he hired a seminal group of data science specialists to staff a new center at SMU’s Ford Hall for Research and Innovation, a hub Warner calls SMU’s “computational crucible.”

Eric Godat leads that team. He earned his Ph.D. in particle physics at SMU modeling nuclear structure using data from the Large Hadron Collider.

Now he’s helping fire up SMU’s students about opportunities on the DGX SuperPOD. As a first step, he asked two SMU students to build a miniature model of a DGX SuperPOD using NVIDIA Jetson modules.

“We wanted to give people — especially those in nontechnical fields who haven’t done AI — a sense of what’s coming,” Godat said.

SMU's Jetson SuperPOD
SMU undergrad Connor Ozenne helped build a miniature DGX SuperPOD that was featured in SMU’s annual report. It uses 16 Jetson modules in a cluster students will benchmark as if it were a TOP500 system.

The full-sized supercomputer, made up of 20 NVIDIA DGX A100 systems on an NVIDIA Quantum InfiniBand network, could be up and running as early as January thanks to its Lego-like, modular architecture. It will deliver a whopping 100 petaflops of computing power, enough to give it a respectable slot on the TOP500 list of the world’s fastest supercomputers.

Aggies Tap NVIDIA Quantum-2 InfiniBand for ACES

About 200 miles south, the high performance computing center at Texas A&M will be among the first to plug into the NVIDIA Quantum-2 InfiniBand platform. Its ACES supercomputer, built by Dell Technologies, will use the 400G InfiniBand network to connect researchers to a mix of five accelerators from four vendors.

NVIDIA Quantum-2 ensures “that a single job on ACES can scale up using all the computing cores and accelerators. Besides the obvious 2x jump in throughput from NVIDIA Quantum-1 InfiniBand at 200G, it will provide improved total cost of ownership, beefed up in-network computing features and increased scaling,” said Honggao Liu, ACES’s principal investigator and project director.

Texas A&M already gives researchers access to accelerated computing in four systems that include more than 600 NVIDIA A100 Tensor Core and prior-generation GPUs. Two of the four systems use an earlier version of NVIDIA’s InfiniBand technology.

MSU Rides a 400G Train

Mississippi State University will also tap the NVIDIA Quantum-2 InfiniBand platform. It’s the network of choice for a new system that supplements Orion, the largest of four clusters MSU manages, all using earlier versions of InfiniBand.

Both Orion and the new system are funded by the U.S. National Oceanic and Atmospheric Administration (NOAA) and built by Dell. They conduct work for NOAA’s missions as well as research for MSU.

Orion was listed as the fourth largest academic supercomputer in America when it debuted on the TOP500 list in June 2019.

“We’re using InfiniBand in four generations of supercomputers here at MSU so we know it’s both powerful and mature to run our big jobs reliably,” said Trey Breckenridge, director of high performance computing at MSU.

“We’re adding a new system with NVIDIA Quantum-2 to stay at the leading edge in HPC,” he added.

Quantum Nets Cover the UK

Across the pond in the U.K., the Data Intensive supercomputer at the University of Leicester, known as the DIaL system, has upgraded to NVIDIA Quantum, the 200G version of InfiniBand.

“DIaL is specifically designed to tackle the complex, data-intensive questions which must be answered to evolve our understanding of the universe around us,” said Mark Wilkinson, professor of theoretical astrophysics at the University of Leicester and director of its HPC center.

“The intense requirements of these specialist workloads rely on the unparalleled bandwidth and latency that only InfiniBand can provide to make the research possible,” he said.

DIaL is one of four supercomputers in the U.K.’s DiRAC facility using InfiniBand, including the Tursa system at the University of Edinburgh.

InfiniBand Shines in Evaluation

In a technical evaluation, researchers found Tursa with NVIDIA GPU accelerators on a Quantum network delivered 5x the performance of their CPU-only Tesseract system using an alternative interconnect.

Application benchmarks show 16 nodes of Tursa have twice the performance of 512 nodes of Tesseract. Tursa delivers 10 teraflops per node while using 90 percent of the network’s bandwidth, with a significant improvement in performance per kilowatt over Tesseract.

It’s another example of why most of the world’s TOP500 systems are using NVIDIA technologies.

For more, watch our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.


NVIDIA GTC Sees Spike in Developers From Africa

The appetite for AI and data science is increasing, and nowhere is that more prevalent than in emerging markets.

Registrations for this week’s GTC from African nations tripled compared with the spring edition of the event.

Indeed, Nigeria had the third most registered attendees for countries in the EMEA region, ahead of France, Italy and Spain. Five other African nations were among the region’s top 15 for registrants: Egypt (No. 6), Tunisia (No. 7), Ghana (No. 9), South Africa (No. 11) and Kenya (No. 12).

The numbers demonstrate the growing interest among Africa-based developers to access content, information and expertise centered around AI, data science, robotics and high performance computing. Developers are using these technologies as a platform to create innovative applications that address local challenges, such as healthcare and climate change.

Global Conference, Localized Content

Among the speakers at GTC were several members of NVIDIA Emerging Chapters, a new program that enables local communities in emerging economies to build and scale their AI, data science and graphics projects. Such highly localized content empowers developers from these areas and raises awareness of their unique challenges and needs.

For example, tinyML Kenya, a community of machine learning researchers and practitioners, spoke on the impact of AI on healthcare, education, conservation and climate change as a force for good in emerging markets. Zindi, Africa’s first data science competition platform, participated in a session about bridging the AI education gap among developers, IT professionals and students on the continent.

Multiple African organizations and universities also spoke at GTC about how developers in the region and emerging markets are using AI to build innovations that address local challenges. Among them were Kenya’s Adanian Labs, Cadi Ayyad University of Morocco, Data Science Africa, Python Ghana, and Nairobi Women in Machine Learning & Data Science.

Several Africa-based members of NVIDIA Inception, a free program designed to empower cutting-edge startups, spoke about the AI revolution underway in the continent and other emerging areas. Cyst.ai, minoHealth, Fastagger and ARMA were among the 70+ Inception startups who presented at the conference.

AI was not the only innovation topic for local developers. The top African gaming and animation companies Usiku Games, Leti Arts, NETINFO 3D and HeroSmashers TV also joined the party to discuss how the continent’s burgeoning gaming industry continues to thrive and the tools game developers need to be successful in an area of the world where access to compute resources is often limited.

Engaging Developers Everywhere

While AI developers and startup founders come from all over the world, developers in emerging areas face unique circumstances and opportunities. This means global representation and localized access become even more important to bolster developer ecosystems in emerging markets.

Through NVIDIA Emerging Chapters, grassroots organizations and communities can provide developers access to the NVIDIA Developer Program and course credits for the NVIDIA Deep Learning Institute, helping bridge new paths to AI development in the region.

Learn more about AI in emerging markets today.

Watch NVIDIA CEO Jensen Huang’s GTC keynote address:


NVIDIA to Build Earth-2 Supercomputer to See Our Future

The earth is warming. The past seven years are on track to be the seven warmest on record. The emissions of greenhouse gases from human activities are responsible for approximately 1.1°C of average warming since the period 1850-1900.

What we’re experiencing is very different from the global average. We experience extreme weather — historic droughts, unprecedented heatwaves, intense hurricanes, violent storms and catastrophic floods. Climate disasters are the new norm.

We need to confront climate change now. Yet, we won’t feel the impact of our efforts for decades. It’s hard to mobilize action for something so far in the future. But we must know our future today — see it and feel it — so we can act with urgency.

To make our future a reality today, simulation is the answer.

To develop the best strategies for mitigation and adaptation, we need climate models that can predict the climate in different regions of the globe over decades.

Unlike predicting the weather, which primarily models atmospheric physics, climate models are multidecade simulations that model the physics, chemistry and biology of the atmosphere, waters, ice, land and human activities.

Climate simulations are configured today at 10- to 100-kilometer resolutions.

But greater resolution is needed to model changes in the global water cycle — water movement from the ocean, sea ice, land surface and groundwater through the atmosphere and clouds. Changes in this system lead to intensifying storms and droughts.

Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what’s currently available. It would take decades to achieve that through the ordinary course of computing advances, which accelerate 10x every five years.

For the first time, we have the technology to do ultra-high-resolution climate modeling, to jump to lightspeed and predict changes in regional extreme weather decades out.

We can achieve million-x speedups by combining three technologies: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers, along with vast quantities of observed and model data to learn from.

And with super-resolution techniques, we may have within our grasp the billion-x leap needed to do ultra-high-resolution climate modeling. Countries, cities and towns can get early warnings to adapt and make infrastructures more resilient. And with more accurate predictions, people and nations will act with more urgency.

So, we will dedicate ourselves and our significant resources to direct NVIDIA’s scale and expertise in computational sciences, to join with the world’s climate science community.

NVIDIA this week revealed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Named Earth-2, or E-2, the system would create a digital twin of Earth in Omniverse.

The system would be the climate change counterpart to Cambridge-1, the world’s most powerful AI supercomputer for healthcare research. We unveiled Cambridge-1 earlier this year in the U.K. and it’s being used by a number of leading healthcare companies.

All the technologies we’ve invented up to this moment are needed to make Earth-2 possible. I can’t imagine a greater or more important use.


NVIDIA Omniverse Enterprise Delivers the Future of 3D Design and Real-Time Collaboration

For millions of professionals around the world, 3D workflows are essential.

Everything they build, from cars to products to buildings, must first be designed or simulated in a virtual world. At the same time, more organizations are tackling complex designs while adjusting to a hybrid work environment.

As a result, design teams need a solution that helps them improve remote collaboration while managing 3D production pipelines. And NVIDIA Omniverse is the answer.

NVIDIA Omniverse Enterprise, now available, helps professionals across industries transform complex 3D design workflows. The groundbreaking platform lets global teams working across multiple software suites collaborate in real time in a shared virtual space.

Designed for the Present, Built for the Future

With Omniverse Enterprise, professionals gain new capabilities to boost traditional visualization workflows. It’s a newly launched subscription that brings fully supported software to 3D organizations of any scale.

The foundation of Omniverse is Pixar’s Universal Scene Description (USD), an open-source 3D scene description and file format that enables users to enhance their design process with real-time interoperability across applications. Additionally, the platform is built on NVIDIA RTX technology, so creators can render faster, do multiple iterations at no opportunity cost, and quickly achieve their final designs with stunning, photorealistic detail.
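As a small example of what working with USD looks like, here is a sketch using the open-source `pxr` Python bindings to author a tiny stage. The file name and prim paths are arbitrary; any USD-aware application, including Omniverse apps, could open the result.

```python
from pxr import Usd, UsdGeom

# Author a minimal USD stage: one transform with a sphere underneath it.
stage = Usd.Stage.CreateNew("hello_omniverse.usda")
UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)        # authored attribute, stored in the layer
stage.GetRootLayer().Save()            # other USD apps can now open and edit this file
```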

Ericsson, a leading telecommunications company, is using Omniverse Enterprise to create a digital twin of a 5G radio network to simulate and visualize signal propagation and performance. Within Omniverse, Ericsson has built a true-to-reality city-scale simulation environment, bringing in scenes, models and datasets from Esri CityEngine.

A New Experience for 3D Design

Omniverse Enterprise is available worldwide through global computer makers BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro. Many companies have already experienced the advanced capabilities of the platform.

Epigraph, a leading provider for companies such as Black & Decker, Yamaha and Wayfair, creates physically accurate 3D assets and product experiences for e-commerce. BOXX Technologies helped Epigraph achieve faster rendering with Omniverse Enterprise and NVIDIA RTX A6000 graphics. The advanced RTX Renderer in Omniverse enabled Epigraph to render images at final-frame quality faster, while significantly reducing the amount of computational resources needed.

Media.Monks is exploring ways to enhance and extend their workflows in a virtual world with Omniverse Enterprise, together with HP. The combination of remote computing and collocated workstations enables the Media.Monks design, creative and solutions teams to accelerate their clients’ digital transformation toward a more decentralized future. In collaboration with NVIDIA and HP, Media.Monks is exploring new approaches and the convergence of collaboration, real-time graphics, and live broadcast for a new era of brand virtualization.

Dell Technologies is presenting at GTC to show how Omniverse is advancing the hybrid workforce with Dell Precision workstations, Dell EMC PowerEdge servers and Dell Technologies Validated Designs. The interactive panel discussion will dive into why users need Omniverse today, and how Dell is helping more professionals adopt this solution, from the desktop to the data center.

And Lenovo is showcasing how advanced technologies like Omniverse are making remote collaboration seamless. Whether it’s connecting to a powerful mobile workstation on the go, a physical workstation back in the office, or a virtual workstation in the data center, Lenovo, TGX and NVIDIA are providing remote workers with the same experience they get at the office.

These systems manufacturers have also enabled other Omniverse Enterprise customers such as Kohn Pedersen Fox, Woods Bagot and WPP to improve their efficiency and productivity with real-time collaboration.

Experience Virtual Worlds With NVIDIA Omniverse

NVIDIA Omniverse Enterprise is now generally available by subscription from BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro.

The platform is optimized and certified to run on NVIDIA RTX professional mobile workstations and NVIDIA-Certified Systems, including desktops and servers on the NVIDIA EGX platform.

With Omniverse Enterprise, creative and design teams can connect their Autodesk 3ds Max, Maya and Revit, Epic Games’ Unreal Engine, McNeel & Associates Rhino, Grasshopper and Trimble SketchUp workflows through live-edit collaboration. Learn more about NVIDIA Omniverse Enterprise and our 30-day evaluation program. For individual artists, there’s also a free beta version of the platform available for download.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address below:
