COVID-19 Spurs Scientific Revolution in Drug Discovery with AI

Research across global academic and commercial labs to create a more efficient drug discovery process won recognition today with a special Gordon Bell Prize for work fighting COVID-19.

A team of 27 researchers led by Rommie Amaro at the University of California, San Diego (UCSD) won the award for combining high performance computing (HPC) and AI to deliver the clearest view to date of the coronavirus.

Their work began in late March when Amaro lit up Twitter with a picture of part of a simulated SARS-CoV-2 virus that looked like an upside-down Christmas tree.

Seeing it, one remote researcher noticed how a protein seemed to reach like a crooked finger from behind a protective shield to touch a healthy human cell.

“I said, ‘holy crap, that’s crazy’… only through sharing a simulation like this with the community could you see for the first time how the virus can only strike when it’s in an open position,” said Amaro, who leads a team of biochemists and computer experts at UCSD.

Amaro shared her early results on Twitter.

The image in the tweet was taken by Amaro’s lab using what some call a computational microscope, a digital tool that links the power of HPC simulations with AI to see details beyond the capabilities of conventional instruments.

It’s one example of work around the world using AI and data analytics, accelerated by NVIDIA Clara Discovery, to slash the $2 billion cost and 10-year timeline it typically takes to bring a new drug to market.

A Virtual Microscope Enhanced with AI

In early October, Amaro’s team completed a series of more ambitious HPC+AI simulations. They showed for the first time fine details of how the spike protein moved, opened and contacted a healthy cell.

One simulation (below) packed a whopping 305 million atoms, more than twice the size of any prior simulation in molecular dynamics. It required AI and all 27,648 NVIDIA GPUs on the Summit supercomputer at Oak Ridge National Laboratory.

More than 4,000 researchers worldwide have downloaded the results, which one researcher called “critical for vaccine design” for COVID and future pathogens.

Today, the work won a special Gordon Bell Prize for COVID-19 research, considered the equivalent of a Nobel Prize in the supercomputing community.

Two other teams also used NVIDIA technologies in work selected as finalists in the COVID-19 competition created by the ACM, a professional group representing more than 100,000 computing experts worldwide.

And the traditional Gordon Bell Prize went to a team from Beijing, Berkeley and Princeton that set a new milestone in molecular dynamics, also using a combination of HPC+AI on Summit.

An AI Funnel Catches Promising Drugs

Seeing how the infection process works is one of a string of pearls that scientists around the world are gathering into a new AI-assisted drug discovery process.

Another is screening the right compounds to arrest a virus from a vast field of 10^68 candidates. In a paper from part of the team behind Amaro’s work, researchers described a new AI workflow that, in less than five months, filtered 4.2 billion compounds down to the 40 most promising ones, which are now in advanced testing.
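The funnel idea behind such workflows is simple: a cheap learned surrogate scores an enormous library so only the best-ranked compounds advance to expensive docking and simulation. Below is a minimal, hypothetical sketch of that pattern — random data and a stand-in linear model, not the Argonne team’s actual code:

```python
import numpy as np

# Hypothetical sketch of an AI screening funnel: a cheap surrogate model
# scores a huge compound library, and only the top-ranked hits advance to
# costly docking and molecular dynamics. Data and model are stand-ins.
rng = np.random.default_rng(42)

def surrogate_score(fingerprints: np.ndarray) -> np.ndarray:
    """Stand-in for a trained model that predicts binding affinity."""
    weights = rng.normal(size=fingerprints.shape[1])
    return fingerprints @ weights

# A toy "library" of 100,000 compounds, each a 128-bit fingerprint.
library = rng.integers(0, 2, size=(100_000, 128)).astype(np.float32)

scores = surrogate_score(library)
top_k = 40  # the workflow above kept the 40 most promising compounds
shortlist = np.argsort(scores)[-top_k:][::-1]
print(f"Forwarding {top_k} of {len(library):,} compounds to docking")
```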

“We were so happy to get these results because people are dying and we need to address that with a new baseline that shows what you can get with AI,” said Arvind Ramanathan, a computational biologist at Argonne National Laboratory.

Ramanathan’s team was part of an international collaboration among eight universities and supercomputer centers, each contributing unique tools to process nearly 60 terabytes of data from 21 open datasets. It fueled a set of interlocking simulations and AI predictions that ran across 160 NVIDIA A100 Tensor Core GPUs on Argonne’s Theta system, with massive AI inference runs using NVIDIA TensorRT on the far larger pool of GPUs on Summit.

Docking Compounds, Proteins on a Supercomputer

Earlier this year, Ada Sedova put a pearl on the string for protein docking (described in the video below) when she announced plans to test a billion drug compounds against two coronavirus spike proteins in less than 24 hours using the GPUs on Summit. Her team ultimately cut to just 21 hours a job that used to take 51 days, a 58x speedup.
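The quoted speedup follows directly from those two numbers:

```python
# Back-of-the-envelope check of the speedup quoted above.
baseline_hours = 51 * 24  # the old pipeline: 51 days
summit_hours = 21         # the GPU run on Summit
print(f"{baseline_hours / summit_hours:.0f}x speedup")  # -> 58x
```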

In a related effort, colleagues at Oak Ridge used NVIDIA RAPIDS and BlazingSQL to deliver an order-of-magnitude speedup in data analytics on results like those Sedova produced.
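As a flavor of what that looks like in practice, here’s a minimal RAPIDS cuDF sketch; the file name and column layout are illustrative assumptions, not Oak Ridge’s actual schema:

```python
import cudf  # RAPIDS GPU DataFrame library

# Load docking results straight into GPU memory. The CSV name and columns
# (compound_id, protein, score) are hypothetical stand-ins.
df = cudf.read_csv("docking_scores.csv")

# Lower docking scores usually indicate tighter predicted binding.
best_per_protein = df.groupby("protein").agg({"score": "min"}).reset_index()
shortlist = df.sort_values("score").head(100)  # candidates for follow-up

print(best_per_protein.head())
print(shortlist.head())
```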

Among the other Gordon Bell finalists, Lawrence Livermore researchers used GPUs on the Sierra supercomputer to slash the training time for an AI model used to speed drug discovery from a day to just 23 minutes.

From the Lab to the Clinic

The Gordon Bell finalists are among more than 90 research efforts in a supercomputing collaboration using 50,000 GPU cores to fight the coronavirus.

They make up one front in a global war on COVID that also includes companies such as Oxford Nanopore Technologies, a genomics specialist using NVIDIA’s CUDA software to accelerate its work.

Oxford Nanopore won approval from European regulators last month for a novel system the size of a desktop printer that can be used with minimal training to perform thousands of COVID tests in a single day. Scientists worldwide have used its handheld sequencing devices to understand the transmission of the virus.

Relay Therapeutics uses NVIDIA GPUs and software to simulate with machine learning how proteins move, opening up new directions in the drug discovery process. In September, it started its first human trial of a molecule inhibitor to treat cancer.

Startup Structura uses CUDA on NVIDIA GPUs to analyze initial images of pathogens to quickly determine their 3D atomic structure, another key step in drug discovery. It’s a member of the NVIDIA Inception program, which gives startups in AI access to the latest GPU-accelerated technologies and market partners.

From Clara Discovery to Cambridge-1

NVIDIA Clara Discovery delivers a framework with AI models, GPU-optimized code and applications to accelerate every stage in the drug discovery pipeline. It provides speedups of 6-30x across jobs in genomics, protein structure prediction, virtual screening, docking, molecular simulation, imaging and natural-language processing that are all part of the drug discovery process.

It’s NVIDIA’s latest contribution to fighting SARS-CoV-2 and future pathogens.

NVIDIA Clara Discovery speeds each step of the drug discovery process using AI and data analytics.

Within hours of the shelter-at-home order in the U.S., NVIDIA gave researchers free access to a test drive of Parabricks, our genomic sequencing software. Since then, we’ve provided as part of NVIDIA Clara open access to AI models co-developed with the U.S. National Institutes of Health.

We’ve also committed to build, with partners including GSK and AstraZeneca, Europe’s largest supercomputer dedicated to driving drug discovery forward. Cambridge-1 will be an NVIDIA DGX SuperPOD system capable of delivering more than 400 petaflops of AI performance.

Next Up: A Billion-Atom Simulation

The work is just getting started.

Ramanathan of Argonne sees a future where self-driving labs learn what experiments they should launch next, like autonomous vehicles finding their own way forward.

“And I want to scale to the absolute max of screening 10^68 drug compounds, but even covering half that will be significantly harder than what we’ve done so far,” he said.

“For me, simulating a virus with a billion atoms is the next peak, and we know we will get there in 2021,” said Amaro. “Longer term, we need to learn how to use AI even more effectively to deal with coronavirus mutations and other emerging pathogens that could be even worse,” she added.

Hear NVIDIA CEO Jensen Huang describe in the video below how AI in Clara Discovery is advancing drug discovery.

At top: An image of the SARS-CoV-2 virus based on the Amaro lab’s simulation showing 305 million atoms.

A Binding Decision: Startup Uses Microscopy Breakthrough to Speed Creation of COVID-19 Vaccines

In the global race to tame the spread of COVID-19, scientific researchers and pharmaceutical companies first must understand the virus’s protein structure.

Doing so requires building detailed 3D models of protein molecules, which until recently has been an intensely time-consuming task. Structura Biotechnology’s groundbreaking software is helping speed things along.

The GPU-accelerated machine learning algorithms underlying Structura’s software power the image-processing stage of a technology called cryo-electron microscopy, or cryo-EM, a revolutionary breakthrough in biochemistry that was the subject of the 2017 Nobel Prize in Chemistry.

Cryo-EM enables powerful electron microscopes to capture detailed images of biomolecules in their near-native states. These images can then be used to reconstruct a 3D model of the biomolecules.

With cryo-EM providing valuable 2D image data, Structura’s AI-infused software, called cryoSPARC, can quickly analyze the resulting microscopy data to solve the 3D atomic structures of the embedded protein molecules. That, in turn, allows researchers to more rapidly gauge how effective drugs will be in binding to those molecules, significantly speeding up the process of drug discovery.
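The principle is easiest to see in toy form: many 2D projections of the same 3D object jointly constrain its structure. The sketch below assumes known, axis-aligned viewing directions, whereas real cryo-EM software such as cryoSPARC must also infer each particle’s unknown orientation:

```python
import numpy as np

rng = np.random.default_rng(0)
volume = (rng.random((32, 32, 32)) > 0.98).astype(float)  # sparse toy "molecule"

# 2D projections along three axes stand in for microscope images.
projections = [volume.sum(axis=a) for a in range(3)]

# Naive back-projection: smear each image back through the volume and sum.
recon = np.zeros_like(volume)
recon += projections[0][None, :, :]
recon += projections[1][:, None, :]
recon += projections[2][:, :, None]

# Voxels consistent with high intensity in every view score highest.
print("correlation with ground truth:",
      round(np.corrcoef(volume.ravel(), recon.ravel())[0, 1], 3))
```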

Hundreds of labs around the world already use the three-year-old Toronto-based company’s software, with a significant, but not surprising, surge during 2020. In fact, CEO Ali Punjani says scientists have used Structura’s software to visualize COVID-19 proteins in multiple publications.

“Our software helps scientists to understand what their proteins look like and how their proposed therapeutics may bind,” Punjani said. “The more they can see about the structure of the target, the easier it becomes to design or identify a molecule that locks onto that structure and stops it.”

An Intriguing Test Case

The idea for Structura came from a conversation Punjani overheard, during his undergraduate work at the University of Toronto, about trying to solve protein structures using microscopic images. He thought the topic would make an intriguing test case for his developing interest in machine learning research.

Punjani formed his team in 2017, and Structura started building its software, backed by large-scale inference and computer vision algorithms that help to recover a 3D model from 2D image data. The key, he said, is to collect and analyze — with increasing accuracy — a sufficient amount of microscopic data to enable high-quality 3D reconstructions.

“It’s a highly scientific domain with zero tolerance for error,” Punjani said. “Getting it wrong can be a huge waste of time and money.”

Structura’s software is deployed on premises, typically on customers’ hardware, which must be up to the task of processing real-time 3D microscope data. Punjani said labs often run this work on NVIDIA Quadro RTX 6000 GPUs, or something similar, while many larger pharmaceutical companies have invested in clusters of NVIDIA V100 Tensor Core GPUs accompanied by a variety of NVIDIA graphics cards.

Structura does all of its model training and software development on machines running multi-GPU nodes of V100 GPUs. Punjani said his team writes all of its GPU kernels from scratch because of the particular and exotic nature of the problem. The code that runs on Structura’s GPUs is written in CUDA, while cuDNN is used for some high-end computing tasks.
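A minimal sketch of what “kernels from scratch” can look like in practice uses CuPy’s RawKernel to compile hand-written CUDA C from Python; the kernel here is a trivial stand-in, not Structura’s image-processing code:

```python
import cupy as cp

# Compile a hand-written CUDA C kernel at runtime.
saxpy = cp.RawKernel(r'''
extern "C" __global__
void saxpy(const float a, const float* x, const float* y, float* out, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) out[i] = a * x[i] + y[i];  // one thread per element
}
''', 'saxpy')

n = 1 << 20
x = cp.random.rand(n, dtype=cp.float32)
y = cp.random.rand(n, dtype=cp.float32)
out = cp.empty_like(x)

threads = 256
blocks = (n + threads - 1) // threads
saxpy((blocks,), (threads,), (cp.float32(2.0), x, y, out, cp.int32(n)))
assert cp.allclose(out, 2.0 * x + y)
```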

Right Software at the Right Time

Given the value of Structura’s innovations, and the importance of cryo-EM, Punjani isn’t holding back on his ambitions for the company, which recently joined NVIDIA Inception, an accelerator program designed to nurture startups revolutionizing industries with advancements in AI and data sciences.

Punjani says that any research related to living things can now make use of the information from 3D protein structures that cryo-EM offers and, as a result, there’s a lot of industry attention focused on the kind of work Structura’s software enables.

“What we’re building right now is a fundamental building block for cryo-EM to better enable structure-based drug discovery,” he said. “Cryo-EM is set to become ubiquitous throughout all biological research.”

Stay up to date with the latest healthcare news from NVIDIA.

NVIDIA RTX Real-Time Rendering Inspires Vivid Visuals, Captivating Cinematics for Film and Television

Concept art is often considered the bread and butter of filmmaking, and Ryan Church is the concept design supervisor who’s behind the visuals of many of our favorite films.

Church has created concept art for blockbusters such as Avatar, Tomorrowland and Transformers. He’s collaborated closely with George Lucas on the Star Wars prequel and sequel trilogies. Now, he’s working on the popular series The Mandalorian.

All images courtesy of Ryan Church.

When he’s not creating unique vehicles and dazzling worlds for film and television, Church captures new visions and illustrates designs in his personal time. He’s always had a close relationship with cutting-edge technology to produce the highest-quality visuals, even when he’s working at home.

Recently, Church got his hands on an HP Z8 workstation powered by the NVIDIA Quadro RTX 6000. With the performance and speed of RTX behind his concept designs, he can render stunning images of architecture, vehicles and scenery faster than ever.

RTX Delivers More Time for Precision and Creativity

Filmmakers are always trying to figure out the quickest way to bring a concept or idea to life in a fast-paced environment.

Church says that directors nowadays don’t just want to see a drawing of a place or item for the set; they want to see the actual place or item in front of them.

To do so, Church creates his 3D models in Foundry’s Modo and turns to OctaneRender, a GPU render engine that uses NVIDIA RTX to accelerate the rendering performance for his scenes. This allows him to achieve real-time rendering, and with the large memory capacity and performance gains of NVIDIA RTX, Church can create massive worlds freely without worrying about optimizing the geometry of his scenes.

“NVIDIA RTX has allowed me to work without babysitting the geometry all along the way,” said Church. “The friction has been removed from the creation process, allowing me to stay focused on the art.”

Like Church, many concept artists are using technology to create and design complex virtual sets and elaborate 3D mattes for virtual production in real time. The large GPU memory capacities of RTX allow for free flow of art creation while working with multiple creative applications.

And when trying to find the perfect lighting, or tweaking the depth of field or reflections of a scene, the NVIDIA RTX GPU speeds up the workflow to allow for better, quicker designs. Church can do 20 to 30 passes on a scene, letting him iterate on his designs more often so he can get the look and feel he’s aiming for.

“The RTX card in the Z8 allows me to have that complex scene and really dial in much better and faster,” said Church. “With design, lighting, texturing happening all in real time, I can model and move lights around, and see it all happening in the active, updating viewport.”

When Church needs desktop-class performance on the go, he turns to his HP ZBook Studio mobile workstation. Featuring the NVIDIA Studio driver and NVIDIA Quadro RTX GPU, the ZBook Studio has been tested and certified to work with the top creative applications.

As a leading concept designer standing at the intersection between art and technology, Church has inspired countless artists, and his work will continue to inspire for generations to come.

Concept artist Ryan Church pushes boundaries of creativity with NVIDIA RTX.

Learn more about NVIDIA RTX.

GeForce NOW Streaming Comes to iOS Safari

GeForce NOW transforms underpowered or incompatible hardware into high-performance GeForce gaming rigs.

Now, we’re bringing the world of PC gaming to iOS devices through Safari.

GeForce NOW is streaming on iOS Safari, in beta, starting today. That means more than 5 million GeForce NOW members can now access the latest experience by launching Safari from iPhone or iPad and visiting play.geforcenow.com.

Not a member? Get started by signing up for a Founders membership that offers beautifully ray-traced graphics on supported games, extended session lengths and front-of-the-line access. There’s also a free option for those who want to test the waters.

Right now is a great time to join. Founders memberships are available for $4.99 per month, or you can lock in an even better rate with a six-month Founders membership for $24.95.

All new GeForce RTX 30 Series GPUs come bundled with a GeForce NOW Founders membership, available to existing or new members.

Once logged in, you’re only a couple of clicks away from streaming a massive catalog of the latest and most played PC games. Instantly jump into your games, like Assassin’s Creed Valhalla, Destiny 2: Beyond Light, Shadow of the Tomb Raider and more. Founders members can also play titles like Watch Dogs: Legion with RTX ON and NVIDIA DLSS, even on their iPhone or iPad.

GeForce NOW on iOS Safari requires a gamepad — keyboard and mouse-only games aren’t available due to hardware limitations. For the best experience, you’ll want to use a GeForce NOW Recommended gamepad, like the Razer Kishi.

Fortnite, Coming Soon

Alongside the amazing team at Epic Games, we’re working to enable a touch-friendly version of Fortnite, which will delay availability of the game. While the GeForce NOW library is best experienced on mobile with a gamepad, touch is how over 100 million Fortnite gamers have built, battled and danced their way to Victory Royale.

We’re looking forward to delivering a cloud-streaming Fortnite mobile experience powered by GeForce NOW. Members can look for the game on iOS Safari soon.

More Games, More Game Stores

When you fire up your favorite game, you’re playing it instantly — that’s what it means to be Game Ready on GeForce NOW. The experience has been optimized for cloud gaming and includes Game Ready Driver performance improvements.


NVIDIA manages the game updates and patches, letting you play the games you own at 1080p, 60FPS across nearly all of your devices. And when a game supports RTX, Founders members can play with beautifully ray-traced graphics and DLSS 2.0 for improved graphics fidelity.

PC games become playable across a wider range of hardware, removing the barriers across platforms. With the recent release of Among Us, Mac and Chromebook owners can now experience the viral sensation, just like PC gamers. Android owners can play the newest titles even when they’re away from their rig.

More than 750 PC games are now Game Ready on GeForce NOW, with weekly additions that continue to expand the library.

Coming soon, GeForce NOW will connect with GOG, giving members access to even more games in their library. The first GOG.com games that we anticipate supporting are CD PROJEKT RED’s Cyberpunk 2077 and The Witcher 3: Wild Hunt.

The GeForce-Powered Cloud

Founders members have turned RTX ON for beautifully ray-traced graphics, and can run games with even higher fidelity thanks to DLSS. That’s the power to play that only a GeForce-powered cloud gaming service can deliver.

The advantages also extend to improving quality of service. By developing both the hardware and software streaming solutions, we’re able to easily integrate new GeForce technologies that reduce latency on good networks, and improve fidelity on poor networks.

These improvements will continue over time, with some optimizations already available and being felt by GeForce NOW members today.

Chrome Wasn’t Built in a Day

The first WebRTC client running GeForce NOW was the Chromebook beta in August. In the months since, we’ve seen over 10 percent of gameplay come through the Chrome web-based client.


Soon we’ll bring that experience to more Chrome platforms, including Linux, PC, Mac and Android. Stay tuned for updates as we approach a full launch early next year.

Expanding to New Regions

The GeForce NOW Alliance continues to spread cloud gaming across the globe. The alliance is currently made up of LG U+ in Korea, KDDI and SoftBank in Japan, GFN.RU in Russia and Taiwan Mobile, which launched out of beta on Nov. 7.

In the weeks ahead, Zain KSA, the leading 5G telecom operator in Saudi Arabia, will launch its GeForce NOW beta for gamers, expanding cloud gaming into another new region.

More games. More platforms. Legendary GeForce performance. And now streaming on iOS Safari. That’s the power to play that only GeForce NOW can deliver.

Anatomical Adventures in VR: University’s Science Visualizations Tap NVIDIA CloudXR and 5G

5G networks are poised to transform the healthcare industry, starting with how medical students learn.

The Grid Factory, a U.K.-based provider of NVIDIA GPU-accelerated services, is partnering with telecommunications company Vodafone to showcase the potential of 5G technology with a network built at Coventry University.

Operating NVIDIA CloudXR on the private 5G network, student nurses and healthcare professionals can experience lessons and simulations in virtual reality environments.

With NVIDIA CloudXR, users don’t need to be physically tethered to a high-performance computer to experience rich, immersive environments. Instead, the rendering runs on NVIDIA servers located in the cloud or on premises, which deliver the advanced graphics performance needed for wireless virtual, augmented or mixed reality, collectively known as XR.

Streaming high-resolution graphics over 5G promises higher-quality, mobile-immersive VR for more engaging experiences in remote learning. Using CloudXR enables lecturers to teach in VR while students can access the interactive environment through smartphones, tablets, laptops, VR headsets and AR glasses.

All images courtesy of Gamoola/The Original Content Company.

A member of the NVIDIA CloudXR early access program, The Grid Factory is helping organizations realize new opportunities to deliver high-quality graphics over 5G.

“CloudXR makes the experience so natural that lecturers can easily forget about the technology, and instead focus on learning points and content they’re showing,” said Ben Jones, CTO at The Grid Factory.

With Coventry University’s advanced VR technology, users can now take virtual tours through the human body. Medical students can enter the immersive environment and visualize detailed parts of the body, from the bones, muscles and the brain, to the heart, veins, vessels and blood cells.

Previously, lecturers would have to use pre-recorded materials, but this only allowed them to view the body in a linear, 2D format. Working with Vodafone, The Grid Factory installed NVIDIA CloudXR at the university, enabling lecturers to guide their students on interactive explorations of the human body in 3D models.

“With 5G, we can put the VR headset on and stream high-resolution images and videos remotely anywhere in the world,” said Natasha Taylor, associate professor in the School of Nursing, Midwifery and Health at Coventry University. “This experience allows us to take tours of the human body in a way we’ve never been able to before.”

The lessons have turned flat asynchronous learning into cinematic on-demand learning experiences. Students can tune in virtually to study high-resolution, 3D visualizations of the body at any time.

The immersive environments can also show detailed simulations of viral attacks, providing more engaging content that allows students to visualize and retain information faster, according to the faculty staff at Coventry.

And while the lecturers provide the virtual lessons, students can ask questions throughout the presentation.

With 5G, CloudXR can provide lower-latency immersive experiences, and VR environments become more natural for users. It has allowed lecturers to demonstrate more easily and medical students to better visualize parts of the human body.

“The lower the latency, the closer you are to real-life experience,” said Andrea Dona, head of Networks at Vodafone UK. “NVIDIA CloudXR is a really exciting new software platform that allows us to stream high-quality virtual environments directly to the headset, and is now being deployed in a 5G network for the first time commercially.”

More faculty members have expressed interest in the 5G-enabled NVIDIA CloudXR experiences, especially for engineering and automotive use cases, which involve graphics-intensive workloads.

Learn more about NVIDIA CloudXR.

Take the A100 Train: HPC Centers Worldwide Jump Aboard NVIDIA AI Supercomputing Fast Track

Supercomputing centers worldwide are onboarding NVIDIA Ampere GPU architecture to serve the growing demands of heftier AI models for everything from drug discovery to energy research.

Joining this movement, Fujitsu has announced a new exascale system for Japan-based AI Bridging Cloud Infrastructure (ABCI), offering 600 petaflops of performance at the National Institute of Advanced Industrial Science and Technology.

The debut comes as AI model complexity has surged 30,000x in the past five years amid booming use of AI in research. In scientific applications, these hulking models and datasets can be held in GPU memory, helping to minimize batch processing and achieve higher throughput.

To fuel this next research ride, NVIDIA on Monday introduced the NVIDIA A100 80GB GPU with HBM2e technology. It doubles the A100 40GB GPU’s high-bandwidth memory to 80GB and delivers more than 2 terabytes per second of memory bandwidth.

New NVIDIA A100 80GB GPUs let larger models and datasets run in memory at higher bandwidth, enabling more compute and faster results. And because bigger models fit on fewer GPUs, the reduced internode communication can boost AI training performance by up to 1.4x with half the GPUs.
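Rough arithmetic shows why those memory specs matter for memory-bound science codes: just sweeping the full 80GB once at 2TB/s sets a hard lower bound on the time per pass.

```python
# Time for one full sweep of A100 80GB memory at 2 TB/s of bandwidth,
# a lower bound for any memory-bound pass over an in-memory dataset.
capacity_gb = 80
bandwidth_gb_per_s = 2000  # 2 TB/s
print(f"{capacity_gb / bandwidth_gb_per_s * 1000:.0f} ms per sweep")  # -> 40 ms
```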

NVIDIA also introduced new NVIDIA Mellanox 400G InfiniBand architecture, doubling data throughput and offering new in-network computing engines for added acceleration.

Europe Takes Supercomputing Ride

Europe is leaping in. Italian inter-university consortium CINECA announced the Leonardo system, expected to be the world’s fastest AI supercomputer. It will pair 14,000 NVIDIA Ampere architecture GPUs with NVIDIA Mellanox InfiniBand networking for 10 exaflops of AI performance. France’s Atos is set to build it.

Leonardo joins a growing pack of European systems on NVIDIA AI platforms supported by the EuroHPC initiative. Its German neighbor, the Jülich Supercomputing Center, recently launched the first NVIDIA GPU-powered AI exascale system to come online in Europe, delivering the region’s most powerful AI platform. The new Atos-designed Jülich system, dubbed JUWELS, is a 2.5 exaflops AI supercomputer that captured No. 7 on the latest TOP500 list.

Those also getting on board include Luxembourg’s MeluXina supercomputer; IT4Innovations National Supercomputing Center, the most powerful supercomputer in the Czech Republic; and the Vega supercomputer at the Institute of Information Science in Maribor, Slovenia.

Linköping University is planning to build Sweden’s fastest AI supercomputer, dubbed BerzeLiUs, based on the NVIDIA DGX SuperPOD infrastructure. It’s expected to provide 300 petaflops of AI performance for cutting-edge research.

NVIDIA is building Cambridge-1, an 80-node DGX SuperPOD with 400 petaflops of AI performance. It will be the fastest AI supercomputer in the U.K. It’s planned to be used in collaborative research within the country’s AI and healthcare community across academia, industry and startups.

Full Steam Ahead in North America

North America is taking the exascale AI supercomputing ride. NERSC (the U.S. National Energy Research Scientific Computing Center) is adopting NVIDIA AI for projects on Perlmutter, its system packing 6,200 A100 GPUs. NERSC now lays claim to 3.9 exaflops of AI performance.

NVIDIA Selene, a cluster based on the DGX SuperPOD, provides a public reference architecture for large-scale GPU clusters that can be deployed in weeks. The NVIDIA DGX SuperPOD system landed the top spot on the Green500 list of most efficient supercomputers, achieving a new world record in power efficiency of 26.2 gigaflops per watt, and it has set eight new performance milestones for MLPerf inference.

The University of Florida and NVIDIA are building the world’s fastest AI supercomputer in academia, aiming to deliver 700 petaflops of AI performance. The partnership puts UF among leading U.S. AI universities, advances academic research and helps address some of Florida’s most complex challenges.

At Argonne National Laboratory, researchers will use a cluster of 24 NVIDIA DGX A100 systems to scan billions of drugs in the search for treatments for COVID-19.

Los Alamos National Laboratory, Hewlett Packard Enterprise and NVIDIA are teaming up to deliver next-generation technologies to accelerate scientific computing.

All Aboard in APAC

Supercomputers in APAC will also be fueled by NVIDIA Ampere architecture. Korean search engine NAVER and Japanese messaging service LINE are using a DGX SuperPOD built with 140 DGX A100 systems with 700 petaflops of peak AI performance to scale out research and development of natural language processing models and conversational AI services.

The Japan Agency for Marine-Earth Science and Technology, or JAMSTEC, is upgrading its Earth Simulator with NVIDIA A100 GPUs and NVIDIA InfiniBand. The supercomputer is expected to have 624 petaflops of peak AI performance with a maximum theoretical performance of 19.5 petaflops of HPC performance, which today would rank high among the TOP500 supercomputers.

India’s Centre for Development of Advanced Computing, or C-DAC, is commissioning the country’s fastest and largest AI supercomputer, called PARAM Siddhi – AI. Built with 42 DGX A100 systems, it delivers 210 petaflops of AI performance and will address challenges in healthcare, education, energy, cybersecurity, space, automotive and agriculture.

Buckle up. Scientific research worldwide has never enjoyed such a ride.

NVIDIA, Ampere Computing Raise Arm 26x in Supercomputing

In the past 18 months, researchers have witnessed a whopping 25.5x performance boost for Arm-based platforms in high performance computing, thanks to the combined efforts of the Arm and NVIDIA ecosystems.

Many engineers deserve a round of applause for the gains.

  • The Arm Neoverse N1 core gave systems-on-a-chip like Ampere Computing’s Altra an estimated 2.3x improvement over last year’s designs.
  • NVIDIA’s A100 Tensor Core GPUs delivered the company’s largest-ever gains in a single generation.
  • The latest platforms upshifted to more and faster cores, input/output lanes and memory.
  • And application developers tuned their software with many new optimizations.

As a result, NVIDIA’s Arm-based reference design for HPC, with two Ampere Altra SoCs and two A100 GPUs, just delivered 25.5x the muscle of the dual-SoC servers researchers were using in June 2019. Our GPU-accelerated, Arm-based reference platform alone saw a 2.5x performance gain in 12 months.

The results span applications — including GROMACS, LAMMPS, MILC, NAMD and Quantum Espresso — that are key to work like drug discovery, a top priority during the pandemic. These and many other applications ready to run on Arm-based systems are available in containers on NGC, our hub for GPU-accelerated software.

Companies and researchers pushing the limits in areas such as molecular dynamics and quantum chemistry can harness these apps to drive advances not only in basic science but in fields such as healthcare.

Under the Hood with Arm and HPC

The latest reference architecture marries the energy-efficient throughput of Ampere Computing’s Mt. Jade, a 2U-sized server platform, with NVIDIA’s HGX A100 that’s already accelerating several supercomputers around the world. It’s the successor to a design that debuted last year based on the Marvell ThunderX2 and NVIDIA V100 GPUs.

Mt. Jade consists of two Ampere Altra SoCs packing 80 cores each based on the Arm Neoverse N1 core, all running at up to 3 GHz. They provide a whopping 192 PCI Express Gen4 lanes and up to 8TB of memory to feed two A100 GPUs.
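Those lane counts translate into real feeding capacity for the GPUs. As a rough estimate (PCIe Gen4 delivers close to 2GB/s of usable bandwidth per lane, per direction):

```python
# Approximate aggregate host I/O bandwidth of the Mt. Jade platform.
lanes = 192
gb_per_s_per_lane = 1.97  # usable PCIe Gen4 bandwidth, per direction (approx.)
print(f"~{lanes * gb_per_s_per_lane:.0f} GB/s across GPUs, NICs and storage")
```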

The Mt. Jade server platform supports 192 PCIe Gen4 lanes.

The combination creates a compelling node for next-generation supercomputers. Ampere Computing has already attracted support from nine original equipment and design manufacturers and systems integrators, including Gigabyte, Lenovo and Wiwynn.

A Rising Arm HPC Ecosystem

In another sign of an expanding ecosystem, the Arm HPC User Group hosted a virtual event ahead of SC20 with more than three dozen talks from organizations including AWS, Hewlett Packard Enterprise, the Juelich Supercomputing Center, RIKEN in Japan, and Oak Ridge and Sandia National Labs in the U.S. Most of the talks are available on its YouTube channel.

In June, Arm made its biggest splash in supercomputing to date. That’s when the Fugaku system in Japan debuted at No. 1 on the TOP500 list of the world’s fastest supercomputers with a stunning 415.5 petaflops using the Arm-based A64FX CPU from Fujitsu.

At the time it was one of four Arm-powered supercomputers on the list, and the first using Arm’s Scalable Vector Extensions, technology embedded in Arm’s next-generation Neoverse designs that NVIDIA will support in its software.

Meanwhile, AWS is already running HPC jobs like genomics, financial risk modeling and computational fluid dynamics in the cloud on its Arm-based Graviton2 processors.

NVIDIA Accelerates Arm in HPC

Arm’s growing HPC presence is part of a broad ecosystem of 13 million developers working on everything from smartphones to supercomputers. It’s a community NVIDIA aims to expand with our deal to acquire Arm and create the world’s premier computing company for the age of AI.

We’re extending the ecosystem with Arm support built into our NVIDIA AI, HPC, networking and graphics software. At last year’s supercomputing event, NVIDIA CEO Jensen Huang announced our work accelerating Arm in HPC in addition to our ongoing support for IBM POWER and x86 architectures.

NVIDIA has expanded its support for the Arm ecosystem.

Since then, we’ve announced our BlueField-2 DPUs that use Arm IP to accelerate and secure networking and storage jobs for cloud, embedded and enterprise applications. And for more than a decade, we’ve been an avid user of Arm designs inside products such as our Jetson Nano modules for robotics and other embedded systems.

We’re excited to be part of dramatic performance gains for Arm in HPC. It’s the latest page in the story of an open, thriving Arm ecosystem that keeps getting better.

Learn more in the NVIDIA SC20 Special Address.

Changing Times: How the World’s TOP500 Supercomputers Don’t Just Have to Be Fast, But Smart

The world’s fastest supercomputers aren’t just faster than ever. They’re smarter and support a greater variety of workloads, too.

Nearly 70 percent of the machines on the latest TOP500 list of the world’s fastest supercomputers, released today at SC20, are powered by NVIDIA technology, including eight of the top 10.

In addition, four of the finalists for the Gordon Bell Prize, supercomputing’s most prestigious award — to be presented this week at SC20 — use AI to drive their discoveries.

The common thread: our end-to-end HGX AI supercomputing platform, which accelerates scientific computing, data analytics and AI workloads. It’s a story that begins with a great chip and extremely fast, smart networking, but ultimately is all about NVIDIA’s globally adopted data-center-scale platform for doing great science.

The shift to incorporating AI into HPC, and a platform that extends beyond traditional supercomputing centers, represents a significant change in a field that, since Seymour Cray’s CDC 6600 was launched in 1964, has focused on harnessing ever larger, more powerful machines for compute-intensive simulation and modeling.

The latest TOP500 list is about more than high-performance Linpack results:

  • Speed records: Measured by the traditional benchmark of supercomputing performance — the speed at which a system performs operations in the double-precision floating-point format called FP64 — NVIDIA technologies accelerate the world’s fastest clusters, powering eight of the top 10 machines. This includes the No. 5 system — NVIDIA’s own Selene supercomputer, the world’s most powerful commercial system — as well as new additions like JUWELS (Forschungszentrum Jülich) at No. 7 and Dammam-7 (Saudi Aramco) at No. 10.
  • “Smarts” records: When measured by HPL-AI, the mixed-precision benchmark for AI performance, NVIDIA-powered machines captured the top spots on the list, with Oak Ridge National Lab’s Summit supercomputer at 0.55 exaflops and NVIDIA Selene at 0.25 exaflops. (A short sketch of the mixed-precision idea follows this list.)
  • Green records: The NVIDIA DGX SuperPOD system captured the top spot on the Green500 list of most efficient supercomputers, achieving a new world record in power efficiency of 26.2 gigaflops per watt. Overall, NVIDIA-powered machines captured 25 of the top 30 spots on the list.
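The mixed-precision trick behind HPL-AI is worth unpacking: factorize the system in fast low precision, then recover full FP64 accuracy with a few steps of iterative refinement. Here’s a toy sketch, with float32 standing in for Tensor Core FP16, and re-solving standing in for reusing cached LU factors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned system
b = rng.standard_normal(n)

# "Fast" low-precision solve (float32 plays the role of FP16).
A_low = A.astype(np.float32)
x = np.linalg.solve(A_low, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residual in FP64, correction in low precision.
for _ in range(5):
    r = b - A @ x
    x += np.linalg.solve(A_low, r.astype(np.float32)).astype(np.float64)

print("FP64 residual norm:", np.linalg.norm(b - A @ x))
```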

The Era of AI Supercomputing Is in High Gear

Maybe the most impressive achievement: We’ve surpassed exascale AI computing well ahead of schedule.

In October, Italy’s CINECA supercomputing center unveiled plans to build Leonardo, the world’s most powerful AI supercomputer, with an expected 10 exaflops of AI performance. It’s joined by a wave of new EuroHPC AI systems in the Czech Republic, Luxembourg and Slovenia. More are coming, not only in Europe but also across Asia and North America.

That’s because modern AI harnesses the incredible parallel processing power of NVIDIA GPUs, NVIDIA CUDA-X libraries and NVIDIA Mellanox InfiniBand — the world’s only smart, fully accelerated in-network computing platform — to pour vast quantities of data into advanced neural networks, creating sophisticated models of the world around us. This lets scientists tackle much more ambitious projects than would otherwise be possible.

Take the example of the team from Lawrence Berkeley National Laboratory’s Computational Research Division, one of this year’s Gordon Bell Prize nominees. Thanks to AI, the team was able to increase the scale of their molecular dynamics simulation by at least 100x compared to the largest system simulated by previous nominees.

It’s About Advancing Science

Of course, it’s not just how fast your system is, but what you do with it in the real world that counts.

That’s why you’ll find the new breed of AI-powered supercomputers being thrown into the front line of the fight against COVID.

Three of the four nominees for a special Gordon Bell award focused on tackling the COVID-19 pandemic rely on NVIDIA AI.

On Lawrence Livermore National Laboratory’s Sierra supercomputer — No. 3 on the TOP500 list — a team trained an AI model for identifying new drug candidates on 1.6 billion compounds in just 23 minutes.

On Oak Ridge’s Summit supercomputer — No. 2 on the TOP500 list — another team harnessed 27,612 NVIDIA GPUs to test 19,028 potential drug compounds on two key SARS-CoV-2 protein structures every second.

Another team used Summit to create an AI-driven workflow to model how the SARS-CoV-2 spike protein, the main viral infection machinery, attacks the human ACE2 receptor.

Thanks to the growing ubiquity of the scalable NVIDIA HGX AI supercomputing platform — which includes everything from processors to networking and software — scientists can run their workloads in the hyperscale data centers of cloud computing companies, as well as in supercomputers.

It’s a unified platform, enabling the fusion of high-performance computing, data analytics and AI workloads. With 2.3 million developers, support for more than 1,800 accelerated apps and all major AI frameworks, and popular data analytics frameworks including Dask and Spark, the platform lets scientists and researchers be instantly productive on GPU-powered x86, Arm and POWER systems.

In addition, the NVIDIA NGC catalog offers performance-optimized containers for the latest versions of HPC and AI applications. So scientists and researchers can deploy quickly and stay focused on advancing their science.

Learn more in the live NVIDIA SC20 Special Address at 3 p.m. PT today.

Fast and Furio: Artist Accelerates Rendering on the Go with NVIDIA RTX Laptop

No traveling for work? No problem — at least not for character concept artist Furio Tedeschi.

Tedeschi has worked on films like Transformers and Bumblebee, as well as games like Mass Effect: Andromeda from BioWare. Currently, he’s working on the upcoming Cyberpunk 2077 video game.

No stranger to working while traveling, Tedeschi often goes from one place to another for his projects. So he’s always been familiar with using a workstation that can keep up with his mobility.

But when the coronavirus pandemic hit, the travel stopped. Tedeschi had a workstation setup at home, so he adjusted quickly to remote work. However, he didn’t want to be chained to his desk, and he wanted a mobile device that could handle the complexity of his designs.

With the Lenovo ThinkPad P53 accelerated by the NVIDIA RTX 5000 GPU, Tedeschi gets the speed and performance he needs to handle massive scenes and render graphics in real time, no matter where he works from.

All images courtesy of Furio Tedeschi.

RTX-tra Boost for Remote Rendering Work

One of the biggest challenges Tedeschi faced was migrating projects, which often contained heavy scenes, from his desktop workstation to a mobile setup.

He used to shrink file sizes and renders because his previous laptop couldn’t run complex graphics workflows. But with the RTX laptop, Tedeschi has enough power to quickly migrate his scenes without slowing anything down. He no longer has to reduce scene sizes, because the RTX 5000 easily handles them at full quality across the creative applications he works in.

“The RTX laptop has made working on the move very comfortable for me, as it packs enough power to handle even some of the heavier scenes I use for concepts,” said Tedeschi. “Now I feel 100 percent comfortable knowing I can work on renders when I go back to traveling.”

With the RTX-powered Lenovo ThinkPad P53, he gets speed and performance that’s similar to his workstation at home. The laptop allows Tedeschi to see the models and scenes from every single view, all in real time.

When it comes to his artistic style, Tedeschi likes to focus on form and shapes, and he uses applications like ZBrush and KeyShot to create his designs. After using the RTX-powered mobile workstation, Tedeschi experienced massive rendering improvements with both applications.

With KeyShot specifically, the GPU rendering is “leaps faster and gives instant feedback through the render viewpoint,” according to Tedeschi.

The faster workflows, combined with the ability to see his concepts running in real time, allow Tedeschi to choose better angles and lighting and get closer to bringing his ideas to life in a final image. The result is reduced prep time, faster production and less stress.

With KeyShot, ZBrush and Photoshop all running at the same time, the laptop’s battery lasts up to two hours without being plugged in, so Tedeschi can easily get into the creative flow and work on designs from anywhere in his house without being distracted.

Learn more about Furio Tedeschi and RTX-powered mobile workstations.

Hyundai Motor Group to Integrate Software-Defined AI Infotainment Powered by NVIDIA DRIVE Across Entire Fleet

From its entry-level vehicles to premium ones, Hyundai Motor Group will deliver the latest in AI-powered convenience and safety to every new customer.

The leading global auto group, which produces more than 7 million vehicles a year, announced today that every Hyundai, Kia and Genesis model will include infotainment systems powered by NVIDIA DRIVE, with production starting in 2022. By making high-performance, energy-efficient compute a standard feature, every vehicle will include a rich, software-defined AI user experience that’s always at the cutting edge.

Hyundai Motor Group has been working with NVIDIA since 2015, developing a state-of-the-art in-vehicle infotainment system on NVIDIA DRIVE that shipped in the Genesis GV80 and G80 models this year. The companies have also been collaborating on an advanced digital cockpit for release in late 2021.

The Genesis G80

Now, the automaker is standardizing AI for all its vehicles by extending NVIDIA DRIVE throughout its entire fleet — marking its commitment to developing software-defined and constantly updateable vehicles for more intelligent transportation.

A Smarter Co-Pilot

AI and accelerated computing have opened the door for a vast array of new functionalities in next-generation vehicles.

Specifically, these software-defined AI cockpit features can be realized with a centralized, high-performance computing architecture. Traditionally, vehicle infotainment requires a collection of electronic control units and switches to perform basic functions, such as changing the radio station or adjusting temperature.

Consolidating these components with the NVIDIA DRIVE software-defined AI platform simplifies the architecture while creating more compute headroom to add new features. With NVIDIA DRIVE at the core, automakers such as Hyundai can orchestrate crucial safety and convenience features, building vehicles that become smarter over time.
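In schematic terms (not NVIDIA DRIVE’s actual API), the software-defined idea is that functions which once lived on separate ECUs become services behind one compute platform, so a new feature is a software registration and an over-the-air update rather than new hardware:

```python
from typing import Callable, Dict

class CentralCockpit:
    """Toy stand-in for a centralized, software-defined cockpit computer."""

    def __init__(self) -> None:
        self.services: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, handler: Callable[..., str]) -> None:
        # An over-the-air update can add or replace entries at any time.
        self.services[name] = handler

    def handle(self, name: str, **kwargs) -> str:
        return self.services[name](**kwargs)

cockpit = CentralCockpit()
cockpit.register("radio", lambda station: f"tuned to {station}")
cockpit.register("climate", lambda celsius: f"cabin set to {celsius} C")
# A later update adds a brand-new safety feature with no new hardware:
cockpit.register("driver_monitor",
                 lambda eyes_on_road: "ok" if eyes_on_road else "alert driver")

print(cockpit.handle("climate", celsius=21))
```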

The NVIDIA DRIVE platform

These capabilities include driver or occupant monitoring to ensure eyes stay on the road or exiting passengers avoid oncoming traffic. They can elevate convenience in the car by clearly providing information on the vehicle’s surroundings or recommending faster routes and nearby restaurants.

Delivering the Future to Every Fleet

Hyundai is making this new area of in-vehicle AI a reality for all of its customers.

The automaker will leverage the high-performance compute of NVIDIA DRIVE to roll out its new connected car operating system to every new Hyundai, Kia and Genesis vehicle. The software platform consolidates the massive amounts of data generated by the car to deliver personalized convenience and safety features for the vehicle’s occupants.

By running on NVIDIA DRIVE, the in-vehicle infotainment system can process myriad streams of vehicle data in parallel to deliver features instantaneously. It can provide these services regardless of whether the vehicle is connected to the internet, customizing itself to each user safely and securely for the ultimate level of convenience.

With this new centralized cockpit architecture, Hyundai Motor Group will bring AI to every new customer, offering software upgradeable applications for the entire life of its upcoming fleet.
