Chalk and Awe: Studio Crafts Creative Battle Between Stick Figures with Real-Time Rendering

It’s time to bring krisp graphics to stick figure drawings.

Creative studio SoKrispyMedia, started by content creators Sam Wickert and Eric Leigh, develops short videos blended with high-quality visual effects. Since publishing one of their early works, Chalk Warfare 1, on YouTube eight years ago, the team has regularly put out short films that showcase engaging visual effects and graphics — including Stick Figure Battle, which has nearly 25 million views.

Now, the Stick Figure saga continues with SoKrispyMedia’s latest, Stick Figure War, which relies on real-time rendering for photorealistic results, as well as improved creative workflows.

With real-time rendering, SoKrispyMedia worked more efficiently: the team could see final results quickly and had more time to iterate, ensuring the visuals looked exactly how they wanted — from stick figures piloting paper airplanes to robots fighting skeletons in textbooks.

The team enhanced their virtual production process by using Unreal Engine and a Dell Precision 7750 mobile workstation featuring an NVIDIA Quadro RTX 5000 GPU. Adding high-quality cameras and DaVinci Resolve software from Blackmagic Design to the mix, SoKrispyMedia produced a short film of higher quality than they ever thought possible.

Real-Time Rendering Sticks Out in Visual Effects

Integrating real-time rendering into their pipelines has allowed SoKrispyMedia to work faster and iterate more quickly. They no longer need to wait hundreds of hours for renders just to preview a shot — everything can be produced in real time.

“Looking back at our older videos and the technology we used, it feels like we were writing in pencil, and as the technology evolves, we’re adding more and more colors to our palette,” said Micah Malinics, producer at SoKrispyMedia.

For Stick Figure War, a lot of the elements in the video were drawn by hand, and then scanned and converted into 2D or 3D graphics in Unreal Engine. The creators also developed a stylized filter that allowed them to make certain elements look like cross-hatched drawings.

SoKrispyMedia used Unreal Engine to do real-time rendering for almost the entire film, which enabled them to explore more creative ideas and let their imaginations run wild without worrying about increased render times.

Pushing Creativity Behind the Scenes

While NVIDIA RTX and Unreal Engine have broadened the reach of real-time rendering, Blackmagic Design has made high-quality cameras more accessible so content creators can produce cinematic-quality work at a fraction of the cost.

For Stick Figure War, SoKrispyMedia used a Blackmagic URSA Mini G2 for production, a Pocket Cinema Camera for pick-up shots and a Micro Studio Camera 4K for overhead VFX shots. With these cameras, the team could shoot at 4K resolution and crop footage in post-production without losing resolution.

Editing workflows were accelerated as Blackmagic’s DaVinci Resolve utilized NVIDIA GPUs to dramatically speed up playback and performance.

“Five to 10 years ago, making this video would’ve been astronomically difficult. Now we’re able to simply plug the Blackmagic camera directly into Unreal and see final results in front of our eyes,” said Sam Wickert, co-founder of SoKrispyMedia. “Using the Resolve Live feature for interactive and collaborative color grading and editing is just so fast, easy and efficient. We’re able to bring so much more to life on screen than we ever thought possible.”

The SoKrispyMedia team was provided with a Dell Precision 7750 mobile workstation with an RTX 5000 GPU inside, allowing the content creators to work on the go and preview real-time renderings on set. And the Dell workstation’s display provided advanced color accuracy, from working in DaVinci Resolve to rendering previews and final images.

Learn more about the making of SoKrispyMedia’s latest video, Stick Figure War.

How to Avoid Speed Bumps and Stay in the AI Fast Lane with Hybrid Cloud Infrastructure

Cloud or on premises? That’s the question many organizations ask when building AI infrastructure.

Cloud computing can help developers get a fast start with minimal cost. It’s great for early experimentation and supporting temporary needs.

As businesses iterate on their AI models, however, they can become increasingly complex, consume more compute cycles and involve exponentially larger datasets. The costs of data gravity can escalate, with more time and money spent pushing large datasets from where they’re generated to where compute resources reside.

This AI development “speed bump” is often an inflection point where organizations realize there are opex benefits with on-premises or colocated infrastructure. Its fixed costs can support rapid iteration at the lowest “cost per training run,” complementing their cloud usage.

Conversely, for organizations whose datasets are created in the cloud and live there, procuring compute resources adjacent to that data makes sense. Whether on-prem or in the cloud, minimizing data travel — by keeping large volumes as close to compute resources as possible — helps minimize the impact of data gravity on operating costs.

‘Own the Base, Rent the Spike’ 

Businesses that ultimately embrace hybrid cloud infrastructure trace a familiar trajectory.

One customer developing an image recognition application immediately benefited from a fast, effortless start in the cloud.

As their database grew to millions of images, costs rose and processing slowed, causing their data scientists to become more cautious in refining their models.

At this tipping point — when a fixed cost infrastructure was justified — they shifted training workloads to an on-prem NVIDIA DGX system. This enabled an immediate return to rapid, creative experimentation, allowing the business to build on the great start enabled by the cloud.

The saying “own the base, rent the spike” captures this situation. Enterprise IT provisions on-prem DGX infrastructure to support the steady-state volume of AI workloads and retains the ability to burst to the cloud whenever extra capacity is needed.

It’s this hybrid cloud approach that can secure the continuous availability of compute resources for developers while ensuring the lowest cost per training run.
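
To make the “own the base, rent the spike” math concrete, here is a minimal back-of-the-envelope sketch in Python. All prices and the amortization period are illustrative assumptions rather than actual cloud or hardware pricing; the point is simply that owned infrastructure gets cheaper per hour as its utilization rises, while cloud pricing stays flat.

```python
# Hypothetical "own the base, rent the spike" cost comparison.
# Every figure below is an assumption for illustration, not vendor pricing.

CLOUD_RATE = 28.0                    # $/hour for a comparable multi-GPU cloud instance (assumed)
ONPREM_CAPEX = 300_000.0             # purchase price of an on-prem system (assumed)
ONPREM_OPEX = 5.0                    # $/hour for power, cooling and admin (assumed)
AMORTIZATION_HOURS = 3 * 365 * 24    # amortize the hardware over three years

def onprem_cost_per_hour(utilization: float) -> float:
    """Effective hourly cost of owned hardware at a given utilization (0 < utilization <= 1)."""
    busy_hours = AMORTIZATION_HOURS * utilization
    return ONPREM_CAPEX / busy_hours + ONPREM_OPEX

for u in (0.1, 0.3, 0.5, 0.9):
    print(f"utilization {u:.0%}: on-prem ${onprem_cost_per_hour(u):,.2f}/h vs. cloud ${CLOUD_RATE:,.2f}/h")
```

With these made-up numbers, the owned system overtakes the cloud rate at roughly half utilization, which is the intuition behind provisioning on-prem capacity for the steady-state base and bursting to the cloud for spikes.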

Delivering the AI Hybrid Cloud with DGX and Google Cloud’s Anthos on Bare Metal

To help businesses embrace hybrid cloud infrastructure, NVIDIA has introduced support for Google Cloud’s Anthos on bare metal for its DGX A100 systems.

For customers using Kubernetes to straddle cloud GPU compute instances and on-prem DGX infrastructure, Anthos on bare metal enables a consistent development and operational experience across deployments, while reducing expensive overhead and improving developer productivity.

This presents several benefits to enterprises. While many have implemented GPU-accelerated AI in their data centers, much of the world retains some legacy x86 compute infrastructure. With Anthos on bare metal, IT can easily add on-prem DGX systems to their infrastructure to tackle AI workloads and manage it the same familiar way, all without the need for a hypervisor layer.

Without the need for a virtual machine, Anthos on bare metal — now generally available — manages application deployment and health across existing environments for more efficient operations. Anthos on bare metal can also manage application containers on a wide variety of performance- and GPU-optimized hardware types, and it allows applications direct access to the hardware.

“Anthos on bare metal provides customers with more choice over how and where they run applications and workloads,” said Rayn Veerubhotla, Director of Partner Engineering at Google Cloud. “NVIDIA’s support for Anthos on bare metal means customers can seamlessly deploy NVIDIA’s GPU Device Plugin directly on their hardware, enabling increased performance and flexibility to balance ML workloads across hybrid environments.”

Additionally, teams can access their favorite NVIDIA NGC containers, Helm charts and AI models from anywhere.
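
As an illustration of what that consistent experience can look like, the sketch below uses the Kubernetes Python client to submit the same GPU training pod to either an on-prem Anthos-on-bare-metal cluster or a cloud cluster simply by switching kube contexts. The context names, namespace, container image tag and training command are placeholders; the relevant detail is that the NVIDIA device plugin exposes GPUs as the nvidia.com/gpu resource on both sides.

```python
# Sketch: launch the same GPU job on-prem or in the cloud by changing the kube context.
# Context names, image tag and command are hypothetical placeholders.
from kubernetes import client, config

def launch_training(context_name: str) -> None:
    # Point the client at the chosen cluster, e.g. "onprem-dgx" or "gcp-gpu-pool".
    config.load_kube_config(context=context_name)
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="ngc-training"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="train",
                image="nvcr.io/nvidia/pytorch:20.12-py3",   # an NGC container image (placeholder tag)
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}),         # one GPU via the NVIDIA device plugin
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

launch_training("onprem-dgx")   # or launch_training("gcp-gpu-pool") to burst to the cloud
```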

With this combination, enterprises can enjoy the rapid start and elasticity of resources offered on Google Cloud, as well as the secure performance of dedicated on-prem DGX infrastructure.

Learn more about Google Cloud’s Anthos.

Learn more about NVIDIA DGX A100.

MONAI Imaging Framework Fast-Tracked to Production to Accelerate AI in Healthcare

MONAI — the Medical Open Network for AI, a domain-optimized, open-source framework for healthcare — is now ready for production with the upcoming release of the NVIDIA Clara application framework for AI-powered healthcare and life sciences.

Introduced in April and already adopted by leading healthcare research institutions, MONAI is a PyTorch-based framework that enables the development of AI for medical imaging with industry-specific data handling, high-performance training workflows and reproducible reference implementations of state-of-the-art approaches.
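
Because MONAI is built on PyTorch, a training step looks like ordinary PyTorch code with domain-specific networks and losses swapped in. The snippet below is a minimal, illustrative sketch on random data, not a reference workflow; constructor argument names can vary slightly between MONAI releases.

```python
# Minimal MONAI training step on random data -- an illustrative sketch only.
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=3, in_channels=1, out_channels=2,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

image = torch.rand(1, 1, 64, 64, 64)               # stand-in for a CT or MR volume
label = torch.randint(0, 2, (1, 1, 64, 64, 64))    # stand-in segmentation mask

optimizer.zero_grad()
loss = loss_fn(model(image), label)
loss.backward()
optimizer.step()
print(f"Dice loss: {loss.item():.4f}")
```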

As part of the updated Clara offering, MONAI will come with over 20 pre-trained models, including ones recently developed for COVID-19, as well as the latest training optimizations on NVIDIA DGX A100 GPUs that provide up to a sixfold acceleration in training turnaround time.

“MONAI is becoming the PyTorch of healthcare, paving the way for closer collaboration between data scientists and clinicians,” said Dr. Jayashree Kalpathy-Cramer, director of the QTIM lab at the Athinoula A. Martinos Center for Biomedical Imaging at MGH. “Global adoption of MONAI is fostering collaboration across the globe facilitated by federated learning.”

Adoption of MONAI by the healthcare ecosystem has been tremendous. DKFZ, King’s College London, Mass General, Stanford and Vanderbilt are among those to adopt the AI framework for imaging. MONAI is being used in everything from industry-leading imaging competitions to the first boot camp focused on the framework, held in September, which drew over 550 registrants from 40 countries, including undergraduate university students.

“MONAI is quickly becoming the go-to deep learning framework for healthcare. Getting from research to production is critical for the integration of AI applications into clinical care,” said Dr. Bennett Landman of Vanderbilt University. “NVIDIA’s commitment to community-driven science and allowing the academic community to contribute to a framework that is production-ready will allow for further innovation to build enterprise-ready features.”

New Features

NVIDIA Clara brings the latest breakthroughs in AI-assisted annotation, federated learning and production deployment to the MONAI community.

The latest version introduces a game-changer for AI-assisted annotation: a new model called DeepGrow 3D that lets radiologists label complex 3D CT data in one-tenth the clicks. Instead of the traditional, time-consuming method of segmenting an organ or lesion image by image or slice by slice — which can take up to 250 clicks for a large organ like the liver — users can segment with far fewer clicks.

Integrated with Fovia Ai’s F.A.S.T. AI Annotation software, NVIDIA Clara’s AI-assisted annotation tools and the new DeepGrow 3D feature can be used for labeling training data as well as assisting radiologists when reading. Fovia offers the XStream HDVR SDK suite for reviewing DICOM images, which is integrated into industry-leading PACS viewers.

AI-assisted annotation is the key to unlocking rich radiology datasets and was recently used to label the public COVID-19 CT dataset published by The Cancer Imaging Archive at the U.S. National Institutes of Health. This labeled dataset was then used in the MICCAI-endorsed COVID-19 Lung CT Lesion Segmentation Challenge.

Clara Federated Learning made possible the recent research collaboration of 20 hospitals around the world to develop a generalized AI model for COVID-19 patients. The EXAM model predicts oxygen requirements in COVID-19 patients, is available on the NGC software registry, and is being evaluated for clinical validation at Mount Sinai Health System in New York, Diagnósticos da America SA in Brazil, NIHR Cambridge Biomedical Research Centre in the U.K. and the NIH.
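
Conceptually, federated learning trains a shared model by letting each hospital compute updates on its own private data and exchange only model weights, never patient records. The sketch below shows plain federated averaging (FedAvg) in PyTorch to illustrate the idea; it is a toy example, not the Clara Federated Learning implementation.

```python
# Toy federated averaging (FedAvg) sketch -- illustrates the concept only.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, target, lr=1e-3):
    """One site trains a copy of the global model on its private data and returns the weights."""
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    opt.zero_grad()
    nn.functional.mse_loss(local(data), target).backward()
    opt.step()
    return local.state_dict()

def federated_average(state_dicts):
    """Average the weight tensors contributed by all sites."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(10, 1)
# Each site's data never leaves the site; only weights are exchanged.
sites = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(3)]
for round_idx in range(5):                      # communication rounds
    updates = [local_update(global_model, x, y) for x, y in sites]
    global_model.load_state_dict(federated_average(updates))
```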

“The MONAI software framework provides key components for training and evaluating imaging-based deep learning models, and its open-source approach is fostering a growing community that is contributing exciting advances, such as federated learning,” said Dr. Daniel Rubin, professor of biomedical data science, radiology and medicine at Stanford University.

NVIDIA is additionally expanding its release of NVIDIA Clara to digital pathology applications, where the sheer sizes of images would choke off-the-shelf open-source AI tools. Clara for pathology early access contains reference pipelines for both training and deployment of AI applications.

“Healthcare data interoperability, model deployment and clinical pathway integration are an increasingly complex and intertwined topic, with significant field-specific expertise,” said Jorge Cardoso, CTO of the London Medical Imaging and AI Centre for Value-based Healthcare. “Project MONAI, jointly with the rest of the NVIDIA Clara ecosystem, will help deliver improvements to patient care and optimize hospital operations.”

Learn more about NVIDIA Clara Train 4.0 and subscribe to NVIDIA healthcare news.

NVIDIA Launches Inception Alliance with GE Healthcare and Nuance to Accelerate Medical Imaging AI Startups

Just over 125 years ago, on Nov. 8, 1895, the world’s first X-ray image was captured. It was a breakthrough that set the stage for the modern medical imaging industry.

Over the decades, an entire ecosystem of medical imaging hardware and software came to be, with AI startups now playing a key role within it.

Today we’re announcing the NVIDIA Inception Alliance for Healthcare, an initiative where medical AI startups have new opportunities to chart innovations and accelerate their success with the help of NVIDIA and its healthcare industry partners.

Premier members of NVIDIA Inception, our accelerator program for 6,500+ startups across 90 countries in AI and data sciences, can now join the GE Healthcare Edison Developer Program. Through integration with the GE Healthcare Edison Platform, these startups gain access to GE Healthcare’s global network to scale clinical and commercial activities across its expansive installed base: 4 million imaging, mobile diagnostic and monitoring units in 160 countries, representing 230 million exams and associated data.

Premier members with FDA clearance also can join the Nuance AI Marketplace for Diagnostic Imaging. The Nuance AI Marketplace brings AI into the radiology workflow by connecting developers directly with radiology subscribers. It offers AI developers a single API to connect their AI solutions to radiologists across 8,000 healthcare facilities that use the Nuance PowerShare Network. And it gives radiology subscribers a one-stop shop to review, try, validate and buy AI models — bridging the technology divide to make AI useful, usable and used.

A Prescription for Growth

NVIDIA Inception, which recently added its 1,000th healthcare AI startup, offers its members a variety of ongoing benefits, including go-to-market support, technology assistance and access to NVIDIA expertise — all tailored to a business’s maturity stage. Startups get access to training through the NVIDIA Deep Learning Institute, preferred pricing on hardware through our global network of distributors, invitations to exclusive networking events and more.

To nurture the growth of AI startups in healthcare, and ultimately the entire medical ecosystem, NVIDIA is working with healthcare giants to offer Inception members an accelerated go-to-market path.

The NVIDIA Inception Alliance for Healthcare will forge new ways to grow through targeted networking, AI training, early access to technology, pitch competitions and technology integration. Members will receive customized training and support to develop, deploy and integrate NVIDIA GPU-accelerated apps everywhere within the medical imaging ecosystem.

Select startup members will have direct access to engage joint customers, in addition to marketing promotion of their results. The initiative will kick off with a pitch competition for leading AI startups in medical imaging and related supporting fields.

“Startups are on the forefront of innovation and the GE Healthcare Edison Developer Program provides them access to the world’s largest installed base of medical devices and customers,” said Karley Yoder, vice president and general manager of Artificial Intelligence at GE Healthcare. “Bringing together the world-class capabilities from industry-leading partners creates a fast-track to accelerate innovation in a connected ecosystem that will help improve the quality of care, lower healthcare costs and deliver better outcomes for patients.”

“With Nuance’s deep understanding of radiologists’ needs and workflow, we are uniquely positioned to help them transform healthcare by harnessing AI. The Nuance AI Marketplace gives radiologists the ability to easily purchase, validate and use AI models within solutions they use every day, so they can work smarter and more efficiently,” said Karen Holzberger, senior vice president and general manager of the Diagnostic Division at Nuance. “The AI models help radiologists focus their time and expertise on the right case at the right time, alleviate many repetitive, mundane tasks and, ultimately, improve patient care — and save more lives. Connecting NVIDIA Inception startup members to the Nuance AI Marketplace is a natural fit — and creates a connection for startups that benefits the entire industry.”

Learn more at the NVIDIA RSNA 2020 Special Address, which is open to the public on Tuesday, Dec. 1, at 6 p.m. CT. Kimberly Powell, NVIDIA’s vice president of healthcare, will discuss how we’re working with medical imaging companies, radiologists, data scientists, researchers and medical device companies to bring workflow acceleration, AI models and deployment platforms to the medical imaging ecosystem.

Subscribe to NVIDIA healthcare news.

Supercomputing Chops: China’s Tsinghua Takes Top Flops in SC20 Student Cluster Battle

Props to team top flops.

Virtual this year, the SC20 Student Cluster Competition was still all about teams vying for top supercomputing performance in the annual battle for HPC bragging rights.

That honor went to Beijing’s Tsinghua University, whose six-member undergraduate team clocked in at 300 teraflops of processing performance.

A one teraflop computer can process one trillion floating-point operations per second.
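
For a rough sense of scale, achieved floating-point throughput can be estimated by timing a large matrix multiplication and dividing the operation count by the elapsed time, as in the small Python sketch below. A typical laptop CPU lands well under a teraflop this way; the student clusters’ scores come from GPU-accelerated nodes running much larger, tuned benchmarks.

```python
# Rough FLOPS estimate from a single matrix multiplication (illustrative only).
import time
import numpy as np

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3   # one multiply and one add per inner-product term
print(f"~{flops / elapsed / 1e12:.3f} TFLOPS achieved")
```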

The Virtual Student Cluster Competition was this year’s battleground for 19 teams. Competitors consisted of either high school or undergraduate students. Teams were made up of six members, an adviser and vendor partners.

Real-World Scenarios

In the 72-hour competition, student teams designed and built virtual clusters running NVIDIA GPUs in the Microsoft Azure cloud. Students completed a set of benchmarks and real-world scientific workloads.

Teams ran the GROMACS molecular dynamics application, tackling COVID-19 research. They also ran the CESM application to work on optimizing climate modeling code. The “reproducibility challenge” called on the teams to replicate results from an SC19 research paper.

Among other hurdles, teams were tossed a surprise Exascale Computing Project mini-application, miniVite, to test their chops at compiling, running and optimizing.

A leaderboard tracked the performance of each team’s submissions, the amount of money spent on Microsoft Azure and the hourly burn rate of their cloud spending.

Roller-Coaster Computing Challenges

The Georgia Institute of Technology competed for its second time. This year’s squad, dubbed Team Phoenix, had the good fortune of landing advisor Vijay Thakkar, a Gordon Bell Prize nominee this year.

Half of the team members were teaching assistants for introductory systems courses at Georgia Tech, said team member Sudhanshu Agarwal.

Georgia Tech used NVIDIA GPUs “wherever it was possible, as GPUs reduced computation time,” said Agarwal.

“We had a lot of fun this year and look forward to participating in SC21 and beyond,” he said.

Pan Yueyang, a junior in computer science at Peking University, joined his university’s supercomputing team before taking the leap to participate in the SC20 battle. But it was full of surprises, he noted.

He said that during the competition his team ran into a series of unforeseen hiccups. “Luckily it finished as required and the budget was slightly below the limitation,” he said.

Jacob Xiaochen Li, a junior in computer science at the University of California, San Diego, said his team was relying on NVIDIA GPUs for the MemXCT portion of the competition to reproduce the scaling experiment along with memory bandwidth utilization. “Our results match the original chart closely,” he said, noting there were some hurdles along the way.

Po Hao Chen, a sophomore in computer science at Boston University, said he committed to the competition because he’s always enjoyed algorithmic optimization. Like many, he had to juggle the competition with the demands of courses and exams.

“I stayed up for three whole days working on the cluster,” he said. “And I really learned a lot from this competition.”

Teams and Flops

Tsinghua University, China
300 TFLOPS

ETH Zurich
129 TFLOPS

Southern University of Science and Technology
120 TFLOPS

Texas A&M University
113 TFLOPS

Georgia Institute of Technology
108 TFLOPS

Nanyang Technological University, Singapore
105 TFLOPS

University of Warsaw
75.0 TFLOPS

University of Illinois
71.6 TFLOPS

Massachusetts Institute of Technology
64.9 TFLOPS

Peking University
63.8 TFLOPS

University of California, San Diego
53.9 TFLOPS

North Carolina State University
44.3 TFLOPS

Clemson University
32.6 TFLOPS

Friedrich-Alexander University Erlangen-Nuremberg
29.0 TFLOPS

Northeastern University
21.1 TFLOPS

Shanghai Jiao Tong University
19.9 TFLOPS

ShanghaiTech University
14.4 TFLOPS

University of Texas
13.1 TFLOPS

Wake Forest University
9.172 TFLOPS

 

Bringing Enterprise Medical Imaging to Life: RSNA Highlights What’s Next for Radiology

As the healthcare world battles the pandemic, the medical-imaging field is gaining ground with AI, forging new partnerships and funding startup innovation. It will all be on display at RSNA, the Radiological Society of North America’s annual meeting, taking place Nov. 29 – Dec. 5.

Radiologists, healthcare organizations, developers and instrument makers at RSNA will share their latest advancements and what’s coming next — with an eye on the growing ability of AI models to integrate with medical-imaging workflows. More than half of informatics abstracts submitted to this year’s virtual conference involve AI.

In a special public address at RSNA, Kimberly Powell, NVIDIA’s VP of healthcare, will discuss how we’re working with research institutions, the healthcare industry and AI startups to bring workflow acceleration, deep learning models and deployment platforms to the medical imaging ecosystem.

Healthcare and AI experts worldwide are putting monumental effort into developing models that can help radiologists determine the severity of COVID cases from lung scans. They’re also building platforms to smoothly integrate AI into daily workflows, and developing federated learning techniques that help hospitals work together on more robust AI models.

The NVIDIA Clara Imaging application framework is poised to advance this work with NVIDIA GPUs and AI models that can accelerate each step of the radiology workflow, including image acquisition, scan annotation, triage and reporting.

Delivering Tools to Radiologists, Developers, Hospitals

AI developers are working to bridge the gap between their models and the systems radiologists already use, with the goal of creating seamless integration of deep learning insights into tools like PACS digital archiving systems. Here’s how NVIDIA is supporting their work:

  • We’ve strengthened the NVIDIA Clara application framework’s full-stack GPU-accelerated libraries and SDKs for imaging, with new pretrained models available on the NGC software hub. NVIDIA and the U.S. National Institutes of Health jointly developed AI models that can help researchers classify COVID cases from chest CT scans, and evaluate the severity of these cases.
  • Using the NVIDIA Clara Deploy SDK, Mass General Brigham researchers are testing a risk assessment model that analyzes chest X-rays to determine the severity of lung disease. The tool was developed by the Athinoula A. Martinos Center for Biomedical Imaging, which has adopted NVIDIA DGX A100 systems to power its research.
  • Earlier this year, together with King’s College London, we introduced MONAI, an open-source AI framework for medical imaging. Based on the Ignite and PyTorch deep learning frameworks, the modular MONAI code can be easily ported to researchers’ existing AI pipelines. So far, the GitHub project has dozens of contributors and over 1,500 stars.
  • NVIDIA Clara Federated Learning enables researchers to collaborate on training robust AI models without sharing patient information. It’s been used by hospitals and academic medical centers to train models for mammogram assessment, and to assess the likelihood that patients with COVID-19 symptoms will need supplemental oxygen.

NVIDIA at RSNA

RSNA attendees can check out NVIDIA’s digital booth to discover more about GPU-accelerated AI in medical imaging. Hands-on training courses from the NVIDIA Deep Learning Institute are also available, covering medical imaging topics including image classification, coarse-to-fine contextual memory and data augmentation with generative networks. Several events at the show also feature NVIDIA speakers.

Over 50 members of NVIDIA Inception — our accelerator program for AI startups — will be exhibiting at RSNA, including Subtle Medical, which developed the first AI tools for medical imaging enhancement to receive FDA clearance and this week announced $12 million in Series A funding.

Another, TrainingData.io, used the NVIDIA Clara SDK to train a segmentation AI model to analyze COVID disease progression in chest CT scans. And South Korean startup Lunit recently received the European CE mark and partnered with GE Healthcare on an AI tool that flags abnormalities on chest X-rays for radiologists’ review.

Visit the NVIDIA at RSNA webpage for a full list of activities at the show, or email us to request a meeting with our deep learning experts.

Subscribe to NVIDIA healthcare news here.

Science Magnified: Gordon Bell Winners Combine HPC, AI

Seven finalists, including both winners of the 2020 Gordon Bell awards, used supercomputers to get a clearer view of atoms, stars and more — all accelerated with NVIDIA technologies.

Their efforts required the traditional number crunching of high performance computing, the latest data science in graph analytics, AI techniques like deep learning or combinations of all of the above.

The Gordon Bell Prize is regarded as a Nobel Prize in the supercomputing community, attracting some of the most ambitious efforts of researchers worldwide.

AI Helps Scale Simulation 1,000x

Winners of the traditional Gordon Bell award collaborated across universities in Beijing, Berkeley and Princeton as well as Lawrence Berkeley National Laboratory (Berkeley Lab). They used a combination of HPC and neural networks they called DeePMDkit to create complex simulations in molecular dynamics, 1,000x faster than previous work while maintaining accuracy.

In one day on the Summit supercomputer at Oak Ridge National Laboratory, they modeled 2.5 nanoseconds in the life of 127.4 million atoms, 100x more than the prior efforts.

Their work aids the understanding of complex materials and of fields that rely heavily on molecular modeling, such as drug discovery. In addition, it demonstrated the power of combining machine learning with physics-based modeling and simulation on future supercomputers.

Atomic-Scale HPC May Spawn New Materials 

Among the finalists, a team including members from Berkeley Lab and Stanford optimized the BerkeleyGW application to bust through the complex math needed to calculate atomic forces binding more than 1,000 atoms with 10,986 electrons, about 10x more than prior efforts.

“The idea of working on a system with tens of thousands of electrons was unheard of just 5-10 years ago,” said Jack Deslippe, a principal investigator on the project and the application performance lead at the U.S. National Energy Research Scientific Computing Center.

Their work could pave a way to new materials for better batteries, solar cells and energy harvesters as well as faster semiconductors and quantum computers.

The team used all 27,648 GPUs on the Summit supercomputer to get results in just 10 minutes, thanks to harnessing an estimated 105.9 petaflops of double-precision performance.

Developers are continuing the work, optimizing their code for Perlmutter, a next-generation system using NVIDIA A100 Tensor Core GPUs that sport hardware to accelerate 64-bit floating-point jobs.

Analytics Sifts Text to Fight COVID

Using a form of data mining called graph analytics, a team from Oak Ridge and Georgia Institute of Technology found a way to search for deep connections in medical literature using a dataset they created with 213 million relationships among 18.5 million concepts and papers.

Their DSNAPSHOT (Distributed Accelerated Semiring All-Pairs Shortest Path) algorithm, using the team’s customized CUDA code, ran on 24,576 V100 GPUs on Summit, delivering results on a graph with 4.43 million vertices in 21.3 minutes. They claimed a record for deep search in a biomedical database and showed the way for others.
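
The “semiring” in DSNAPSHOT refers to swapping the usual (+, ×) of matrix multiplication for (min, +), so that repeatedly “multiplying” a graph’s distance matrix by itself yields all-pairs shortest paths. The NumPy sketch below shows that core idea on a tiny dense graph; the actual DSNAPSHOT work distributes this pattern across thousands of GPUs with custom CUDA kernels and handles sparse, far larger graphs.

```python
# All-pairs shortest paths via (min, +) semiring matrix "multiplication" -- a tiny illustration.
import numpy as np

INF = np.inf

def min_plus(A, B):
    """Tropical (min, +) product: D[i, j] = min over k of A[i, k] + B[k, j]."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def apsp(adj):
    """All-pairs shortest paths by repeated min-plus squaring of the distance matrix."""
    D = adj.copy()
    np.fill_diagonal(D, 0.0)
    hops = 1
    while hops < len(D) - 1:
        D = min_plus(D, D)   # doubles the maximum path length considered
        hops *= 2
    return D

# Tiny weighted digraph: a direct 0 -> 2 edge costs 5, but going through vertex 1 costs 3.
adj = np.array([[0.0, 1.0, 5.0],
                [INF, 0.0, 2.0],
                [INF, INF, 0.0]])
print(apsp(adj))   # entry [0, 2] comes out as 3.0
```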

Graph analytics finds deep patterns in biomedical literature related to COVID-19.

“Looking forward, we believe this novel capability will enable the mining of scholarly knowledge … (and could be used in) natural language processing workflows at scale,” Ramakrishnan Kannan, team lead for computational AI and machine learning at Oak Ridge, said in an article on the lab’s site.

Tuning in to the Stars

Another team pointed the Summit supercomputer at the stars in preparation for one of the biggest big-data projects ever tackled. They created a workflow that handled six hours of simulated output from the Square Kilometre Array (SKA), a network of thousands of radio telescopes expected to come online later this decade.

Researchers from Australia, China and the U.S. analyzed 2.6 petabytes of data on Summit to provide a proof of concept for one of SKA’s key use cases. In the process they revealed critical design factors for future radio telescopes and the supercomputers that study their output.

The team’s work generated data at 247 GB/s and spawned 925 GB/s of I/O. Like many other finalists, they relied on the fast, low-latency InfiniBand links powered by NVIDIA Mellanox networking, widely used in supercomputers like Summit to speed data among thousands of computing nodes.

Simulating the Coronavirus with HPC+AI

The four teams stand beside three other finalists who used NVIDIA technologies in a competition for a special Gordon Bell Prize for COVID-19.

The winner of that award used all the GPUs on Summit to create the largest, longest and most accurate simulation of a coronavirus to date.

“It was a total game changer for seeing the subtle protein motions that are often the important ones, that’s why we started to run all our simulations on GPUs,” said Lilian Chong, an associate professor of chemistry at the University of Pittsburgh, one of 27 researchers on the team.

“It’s no exaggeration to say what took us literally five years to do with the flu virus, we are now able to do in a few months,” said Rommie Amaro, a researcher at the University of California at San Diego who led the AI-assisted simulation.

COVID-19 Spurs Scientific Revolution in Drug Discovery with AI

Research across global academic and commercial labs to create a more efficient drug discovery process won recognition today with a special Gordon Bell Prize for work fighting COVID-19.

A team of 27 researchers led by Rommie Amaro at the University of California at San Diego (UCSD) combined high performance computing (HPC) and AI to provide the clearest view to date of the coronavirus, winning the award.

Their work began in late March when Amaro lit up Twitter with a picture of part of a simulated SARS-CoV-2 virus that looked like an upside-down Christmas tree.

Seeing it, one remote researcher noticed how a protein seemed to reach like a crooked finger from behind a protective shield to touch a healthy human cell.

“I said, ‘holy crap, that’s crazy’… only through sharing a simulation like this with the community could you see for the first time how the virus can only strike when it’s in an open position,” said Amaro, who leads a team of biochemists and computer experts at UCSD.

Amaro shared her early results on Twitter.

The image in the tweet was taken by Amaro’s lab using what some call a computational microscope, a digital tool that links the power of HPC simulations with AI to see details beyond the capabilities of conventional instruments.

It’s one example of work around the world using AI and data analytics, accelerated by NVIDIA Clara Discovery, to slash the $2 billion in costs and ten-year time span it typically takes to bring a new drug to market.

A Virtual Microscope Enhanced with AI

In early October, Amaro’s team completed a series of more ambitious HPC+AI simulations. They showed for the first time fine details of how the spike protein moved, opened and contacted a healthy cell.

One simulation (below) packed a whopping 305 million atoms, more than twice the size of any prior simulation in molecular dynamics. It required AI and all 27,648 NVIDIA GPUs on the Summit supercomputer at Oak Ridge National Laboratory.

More than 4,000 researchers worldwide have downloaded the results that one called “critical for vaccine design” for COVID and future pathogens.

Today, it won a special Gordon Bell Prize for COVID-19, the equivalent of a Nobel Prize in the supercomputing community.

Two other teams also used NVIDIA technologies in work selected as finalists in the COVID-19 competition created by the ACM, a professional group representing more than 100,000 computing experts worldwide.

And the traditional Gordon Bell Prize went to a team from Beijing, Berkeley and Princeton that set a new milestone in molecular dynamics, also using a combination of HPC+AI on Summit.

An AI Funnel Catches Promising Drugs

Seeing how the infection process works is one of a string of pearls that scientists around the world are gathering into a new AI-assisted drug discovery process.

Another is screening, from a vast field of 10^68 candidates, the right compounds to arrest a virus. In a paper from part of the team behind Amaro’s work, researchers described a new AI workflow that in less than five months filtered 4.2 billion compounds down to the 40 most promising ones that are now in advanced testing.

“We were so happy to get these results because people are dying and we need to address that with a new baseline that shows what you can get with AI,” said Arvind Ramanathan, a computational biologist at Argonne National Laboratory.

Ramanathan’s team was part of an international collaboration among eight universities and supercomputer centers, each contributing unique tools to process nearly 60 terabytes of data from 21 open datasets. It fueled a set of interlocking simulations and AI predictions that ran across 160 NVIDIA A100 Tensor Core GPUs on Argonne’s Theta system with massive AI inference runs using NVIDIA TensorRT on the many more GPUs on Summit.

Docking Compounds, Proteins on a Supercomputer

Earlier this year, Ada Sedova put a pearl on the string for protein docking when she described plans to test a billion drug compounds against two coronavirus spike proteins in less than 24 hours using the GPUs on Summit. Her team cut work that used to take 51 days down to just 21 hours, a 58x speedup.

In a related effort, colleagues at Oak Ridge used NVIDIA RAPIDS and BlazingSQL to accelerate by an order of magnitude data analytics on results like Sedova produced.

Among the other Gordon Bell finalists, Lawrence Livermore researchers used GPUs on the Sierra supercomputer to slash the training time for an AI model used to speed drug discovery from a day to just 23 minutes.

From the Lab to the Clinic

The Gordon Bell finalists are among more than 90 research efforts in a supercomputing collaboration using 50,000 GPU cores to fight the coronavirus.

They make up one front in a global war on COVID that also includes companies such as Oxford Nanopore Technologies, a genomics specialist using NVIDIA’s CUDA software to accelerate its work.

Oxford Nanopore won approval from European regulators last month for a novel system the size of a desktop printer that can be used with minimal training to perform thousands of COVID tests in a single day. Scientists worldwide have used its handheld sequencing devices to understand the transmission of the virus.

Relay Therapeutics uses NVIDIA GPUs and software to simulate with machine learning how proteins move, opening up new directions in the drug discovery process. In September, it started its first human trial of a molecule inhibitor to treat cancer.

Startup Structura uses CUDA on NVIDIA GPUs to analyze initial images of pathogens to quickly determine their 3D atomic structure, another key step in drug discovery. It’s a member of the NVIDIA Inception program, which gives startups in AI access to the latest GPU-accelerated technologies and market partners.

From Clara Discovery to Cambridge-1

NVIDIA Clara Discovery delivers a framework with AI models, GPU-optimized code and applications to accelerate every stage in the drug discovery pipeline. It provides speedups of 6-30x across jobs in genomics, protein structure prediction, virtual screening, docking, molecular simulation, imaging and natural-language processing that are all part of the drug discovery process.

It’s NVIDIA’s latest contribution to fighting SARS-CoV-2 and future pathogens.

NVIDIA Clara Discovery speeds each step of the drug discovery process using AI and data analytics.

Within hours of the shelter-at-home order in the U.S., NVIDIA gave researchers free access to a test drive of Parabricks, our genomic sequencing software. Since then, we’ve provided as part of NVIDIA Clara open access to AI models co-developed with the U.S. National Institutes of Health.

We’ve also committed to building, with partners including GSK and AstraZeneca, Europe’s largest supercomputer dedicated to driving drug discovery forward. Cambridge-1 will be an NVIDIA DGX SuperPOD system capable of delivering more than 400 petaflops of AI performance.

Next Up: A Billion-Atom Simulation

The work is just getting started.

Ramanathan of Argonne sees a future where self-driving labs learn what experiments they should launch next, like autonomous vehicles finding their own way forward.

“And I want to scale to the absolute max of screening 10^68 drug compounds, but even covering half that will be significantly harder than what we’ve done so far,” he said.

“For me, simulating a virus with a billion atoms is the next peak, and we know we will get there in 2021,” said Amaro. “Longer term, we need to learn how to use AI even more effectively to deal with coronavirus mutations and other emerging pathogens that could be even worse,” she added.

Hear NVIDIA CEO Jensen Huang describe in the video below how AI in Clara Discovery is advancing drug discovery.

At top: An image of the SARS-CoV-2 virus based on the Amaro lab’s simulation showing 305 million atoms.

A Binding Decision: Startup Uses Microscopy Breakthrough to Speed Creation of COVID-19 Vaccines

In the global race to tame the spread of COVID-19, scientific researchers and pharmaceutical companies first must understand the virus’s protein structure.

Doing so requires building detailed 3D models of protein molecules, which until recently has been an intensely time-consuming task. Structura Biotechnology’s groundbreaking software is helping speed things along.

The GPU-powered machine learning algorithms underlying Structura’s software power the image processing stage of a technology called cryo-electron microscopy, or cryo-EM, a revolutionary breakthrough in biochemistry that was the subject of the 2017 Nobel Prize in chemistry.

Cryo-EM enables powerful electron microscopes to capture detailed images of biomolecules in their near-native states. These images can then be used to reconstruct a 3D model of the biomolecules.

With cryo-EM providing valuable 2D image data, Structura’s AI-infused software, called cryoSPARC, can quickly analyze the resulting microscopy data to solve the 3D atomic structures of the embedded protein molecules. That, in turn, allows researchers to more rapidly gauge how effective drugs will be in binding to those molecules, significantly speeding up the process of drug discovery.

Hundreds of labs around the world already use the three-year-old Toronto-based company’s software, with a significant, but not surprising, surge during 2020. In fact, CEO Ali Punjani states that Structura’s software has been used by scientists to visualize COVID-19 proteins in multiple publications.

“Our software helps scientists to understand what their proteins look like and how their proposed therapeutics may bind,” Punjani said. “The more they can see about the structure of the target, the easier it becomes to design or identify a molecule that locks onto that structure and stops it.”

An Intriguing Test Case

The idea for Structura came from a conversation Punjani overheard, during his undergraduate work at the University of Toronto, about trying to solve protein structures using microscopic images. He thought the topic would make an intriguing test case for his developing interest in machine learning research.

Punjani formed his team in 2017, and Structura started building its software, backed by large-scale inference and computer vision algorithms that help to recover a 3D model from 2D image data. The key, he said, is to collect and analyze — with increasing accuracy — a sufficient amount of microscopic data to enable high-quality 3D reconstructions.

“It’s a highly scientific domain with zero tolerance for error,” Punjani said. “Getting it wrong can be a huge waste of time and money.”

Structura’s software is deployed on premises, typically on customers’ hardware, which must be up to the task of processing real-time 3D microscope data. Punjani said labs often run this work on NVIDIA Quadro RTX 6000 GPUs, or something similar, while many larger pharmaceutical companies have invested in clusters of NVIDIA V100 Tensor Core GPUs accompanied by a variety of NVIDIA graphics cards.

Structura does all of its model training and software development on machines running multi-GPU nodes of V100 GPUs. Punjani said his team writes all of its GPU kernels from scratch because of the particular and exotic nature of the problem. The code that runs on Structura’s GPUs is written in CUDA, while cuDNN is used for some high-end computing tasks.

Right Software at the Right Time

Given the value of Structura’s innovations, and the importance of cryo-EM, Punjani isn’t holding back on his ambitions for the company, which recently joined NVIDIA Inception, an accelerator program designed to nurture startups revolutionizing industries with advancements in AI and data sciences.

Punjani says that any research related to living things can now make use of the information from 3D protein structures that cryo-EM offers and, as a result, there’s a lot of industry attention focused on the kind of work Structura’s software enables.

“What we’re building right now is a fundamental building block for cryo-EM to better enable structure-based drug discovery,” he said. “Cryo-EM is set to become ubiquitous throughout all biological research.”

Stay up to date with the latest healthcare news from NVIDIA.

NVIDIA RTX Real-Time Rendering Inspires Vivid Visuals, Captivating Cinematics for Film and Television

Concept art is often considered the bread and butter of filmmaking, and Ryan Church is the concept design supervisor who’s behind the visuals of many of our favorite films.

Church has created concept art for blockbusters such as Avatar, Tomorrowland and Transformers. He’s collaborated closely with George Lucas on the Star Wars prequel and sequel trilogies. Now, he’s working on the popular series The Mandalorian.

All images courtesy of Ryan Church.

When he’s not creating unique vehicles and dazzling worlds for film and television, Church captures new visions and illustrates designs in his personal time. He’s always had a close relationship with cutting-edge technology to produce the highest-quality visuals, even when he’s working at home.

Recently, Church got his hands on an HP Z8 workstation powered by the NVIDIA Quadro RTX 6000. With the performance and speed of RTX behind his concept designs, he can render stunning images of architecture, vehicles and scenery faster than ever.

RTX Delivers More Time for Precision and Creativity

Filmmakers are always trying to figure out the quickest way to bring a concept or idea to life in a fast-paced environment.

Church says that directors nowadays don’t just want to see a drawing of a place or item for the set; they want to see the actual place or item in front of them.

To do so, Church creates his 3D models in Foundry’s Modo and turns to OctaneRender, a GPU render engine that uses NVIDIA RTX to accelerate the rendering performance for his scenes. This allows him to achieve real-time rendering, and with the large memory capacity and performance gains of NVIDIA RTX, Church can create massive worlds freely without worrying about optimizing the geometry of his scenes.

“NVIDIA RTX has allowed me to work without babysitting the geometry all along the way,” said Church. “The friction has been removed from the creation process, allowing me to stay focused on the art.”

Like Church, many concept artists are using technology to create and design complex virtual sets and elaborate 3D mattes for virtual production in real time. The large GPU memory capacities of RTX allow for free flow of art creation while working with multiple creative applications.

And when trying to find the perfect lighting, or tweaking the depth of field or reflections of a scene, the NVIDIA RTX GPU speeds up the workflow to allow for better, quicker designs. Church can do 20 to 30 passes on a scene, enabling him to iterate on his designs more often so he can get the look and feel he’s aiming for.

“The RTX card in the Z8 allows me to have that complex scene and really dial in much better and faster,” said Church. “With design, lighting, texturing happening all in real time, I can model and move lights around, and see it all happening in the active, updating viewport.”

When Church needs desktop-class performance on the go, he turns to his HP ZBook Studio mobile workstation. Featuring the NVIDIA Studio driver and NVIDIA Quadro RTX GPU, the ZBook Studio has been tested and certified to work with the top creative applications.

As a leading concept designer standing at the intersection between art and technology, Church has inspired countless artists, and his work will continue to inspire for generations to come.

Concept artist Ryan Church pushes boundaries of creativity with NVIDIA RTX.

Learn more about NVIDIA RTX.
