NVIDIA Research Achieves AI Training Breakthrough Using Limited Datasets

NVIDIA Research’s latest AI model is a prodigy among generative adversarial networks. Using a fraction of the study material needed by a typical GAN, it can learn skills as complex as emulating renowned painters and recreating images of cancer tissue.

By applying a breakthrough neural network training technique to the popular NVIDIA StyleGAN2 model, NVIDIA researchers reimagined artwork based on fewer than 1,500 images from the Metropolitan Museum of Art. Using NVIDIA DGX systems to accelerate training, they generated new AI art inspired by the historical portraits.

The technique, called adaptive discriminator augmentation, or ADA, reduces the number of training images required by 10-20x while still delivering great results. The same method could someday have a significant impact in healthcare, for example by creating cancer histology images to help train other AI models.

“These results mean people can use GANs to tackle problems where vast quantities of data are too time-consuming or difficult to obtain,” said David Luebke, vice president of graphics research at NVIDIA. “I can’t wait to see what artists, medical experts and researchers use it for.”

The research paper behind this project is being presented this week at the annual Conference on Neural Information Processing Systems, known as NeurIPS. It’s one of a record 28 NVIDIA Research papers accepted to the prestigious conference.

This new method is the latest in a legacy of GAN innovation by NVIDIA researchers, who’ve developed groundbreaking GAN-based models for the AI painting app GauGAN, the game engine mimicker GameGAN, and the pet photo transformer GANimal. All are available on the NVIDIA AI Playground.

The Training Data Dilemma

Like most neural networks, GANs have long followed a basic principle: the more training data, the better the model. That’s because each GAN consists of two cooperating networks — a generator, which creates synthetic images, and a discriminator, which learns what realistic images should look like based on training data.

The discriminator coaches the generator, giving pixel-by-pixel feedback to help it improve the realism of its synthetic images. But with limited training data to learn from, a discriminator won’t be able to help the generator reach its full potential — like a rookie coach who’s experienced far fewer games than a seasoned expert.
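
To make the generator-discriminator interplay concrete, here is a minimal, self-contained PyTorch sketch of one adversarial training step. The tiny fully connected networks and the loss below are illustrative stand-ins, not StyleGAN2's architecture or NVIDIA's code.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the two networks; real GANs such as
# StyleGAN2 use far larger convolutional architectures.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """One adversarial update: the discriminator learns to tell real from
    fake, then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    noise = torch.randn(batch, 64)

    # Discriminator update: real images labeled 1, generated images labeled 0.
    fake_images = G(noise).detach()
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake_images), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call the fakes "real."
    g_loss = bce(D(G(noise)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```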

It typically takes 50,000 to 100,000 training images to train a high-quality GAN. But in many cases, researchers simply don’t have tens or hundreds of thousands of sample images at their disposal.

With just a couple thousand images for training, many GANs would falter at producing realistic results. This problem, called overfitting, occurs when the discriminator simply memorizes the training images and fails to provide useful feedback to the generator.

In image classification tasks, researchers get around overfitting with data augmentation, a technique that expands smaller datasets using copies of existing images that are randomly distorted by processes like rotating, cropping or flipping — forcing the model to generalize better.
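
As a concrete illustration, a typical classification augmentation pipeline might look like the torchvision sketch below; the specific transforms and parameters are arbitrary illustrative choices, not a prescription.

```python
from torchvision import transforms

# Each training image is randomly flipped, rotated and cropped every time
# it is drawn, so the classifier never sees exactly the same pixels twice.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
```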

But previous attempts to apply augmentation to GAN training images resulted in a generator that learned to mimic those distortions, rather than creating believable synthetic images.

A GAN on a Mission

NVIDIA Research’s ADA method applies data augmentations adaptively, meaning the amount of data augmentation is adjusted at different points in the training process to avoid overfitting. This enables models like StyleGAN2 to achieve equally amazing results using an order of magnitude fewer training images.
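
The exact control loop is described in the ADA paper; the snippet below is only a loose, hypothetical paraphrase of the idea, assuming an overfitting signal computed from the discriminator's outputs on real images and a target value of roughly 0.6.

```python
def update_augment_probability(p, real_logits, target=0.6, step=0.005):
    """Nudge the augmentation probability p so the discriminator stays
    'just confused enough.' A loose sketch of the ADA idea: if the
    discriminator is too confident on real images (overfitting), raise p;
    if it is struggling, lower p. Clamped to [0, 1]."""
    overfit_signal = real_logits.sign().mean().item()  # in [-1, 1]
    if overfit_signal > target:
        p = min(1.0, p + step)   # discriminator memorizing -> augment more
    else:
        p = max(0.0, p - step)   # discriminator struggling -> augment less
    return p
```

Crucially, the augmentations are applied to both real and generated images as they enter the discriminator, so the generator is never rewarded for reproducing the distortions themselves.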

As a result, researchers can apply GANs to previously impractical applications where examples are too scarce, too hard to obtain or too time-consuming to gather into a large dataset.

Different versions of StyleGAN have been used by artists to create stunning exhibits and to produce a new manga in the style of legendary illustrator Osamu Tezuka. Adobe has even adopted it to power Photoshop's new AI tool, Neural Filters.

With less training data required to get started, StyleGAN2 with ADA could be applied to rare art, such as the work by Paris-based AI art collective Obvious on African Kota masks.

Another promising application lies in healthcare, where medical images of rare diseases can be few and far between because most tests come back normal. Amassing a useful dataset of abnormal pathology slides would require many hours of painstaking labeling by medical experts.

Synthetic images created with a GAN using ADA could fill that gap, generating training data for another AI model that helps pathologists or radiologists spot rare conditions on pathology images or MRI studies. An added bonus: With AI-generated data, there are no patient data or privacy concerns, making it easier for healthcare institutions to share datasets.
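
In practice, the synthetic images could simply be folded into the downstream model's training set. The PyTorch sketch below illustrates the idea with placeholder data; the tiny generator and the randomly generated "real" dataset are stand-ins for a trained GAN and a scarce set of labeled pathology patches.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder stand-ins: in a real pipeline `generator` would be a trained
# GAN generator and `real_dataset` the scarce labeled pathology patches.
generator = nn.Sequential(nn.Linear(512, 3 * 64 * 64), nn.Tanh())
real_dataset = TensorDataset(torch.randn(200, 3 * 64 * 64),
                             torch.randint(0, 2, (200,)))

# Sample synthetic images of the rare class and label them accordingly.
with torch.no_grad():
    synthetic_images = generator(torch.randn(1_000, 512))
synthetic_labels = torch.ones(1_000, dtype=torch.long)

combined = ConcatDataset([real_dataset,
                          TensorDataset(synthetic_images, synthetic_labels)])
loader = DataLoader(combined, batch_size=32, shuffle=True)
# The downstream diagnostic classifier then trains on `loader` as usual.
```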

NVIDIA Research at NeurIPS

The NVIDIA Research team consists of more than 200 scientists around the globe, focusing on areas including AI, computer vision, self-driving cars, robotics and graphics. Over two dozen papers authored by NVIDIA researchers will be highlighted at NeurIPS, the year’s largest AI research conference, taking place virtually from Dec. 6-12.

Check out the full lineup of NVIDIA Research papers at NeurIPS.

Main images generated by StyleGAN2 with ADA, trained on a dataset of fewer than 1,500 images from the Metropolitan Museum of Art Collection API.

NVIDIA Boosts Academic AI Research for Business Innovation

Academic researchers are developing AI to solve challenging problems with everything from agricultural robotics to autonomous flying machines.

To help AI research like this make the leap from academia to commercial or government deployment, NVIDIA today announced the Applied Research Accelerator Program. The program supports applied research on NVIDIA platforms for GPU-accelerated application deployments.

The program will initially focus on robotics and autonomous machines. Worldwide spending on robotics systems and drones is forecast to reach $241 billion by 2023, an 88 percent increase from the $128.7 billion in spending expected for 2020, according to IDC. The program will also extend to other domains, such as data science, natural language processing, speech and conversational AI, in the months ahead.

The new program will support researchers and the organizations they work with in rolling out the next generation of applications developed on NVIDIA AI platforms, including the Jetson developer kits and SDKs like DeepStream and Isaac.

Researchers working with sponsoring organizations will also gain support from NVIDIA through technical guidance, hardware grants, funding, grant application support and AI training programs, as well as networking and marketing opportunities.

NVIDIA is now accepting applications to the program from researchers working to apply robotics and AI for automation in collaboration with enterprises seeking to deploy new technologies in the market.

Accelerating and Deploying AI Research

The NVIDIA Applied Research Accelerator Program's first group of participants has already demonstrated AI capabilities meriting further development in agriculture, logistics and healthcare.

  • The University of Florida is developing AI applications for smart sprayers used in agriculture, and working with Chemical Containers Inc. to deploy AI on machines running NVIDIA Jetson to reduce the amount of plant protection products applied to tree crops.
  • The Institute for Factory Automation and Production Systems at Friedrich-Alexander-University Erlangen-Nuremberg, based in Germany, is working with materials handling company KION and the intralogistics research association IFL to design drones for warehouse autonomy using NVIDIA Jetson.
  • The Massachusetts Institute of Technology is developing AI applications for disinfecting surfaces with UV-C light using NVIDIA Jetson. It’s also working with Ava Robotics to deploy autonomous disinfection on robots to minimize human supervision and additional risk of exposure to COVID-19.

Applied Research Accelerator Program Benefits  

NVIDIA offers hardware grants along with funding in some cases for academic researchers who can demonstrate AI feasibility in practical applications. The program also provides letters of support for third-party grant applications submitted by researchers.

Members will also have access to technical guidance on using NVIDIA platforms, including Jetson, as well as Isaac and DeepStream.

Membership in the new program includes access to training courses via the Deep Learning Institute to help researchers master a wide range of AI technologies.

NVIDIA also offers researchers opportunities to present and network at the GPU Technology Conferences.

Interested researchers can apply today for the Applied Research Accelerator Program.

Big Wheels Keep on Learnin’: Einride’s AI Trucks Advance Capabilities with NVIDIA DRIVE AGX Orin

Swedish startup Einride has rejigged the big rig for highways around the world.

The autonomous truck maker launched the next generation of its cab-less autonomous truck, known as the Pod, with new, advanced functionality and pricing. The AI vehicles, which will be commercially available worldwide, will be powered by the latest in high-performance, energy-efficient compute — NVIDIA DRIVE AGX Orin.

These scalable self-driving haulers will begin to hit the road in 2023, with a variety of models available to customers around the world.

Autonomous trucks are always learning, taking in vast amounts of data to navigate the unpredictability of the real world, from highways to crowded ports. This rapid processing requires centralized, high-performance AI compute.

With the power of AI, these vehicles can easily rise to the demands of the trucking industry. The vehicles can operate 24 hours a day, improving delivery times. And, with increased efficiency, they can slash the annual cost of logistics in the U.S. by 45 percent, according to experts at McKinsey.

Einride's autonomous pods and trucks are built for every type of route. They can automate short, routine trips, such as loading and unloading containers on cargo ships and managing port operations, and they can also drive autonomously on the highway, dramatically streamlining shipping and logistics.

A New Pod Joins the Squad

The latest Einride Pod features a refined design that balances sleek features with the practical requirements of wide-scale production.

Its rounded edges give it an aerodynamic shape for greater efficiency and performance, without sacrificing cargo space. The Pod’s lighting system — which includes headlights, tail lights and indicators — provides a signature look while improving visibility for road users.

The cab-less truck comes in a range of variations, depending on use case. The AET 1 (Autonomous Electric Transport) model is purpose-built for closed facilities with dedicated routes — such as a port or loading bay. The AET 2 can handle fenced-in areas as well as short-distance public roads between destinations.

The AET 3 and AET 4 vehicles are designed for fully autonomous operation on backroads and highways, with speeds of up to 45 km per hour.

Einride is currently accepting reservations for AET 1 and AET 2, with others set to ship starting in 2022.

Trucking Ahead with Orin

The Einride Pod is able to achieve its scalability and autonomous functionality by leveraging the next generation in AI compute.

NVIDIA Orin is a system-on-a-chip born out of the data center, packing 17 billion transistors and representing four years of R&D investment. It achieves 200 TOPS, nearly 7x the performance of the previous-generation Xavier SoC, and is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous trucks, while meeting systematic safety standards such as ISO 26262 ASIL-D.

This massive compute capability ensures the Einride Pod is continuously learning, expanding the environments and situations in which it can operate autonomously.

These next-generation electric, self-driving freight transport vehicles built on NVIDIA DRIVE are primed to safely increase productivity, improve utilization, reduce emissions and decrease the world’s dependence on fossil fuels.

Chalk and Awe: Studio Crafts Creative Battle Between Stick Figures with Real-Time Rendering

It’s time to bring krisp graphics to stick figure drawings.

Creative studio SoKrispyMedia, started by content creators Sam Wickert and Eric Leigh, develops short videos blended with high-quality visual effects. Since publishing one of its early works, Chalk Warfare 1, on YouTube eight years ago, the team has regularly put out short films that showcase engaging visual effects and graphics, including Stick Figure Battle, which has nearly 25 million views.

Now, the Stick Figure saga continues with SoKrispyMedia’s latest, Stick Figure War, which relies on real-time rendering for photorealistic results, as well as improved creative workflows.

With real-time rendering, SoKrispyMedia worked more efficiently: the team could see near-final results quickly and had more time to iterate, ensuring the visuals looked exactly as intended, from stick figures piloting paper airplanes to robots fighting skeletons in textbooks.

The team enhanced its virtual production process by using Unreal Engine and a Dell Precision 7750 mobile workstation featuring an NVIDIA Quadro RTX 5000 GPU. By adding high-quality cameras and DaVinci Resolve software from Blackmagic Design to the mix, SoKrispyMedia produced a short film of higher quality than the team ever thought possible.

Real-Time Rendering Sticks Out in Visual Effects

Integrating real-time rendering into their pipelines has allowed SoKrispyMedia to work faster and iterate more quickly. They no longer need to wait hundreds of hours for renders to preview — everything can be produced in real time.

“Looking back at our older videos and the technology we used, it feels like we were writing in pencil, and as the technology evolves, we’re adding more and more colors to our palette,” said Micah Malinics, producer at SoKrispyMedia.

For Stick Figure War, a lot of the elements in the video were drawn by hand, and then scanned and converted into 2D or 3D graphics in Unreal Engine. The creators also developed a stylized filter that allowed them to make certain elements look like cross-hatched drawings.

SoKrispyMedia used Unreal Engine to do real-time rendering for almost the entire film, which enabled them to explore more creative ideas and let their imaginations run wild without worrying about increased render times.

Pushing Creativity Behind the Scenes

While NVIDIA RTX and Unreal Engine have broadened the reach of real-time rendering, Blackmagic Design has made high-quality cameras more accessible so content creators can produce cinematic-quality work at a fraction of the cost.

For Stick Figure War, SoKrispyMedia used Blackmagic URSA Mini G2 for production, Pocket Cinema Camera for pick-up shots and Micro Studio Camera 4K for over-the-head VFX shots. With the cameras, the team could shoot videos at 4K resolution and crop footage without losing any resolution in post-production.

Editing workflows were accelerated as Blackmagic’s DaVinci Resolve utilized NVIDIA GPUs to dramatically speed up playback and performance.

“Five to 10 years ago, making this video would’ve been astronomically difficult. Now we’re able to simply plug the Blackmagic camera directly into Unreal and see final results in front of our eyes,” said Sam Wickert, co-founder of SoKrispyMedia. “Using the Resolve Live feature for interactive and collaborative color grading and editing is just so fast, easy and efficient. We’re able to bring so much more to life on screen than we ever thought possible.”

The SoKrispyMedia team was provided with a Dell Precision 7750 mobile workstation with an RTX 5000 GPU inside, allowing the content creators to work on the go and preview real-time renderings on set. And the Dell workstation’s display provided advanced color accuracy, from working in DaVinci Resolve to rendering previews and final images.

Learn more about the making of SoKrispyMedia’s latest video, Stick Figure War.

How to Avoid Speed Bumps and Stay in the AI Fast Lane with Hybrid Cloud Infrastructure

Cloud or on premises? That’s the question many organizations ask when building AI infrastructure.

Cloud computing can help developers get a fast start with minimal cost. It’s great for early experimentation and supporting temporary needs.

As businesses iterate on their AI models, however, those models can become increasingly complex, consume more compute cycles and involve exponentially larger datasets. The costs of data gravity can escalate, with more time and money spent pushing large datasets from where they're generated to where compute resources reside.

This AI development "speed bump" is often an inflection point where organizations realize there are opex benefits to on-premises or colocated infrastructure. Its fixed costs can support rapid iteration at the lowest "cost per training run," complementing their cloud usage.

Conversely, for organizations whose datasets are created in the cloud and live there, procuring compute resources adjacent to that data makes sense. Whether on-prem or in the cloud, minimizing data travel — by keeping large volumes as close to compute resources as possible — helps minimize the impact of data gravity on operating costs.

‘Own the Base, Rent the Spike’ 

Businesses that ultimately embrace hybrid cloud infrastructure trace a familiar trajectory.

One customer developing an image recognition application immediately benefited from a fast, effortless start in the cloud.

As their database grew to millions of images, costs rose and processing slowed, causing their data scientists to become more cautious in refining their models.

At this tipping point — when a fixed cost infrastructure was justified — they shifted training workloads to an on-prem NVIDIA DGX system. This enabled an immediate return to rapid, creative experimentation, allowing the business to build on the great start enabled by the cloud.

The saying “own the base, rent the spike” captures this situation. Enterprise IT provisions on-prem DGX infrastructure to support the steady-state volume of AI workloads and retains the ability to burst to the cloud whenever extra capacity is needed.

It’s this hybrid cloud approach that can secure the continuous availability of compute resources for developers while ensuring the lowest cost per training run.

Delivering the AI Hybrid Cloud with DGX and Google Cloud’s Anthos on Bare Metal

To help businesses embrace hybrid cloud infrastructure, NVIDIA has introduced support for Google Cloud’s Anthos on bare metal for its DGX A100 systems.

For customers using Kubernetes to straddle cloud GPU compute instances and on-prem DGX infrastructure, Anthos on bare metal enables a consistent development and operational experience across deployments, while reducing expensive overhead and improving developer productivity.

This presents several benefits to enterprises. While many have implemented GPU-accelerated AI in their data centers, much of the world retains some legacy x86 compute infrastructure. With Anthos on bare metal, IT can easily add on-prem DGX systems to its infrastructure to tackle AI workloads and manage them in the same familiar way, all without the need for a hypervisor layer.

Without the need for a virtual machine, Anthos on bare metal, now generally available, manages application deployment and health across existing environments for more efficient operations. Anthos on bare metal can also manage application containers on a wide variety of performance- and GPU-optimized hardware types, and it allows applications direct access to the hardware.

“Anthos on bare metal provides customers with more choice over how and where they run applications and workloads,” said Rayn Veerubhotla, Director of Partner Engineering at Google Cloud. “NVIDIA’s support for Anthos on bare metal means customers can seamlessly deploy NVIDIA’s GPU Device Plugin directly on their hardware, enabling increased performance and flexibility to balance ML workloads across hybrid environments.”

Additionally, teams can access their favorite NVIDIA NGC containers, Helm charts and AI models from anywhere.

With this combination, enterprises can enjoy the rapid start and elasticity of resources offered on Google Cloud, as well as the secure performance of dedicated on-prem DGX infrastructure.

Learn more about Google Cloud’s Anthos.

Learn more about NVIDIA DGX A100.

MONAI Imaging Framework Fast-Tracked to Production to Accelerate AI in Healthcare

MONAI — the Medical Open Network for AI, a domain-optimized, open-source framework for healthcare — is now ready for production with the upcoming release of the NVIDIA Clara application framework for AI-powered healthcare and life sciences.

Introduced in April and already adopted by leading healthcare research institutions, MONAI is a PyTorch-based framework that enables the development of AI for medical imaging with industry-specific data handling, high-performance training workflows and reproducible reference implementations of state-of-the-art approaches.
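
To give a flavor of what a domain-optimized, PyTorch-based framework looks like in practice, here is a brief, hedged sketch of a MONAI-style segmentation setup, assuming a recent MONAI release; the network and loss parameters are arbitrary, and this is not code from the Clara integration itself.

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# A 3D U-Net with a Dice loss, a common combination for volumetric
# medical image segmentation (all parameters here are illustrative).
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy CT volume and voxel-wise label map.
volume = torch.randn(1, 1, 64, 64, 64)                   # batch, channel, D, H, W
label = torch.randint(0, 2, (1, 1, 64, 64, 64)).float()  # class index per voxel
loss = loss_fn(model(volume), label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```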

As part of the updated Clara offering, MONAI will come with over 20 pre-trained models, including ones recently developed for COVID-19, as well as the latest training optimizations on NVIDIA DGX A100 GPUs that provide up to a sixfold acceleration in training turnaround time.

“MONAI is becoming the PyTorch of healthcare, paving the way for closer collaboration between data scientists and clinicians,” said Dr. Jayashree Kalpathy-Cramer, director of the QTIM lab at the Athinoula A. Martinos Center for Biomedical Imaging at MGH. “Global adoption of MONAI is fostering collaboration across the globe facilitated by federated learning.”

Adoption of MONAI by the healthcare ecosystem has been tremendous. DKFZ, King's College London, Mass General, Stanford and Vanderbilt are among those to adopt the AI framework for imaging. MONAI is being used in everything from industry-leading imaging competitions to the first boot camp focused on the framework, held in September, which drew over 550 registrants from 40 countries, including undergraduate university students.

“MONAI is quickly becoming the go-to deep learning framework for healthcare. Getting from research to production is critical for the integration of AI applications into clinical care,” said Dr. Bennett Landman of Vanderbilt University. “NVIDIA’s commitment to community-driven science and allowing the academic community to contribute to a framework that is production-ready will allow for further innovation to build enterprise-ready features.”

New Features

NVIDIA Clara brings the latest breakthroughs in AI-assisted annotation, federated learning and production deployment to the MONAI community.

The latest version introduces a game-changing addition to AI-assisted annotation: a new model called DeepGrow 3D that lets radiologists label complex 3D CT data in one-tenth of the clicks. Instead of the traditional, time-consuming method of segmenting an organ or lesion image by image or slice by slice, which can take up to 250 clicks for a large organ like the liver, users can segment with far fewer clicks.

Integrated with Fovia Ai's F.A.S.T. AI Annotation software, NVIDIA Clara's AI-assisted annotation tools and the new DeepGrow 3D feature can be used for labeling training data as well as assisting radiologists when reading. Fovia offers the XStream HDVR SDK suite for reviewing DICOM images, which is integrated into industry-leading PACS viewers.

AI-assisted annotation is the key to unlocking rich radiology datasets and was recently used to label the public COVID-19 CT dataset published by The Cancer Imaging Archive at the U.S. National Institutes of Health. This labeled dataset was then used in the MICCAI-endorsed COVID-19 Lung CT Lesion Segmentation Challenge.

Clara Federated Learning made possible the recent research collaboration of 20 hospitals around the world to develop a generalized AI model for COVID-19 patients. The EXAM model predicts oxygen requirements in COVID-19 patients, is available on the NGC software registry, and is being evaluated for clinical validation at Mount Sinai Health System in New York, Diagnósticos da America SA in Brazil, NIHR Cambridge Biomedical Research Centre in the U.K. and the NIH.
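
Clara Federated Learning's production implementation is considerably more involved, but the core idea, federated averaging, can be sketched in a few lines of PyTorch. Everything below is a hypothetical illustration in which each hospital's data loader stays on site and only model weights are shared.

```python
import copy
import torch

def federated_round(global_model, hospital_loaders, local_epochs=1, lr=1e-3):
    """One round of federated averaging: each site trains on its own data,
    and only the resulting weights, never patient images, are averaged."""
    local_states = []
    for loader in hospital_loaders:                  # one loader per hospital
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(local_epochs):
            for x, y in loader:                      # data never leaves the site
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        local_states.append(local.state_dict())

    # Average the weights across sites and load them back into the global model.
    avg = {k: torch.stack([s[k].float() for s in local_states]).mean(dim=0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```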

“The MONAI software framework provides key components for training and evaluating imaging-based deep learning models, and its open-source approach is fostering a growing community that is contributing exciting advances, such as federated learning,” said Dr. Daniel Rubin, professor of biomedical data science, radiology and medicine at Stanford University.

NVIDIA is additionally expanding its release of NVIDIA Clara to digital pathology applications, where the sheer size of the images would choke off-the-shelf open-source AI tools. Clara for pathology early access contains reference pipelines for both training and deployment of AI applications.

“Healthcare data interoperability, model deployment and clinical pathway integration are an increasingly complex and intertwined topic, with significant field-specific expertise,” said Jorge Cardoso, CTO of the London Medical Imaging and AI Centre for Value-based Healthcare. “Project MONAI, jointly with the rest of the NVIDIA Clara ecosystem, will help deliver improvements to patient care and optimize hospital operations.”

Learn more about NVIDIA Clara Train 4.0 and subscribe to NVIDIA healthcare news.

NVIDIA Launches Inception Alliance with GE Healthcare and Nuance to Accelerate Medical Imaging AI Startups

Just over 125 years ago, on Nov. 8, 1895, the world’s first X-ray image was captured. It was a breakthrough that set the stage for the modern medical imaging industry.

Over the decades, an entire ecosystem of medical imaging hardware and software came to be, with AI startups now playing a key role within it.

Today we’re announcing the NVIDIA Inception Alliance for Healthcare, an initiative where medical AI startups have new opportunities to chart innovations and accelerate their success with the help of NVIDIA and its healthcare industry partners.

Premier members of NVIDIA Inception, our accelerator program for more than 6,500 AI and data science startups across 90 countries, can now join the GE Healthcare Edison Developer Program. Through integration with the GE Healthcare Edison Platform, these startups gain access to GE Healthcare's global network to scale clinical and commercial activities across its expansive installed base of 4 million imaging, mobile diagnostic and monitoring units in 160 countries, which generate 230 million exams and associated data.

Premier members with FDA clearance also can join the Nuance AI Marketplace for Diagnostic Imaging. The Nuance AI Marketplace brings AI into the radiology workflow by connecting developers directly with radiology subscribers. It offers AI developers a single API to connect their AI solutions to radiologists across 8,000 healthcare facilities that use the Nuance PowerShare Network. And it gives radiology subscribers a one-stop shop to review, try, validate and buy AI models — bridging the technology divide to make AI useful, usable and used.

A Prescription for Growth

NVIDIA Inception, which recently added its 1,000th healthcare AI startup, offers its members a variety of ongoing benefits, including go-to-market support, technology assistance and access to NVIDIA expertise — all tailored to a business’s maturity stage. Startups get access to training through the NVIDIA Deep Learning Institute, preferred pricing on hardware through our global network of distributors, invitations to exclusive networking events and more.

To nurture the growth of AI startups in healthcare, and ultimately the entire medical ecosystem, NVIDIA is working with healthcare giants to offer Inception members an accelerated go-to-market path.

The NVIDIA Inception Alliance for Healthcare will forge new ways to grow through targeted networking, AI training, early access to technology, pitch competitions and technology integration. Members will receive customized training and support to develop, deploy and integrate NVIDIA GPU-accelerated apps everywhere within the medical imaging ecosystem.

Select startup members will have direct access to engage joint customers, in addition to marketing promotion of their results. The initiative will kick off with a pitch competition for leading AI startups in medical imaging and related supporting fields.

“Startups are on the forefront of innovation and the GE Healthcare Edison Developer Program provides them access to the world’s largest installed base of medical devices and customers,” said Karley Yoder, vice president and general manager of Artificial Intelligence at GE Healthcare. “Bringing together the world-class capabilities from industry-leading partners creates a fast-track to accelerate innovation in a connected ecosystem that will help improve the quality of care, lower healthcare costs and deliver better outcomes for patients.”

“With Nuance’s deep understanding of radiologists’ needs and workflow, we are uniquely positioned to help them transform healthcare by harnessing AI. The Nuance AI Marketplace gives radiologists the ability to easily purchase, validate and use AI models within solutions they use every day, so they can work smarter and more efficiently,” said Karen Holzberger, senior vice president and general manager of the Diagnostic Division at Nuance. “The AI models help radiologists focus their time and expertise on the right case at the right time, alleviate many repetitive, mundane tasks and, ultimately, improve patient care — and save more lives. Connecting NVIDIA Inception startup members to the Nuance AI Marketplace is a natural fit — and creates a connection for startups that benefits the entire industry.”

Learn more at the NVIDIA RSNA 2020 Special Address, which is open to the public on Tuesday, Dec. 1, at 6 p.m. CT. Kimberly Powell, NVIDIA’s vice president of healthcare, will discuss how we’re working with medical imaging companies, radiologists, data scientists, researchers and medical device companies to bring workflow acceleration, AI models and deployment platforms to the medical imaging ecosystem.

Subscribe to NVIDIA healthcare news.

Supercomputing Chops: China’s Tsinghua Takes Top Flops in SC20 Student Cluster Battle

Props to team top flops.

Virtual this year, the SC20 Student Cluster Competition was still all about teams vying for top supercomputing performance in the annual battle for HPC bragging rights.

That honor went to Beijing's Tsinghua University, whose six-member undergraduate team clocked in at 300 teraflops of processing performance.

A one teraflop computer can process one trillion floating-point operations per second.

The Virtual Student Cluster Competition was this year’s battleground for 19 teams. Competitors consisted of either high school or undergraduate students. Teams were made up of six members, an adviser and vendor partners.

Real-World Scenarios

In the 72-hour competition, student teams designed and built virtual clusters running NVIDIA GPUs in the Microsoft Azure cloud. Students completed a set of benchmarks and real-world scientific workloads.

Teams ran the GROMACS molecular dynamics application, tackling COVID-19 research. They also ran the CESM application to work on optimizing climate modeling code. The "reproducibility challenge" called on the teams to replicate results from an SC19 research paper.

Among other hurdles, teams were tossed a surprise Exascale Computing Project mini-application, miniVite, to test their chops at compiling, running and optimizing.

A leaderboard tracked the performance of their submissions, the amount of money spent on Microsoft Azure and the hourly burn rate of their cloud spending.

Roller-Coaster Computing Challenges

The Georgia Institute of Technology competed for its second time. This year’s squad, dubbed Team Phoenix, had the good fortune of landing advisor Vijay Thakkar, a Gordon Bell Prize nominee this year.

Half of the team members were teaching assistants for introductory systems courses at Georgia Tech, said team member Sudhanshu Agarwal.

Georgia Tech used NVIDIA GPUs “wherever it was possible, as GPUs reduced computation time,” said Agarwal.

“We had a lot of fun this year and look forward to participating in SC21 and beyond,” he said.

Pan Yueyang, a junior in computer science at Peking University, joined his university’s supercomputing team before taking the leap to participate in the SC20 battle. But it was full of surprises, he noted.

He said that during the competition his team ran into a series of unforeseen hiccups. “Luckily it finished as required and the budget was slightly below the limitation,” he said.

Jacob Xiaochen Li, a junior in computer science at the University of California, San Diego, said his team was relying on NVIDIA GPUs for the MemXCT portion of the competition to reproduce the scaling experiment along with memory bandwidth utilization. “Our results match the original chart closely,” he said, noting there were some hurdles along the way.

Po Hao Chen, a sophomore in computer science at Boston University, said he committed to the competition because he's always enjoyed algorithmic optimization. Like many, he had to juggle the competition with the demands of courses and exams.

“I stayed up for three whole days working on the cluster,” he said. “And I really learned a lot from this competition.”

Teams and Flops

  • Tsinghua University, China: 300 TFLOPS
  • ETH Zurich: 129 TFLOPS
  • Southern University of Science and Technology: 120 TFLOPS
  • Texas A&M University: 113 TFLOPS
  • Georgia Institute of Technology: 108 TFLOPS
  • Nanyang Technological University, Singapore: 105 TFLOPS
  • University of Warsaw: 75.0 TFLOPS
  • University of Illinois: 71.6 TFLOPS
  • Massachusetts Institute of Technology: 64.9 TFLOPS
  • Peking University: 63.8 TFLOPS
  • University of California, San Diego: 53.9 TFLOPS
  • North Carolina State University: 44.3 TFLOPS
  • Clemson University: 32.6 TFLOPS
  • Friedrich-Alexander University Erlangen-Nuremberg: 29.0 TFLOPS
  • Northeastern University: 21.1 TFLOPS
  • Shanghai Jiao Tong University: 19.9 TFLOPS
  • ShanghaiTech University: 14.4 TFLOPS
  • University of Texas: 13.1 TFLOPS
  • Wake Forest University: 9.172 TFLOPS

Bringing Enterprise Medical Imaging to Life: RSNA Highlights What’s Next for Radiology

As the healthcare world battles the pandemic, the medical-imaging field is gaining ground with AI, forging new partnerships and funding startup innovation. It will all be on display at RSNA, the Radiological Society of North America’s annual meeting, taking place Nov. 29 – Dec. 5.

Radiologists, healthcare organizations, developers and instrument makers at RSNA will share their latest advancements and what’s coming next — with an eye on the growing ability of AI models to integrate with medical-imaging workflows. More than half of informatics abstracts submitted to this year’s virtual conference involve AI.

In a special public address at RSNA, Kimberly Powell, NVIDIA’s VP of healthcare, will discuss how we’re working with research institutions, the healthcare industry and AI startups to bring workflow acceleration, deep learning models and deployment platforms to the medical imaging ecosystem.

Healthcare and AI experts worldwide are putting monumental effort into developing models that can help radiologists determine the severity of COVID cases from lung scans. They’re also building platforms to smoothly integrate AI into daily workflows, and developing federated learning techniques that help hospitals work together on more robust AI models.

The NVIDIA Clara Imaging application framework is poised to advance this work with NVIDIA GPUs and AI models that can accelerate each step of the radiology workflow, including image acquisition, scan annotation, triage and reporting.

Delivering Tools to Radiologists, Developers, Hospitals

AI developers are working to bridge the gap between their models and the systems radiologists already use, with the goal of creating seamless integration of deep learning insights into tools like PACS digital archiving systems. Here’s how NVIDIA is supporting their work:

  • We’ve strengthened the NVIDIA Clara application framework’s full-stack GPU-accelerated libraries and SDKs for imaging, with new pretrained models available on the NGC software hub. NVIDIA and the U.S. National Institutes of Health jointly developed AI models that can help researchers classify COVID cases from chest CT scans, and evaluate the severity of these cases.
  • Using the NVIDIA Clara Deploy SDK, Mass General Brigham researchers are testing a risk assessment model that analyzes chest X-rays to determine the severity of lung disease. The tool was developed by the Athinoula A. Martinos Center for Biomedical Imaging, which has adopted NVIDIA DGX A100 systems to power its research.
  • Together with King's College London, we introduced MONAI this year, an open-source AI framework for medical imaging. Based on the Ignite and PyTorch deep learning frameworks, the modular MONAI code can be easily ported into researchers' existing AI pipelines. So far, the GitHub project has dozens of contributors and over 1,500 stars.
  • NVIDIA Clara Federated Learning enables researchers to collaborate on training robust AI models without sharing patient information. It’s been used by hospitals and academic medical centers to train models for mammogram assessment, and to assess the likelihood that patients with COVID-19 symptoms will need supplemental oxygen.

NVIDIA at RSNA

RSNA attendees can check out NVIDIA's digital booth to discover more about GPU-accelerated AI in medical imaging. Hands-on training courses from the NVIDIA Deep Learning Institute are also available, covering medical imaging topics including image classification, coarse-to-fine contextual memory and data augmentation with generative networks. Several conference sessions also feature NVIDIA speakers.

Over 50 members of NVIDIA Inception — our accelerator program for AI startups — will be exhibiting at RSNA, including Subtle Medical, which developed the first AI tools for medical imaging enhancement to receive FDA clearance and this week announced $12 million in Series A funding.

Another, TrainingData.io, used the NVIDIA Clara SDK to train a segmentation AI model that analyzes COVID disease progression in chest CT scans. And South Korean startup Lunit recently received the European CE mark and partnered with GE Healthcare on an AI tool that flags abnormalities on chest X-rays for radiologists' review.

Visit the NVIDIA at RSNA webpage for a full list of activities at the show. Email us to request a meeting with our deep learning experts.

Subscribe to NVIDIA healthcare news here.

Science Magnified: Gordon Bell Winners Combine HPC, AI

Seven finalists, including both winners of the 2020 Gordon Bell awards, used supercomputers to see atoms, stars and more with greater clarity, all accelerated with NVIDIA technologies.

Their efforts required the traditional number crunching of high performance computing, the latest data science in graph analytics, AI techniques like deep learning or combinations of all of the above.

The Gordon Bell Prize is regarded as the Nobel Prize of the supercomputing community, attracting some of the most ambitious efforts of researchers worldwide.

AI Helps Scale Simulation 1,000x

Winners of the traditional Gordon Bell award collaborated across universities in Beijing, Berkeley and Princeton, as well as Lawrence Berkeley National Laboratory (Berkeley Lab). They used a combination of HPC and neural networks, which they called DeePMD-kit, to run complex molecular dynamics simulations 1,000x faster than previous work while maintaining accuracy.

In one day on the Summit supercomputer at Oak Ridge National Laboratory, they modeled 2.5 nanoseconds in the life of 127.4 million atoms, 100x more than the prior efforts.

Their work aids the understanding of complex materials, as well as fields that rely heavily on molecular modeling, such as drug discovery. In addition, it demonstrated the power of combining machine learning with physics-based modeling and simulation on future supercomputers.

Atomic-Scale HPC May Spawn New Materials 

Among the finalists, a team including members from Berkeley Lab and Stanford optimized the BerkeleyGW application to bust through the complex math needed to calculate atomic forces binding more than 1,000 atoms with 10,986 electrons, about 10x more than prior efforts.

“The idea of working on a system with tens of thousands of electrons was unheard of just 5-10 years ago,” said Jack Deslippe, a principal investigator on the project and the application performance lead at the U.S. National Energy Research Scientific Computing Center.

Their work could pave a way to new materials for better batteries, solar cells and energy harvesters as well as faster semiconductors and quantum computers.

The team used all 27,654 GPUs on the Summit supercomputer to get results in just 10 minutes, thanks to harnessing an estimated 105.9 petaflops of double-precision performance.

Developers are continuing the work, optimizing their code for Perlmutter, a next-generation system using NVIDIA A100 Tensor Core GPUs that sport hardware to accelerate 64-bit floating-point jobs.

Analytics Sifts Text to Fight COVID

Using a form of data mining called graph analytics, a team from Oak Ridge and Georgia Institute of Technology found a way to search for deep connections in medical literature using a dataset they created with 213 million relationships among 18.5 million concepts and papers.

Their DSNAPSHOT (Distributed Accelerated Semiring All-Pairs Shortest Path) algorithm, using the team’s customized CUDA code, ran on 24,576 V100 GPUs on Summit, delivering results on a graph with 4.43 million vertices in 21.3 minutes. They claimed a record for deep search in a biomedical database and showed the way for others.
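
The "semiring" in the algorithm's name refers to swapping ordinary matrix multiplication's multiply-and-add for add-and-minimum, which turns repeated matrix products into shortest-path search. The NumPy toy below illustrates that min-plus idea on a made-up four-node graph; it is nothing like the team's distributed CUDA implementation.

```python
import numpy as np

INF = np.inf

def min_plus(A, B):
    """Min-plus ('tropical' semiring) matrix product: entry (i, j) is the
    cheapest way to go i -> k -> j over any intermediate node k."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def all_pairs_shortest_paths(dist):
    """Repeated min-plus squaring of the weighted adjacency matrix yields
    all-pairs shortest paths in O(log V) matrix products."""
    n = dist.shape[0]
    hops = 1
    while hops < n - 1:
        dist = min_plus(dist, dist)
        hops *= 2
    return dist

# Toy 4-node weighted graph (INF = no direct edge, 0 on the diagonal).
graph = np.array([[0,   3, INF,   7],
                  [8,   0,   2, INF],
                  [5, INF,   0,   1],
                  [2, INF, INF,   0]], dtype=float)
print(all_pairs_shortest_paths(graph))
```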

Graph analytics finds deep patterns in biomedical literature related to COVID-19.

“Looking forward, we believe this novel capability will enable the mining of scholarly knowledge … (and could be used in) natural language processing workflows at scale,” Ramakrishnan Kannan, team lead for computational AI and machine learning at Oak Ridge, said in an article on the lab’s site.

Tuning in to the Stars

Another team pointed the Summit supercomputer at the stars in preparation for one of the biggest big-data projects ever tackled. They created a workflow that handled six hours of simulated output from the Square Kilometre Array (SKA), a network of thousands of radio telescopes expected to come online later this decade.

Researchers from Australia, China and the U.S. analyzed 2.6 petabytes of data on Summit to provide a proof of concept for one of SKA’s key use cases. In the process they revealed critical design factors for future radio telescopes and the supercomputers that study their output.

The team's work generated 247 GB/s of data and spawned 925 GB/s of I/O. Like many other finalists, they relied on the fast, low-latency InfiniBand links powered by NVIDIA Mellanox networking, widely used in supercomputers like Summit to speed data among thousands of computing nodes.

Simulating the Coronavirus with HPC+AI

The four teams stand beside three other finalists who used NVIDIA technologies in a competition for a special Gordon Bell Prize for COVID-19.

The winner of that award used all the GPUs on Summit to create the largest, longest and most accurate simulation of a coronavirus to date.

“It was a total game changer for seeing the subtle protein motions that are often the important ones, that’s why we started to run all our simulations on GPUs,” said Lilian Chong, an associate professor of chemistry at the University of Pittsburgh, one of 27 researchers on the team.

“It’s no exaggeration to say what took us literally five years to do with the flu virus, we are now able to do in a few months,” said Rommie Amaro, a researcher at the University of California at San Diego who led the AI-assisted simulation.
