Healthcare Headliners Put AI Under the Microscope at GTC

Two revolutions are meeting in the field of life sciences — the explosion of digital data and the rise of AI computing to help healthcare professionals make sense of it all, said Daphne Koller and Kimberly Powell at this week’s GPU Technology Conference.

Powell, NVIDIA’s vice president of healthcare, presented an overview of AI innovation in medicine that highlighted advances in drug discovery, medical imaging, genomics and intelligent medical instruments.

“There’s a digital biology revolution underway, and it’s generating enormous data, far too complex for human understanding,” she said. “With algorithms and computations at the ready, we now have the third ingredient — data — to truly enter the AI healthcare era.”

And Koller, a Stanford adjunct professor and CEO of the AI drug discovery company Insitro, used her talk to outline the challenges of drug development and the ways in which predictive machine learning models can enable a better understanding of disease-related biological data.

Digital biology “allows us to measure biological systems in entirely new ways, interpret what we’re measuring using data science and machine learning, and then bring that back to engineer biology to do things that we’d never otherwise be able to do,” she said.

Watch replays of these talks — part of a packed lineup of more than 100 healthcare sessions among 1,600 on-demand sessions — by registering free for GTC through April 23. Registration isn’t required to watch a replay of the keynote address by NVIDIA CEO Jensen Huang.

Data-Driven Insights into Disease

Recent advancements in biotechnology — including CRISPR, induced pluripotent stem cells and more widespread availability of DNA sequencing — have allowed scientists to gather “mountains of data,” Koller said in her talk, “leaving us with a problem of how to interpret those data.”

“Fortunately, this is where the other revolution comes in, which is that using machine learning to interpret and identify patterns in very large amounts of data has transformed virtually every sector of our existence,” she said.

The data-intensive process of drug discovery requires researchers to understand the biological structure of a disease, and then vet potential compounds that could be used to bind with a critical protein along the disease pathway. Finding a promising therapeutic is a complex optimization problem, and despite the exponential rise in the amount of digital data available in the last decade or two, the process has been getting slower and more expensive.

Daphne Koller, CEO of Insitro

Known as Eroom’s law, this trend has seen the research and development cost of bringing a new drug to market climb steadily since the 1980s, costing pharmaceutical companies ever more time and money. Koller said that’s largely because of all the potential drug candidates that fail to get approved for use.

“What we aim to do at Insitro is to understand those failures, and try and see whether machine learning — combined with the right kind of data generation — can get us to make better decisions along the path and avoid a lot of those failures,” she said. “Machine learning is able to see things that people just cannot see.”

Bringing AI to vast datasets can help scientists determine how physical characteristics like height and weight, known as phenotypes, relate to genetic variants, known as genotypes. In many cases, “these associations give us a hint about the causal drivers of disease,” said Koller.
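
To make the idea concrete, here is a toy sketch of the simplest form of genotype-phenotype association: regressing a phenotype such as height on the allele count of a single variant. The data, effect size and sample size below are invented for illustration; this is not Insitro’s method.

```python
# Toy genotype-phenotype association: does allele count predict height?
# All numbers below are synthetic and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
genotype = rng.integers(0, 3, size=500)   # 0, 1 or 2 copies of a variant
height_cm = 170 + 1.5 * genotype + rng.normal(0, 6, size=500)

# A significant slope hints at an association worth investigating further.
result = stats.linregress(genotype, height_cm)
print(f"effect per allele: {result.slope:.2f} cm, p-value: {result.pvalue:.2e}")
```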

She gave the example of NASH, or nonalcoholic steatohepatitis, a common liver condition related to obesity and diabetes. To study underlying causes and potential treatments for NASH, Insitro worked with biopharmaceutical company Gilead to apply machine learning to liver biopsy and RNA sequencing data from clinical trials representing hundreds of patients.

The team created a machine learning model that analyzes biopsy images to capture a quantitative representation of a patient’s disease state, and found that even with only weak supervision, the AI’s predictions aligned with the scores assigned by clinical pathologists. The models could even differentiate between images with and without NASH, a distinction that’s difficult to make with the naked eye.

Accelerating the AI Healthcare Era

It’s not enough to just have abundant data to create an effective deep learning model for medicine, however. Powell’s GTC talk focused on domain-specific computational platforms — like the NVIDIA Clara application framework for healthcare — that are tailored to the needs and quirks of medical datasets.

The NVIDIA Clara Discovery suite of AI libraries harnesses transformer models, popular in natural language processing, to parse biomedical data. Using the NVIDIA Megatron framework for training transformers helps researchers build models with billions of parameters — like MegaMolBART, an NLP generative drug discovery model in development by NVIDIA and AstraZeneca for use in reaction prediction, molecular optimization and de novo molecular generation.

Kimberly Powell, VP of healthcare at NVIDIA

University of Florida Health has also used the NVIDIA Megatron framework and NVIDIA BioMegatron pre-trained model to develop GatorTron, the largest clinical language model to date, which was trained on more than 2 million patient records with more than 50 million interactions.

“With biomedical data at scale of petabytes, and learning at the scale of billions and soon trillions of parameters, transformers are helping us do and find the unexpected,” Powell said.
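
As a rough illustration of how such models parse biomedical text, the sketch below runs a single masked-token prediction using the Hugging Face Transformers library and a generic bert-base-uncased checkpoint as stand-ins. Megatron’s actual training stack, with model parallelism across billions of parameters, is far more involved, and this checkpoint is not BioMegatron or GatorTron.

```python
# Minimal masked-language-model inference over a clinical-style sentence.
# Uses a generic public checkpoint as a stand-in for a domain model.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The patient was prescribed [MASK] to manage hypertension."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and show the model's top guesses for it.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
top_ids = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```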

Clinical decisions, too, can be supported by AI insights that parse data from health records, medical imaging instruments, lab tests, patient monitors and surgical procedures.

“No one hospital’s the same, and no healthcare practice is the same,” Powell said. “So we need an entire ecosystem approach to developing algorithms that can predict the future, see the unseen, and help healthcare providers make complex decisions.”

The NVIDIA Clara framework offers more than 40 domain-specific pretrained models in the NGC catalog, as well as NVIDIA Federated Learning, which allows different institutions to collaborate on AI model development without sharing patient data with each other, overcoming challenges of data governance and privacy.
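
For intuition, here is a minimal sketch of the federated-averaging idea underlying federated learning: each institution trains on its own private data and shares only model weights, which a central server averages. It is a generic illustration, not the NVIDIA Clara federated learning API.

```python
# Federated averaging in miniature: weights travel, patient data never does.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One site's private training: a few steps of least-squares gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    # Only the locally updated weights leave each institution.
    updates = [local_update(global_w, X, y) for X, y in sites]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.2, 2.0])
sites = []
for _ in range(3):   # three hospitals, each with a private dataset
    X = rng.normal(size=(200, 3))
    sites.append((X, X @ true_w + rng.normal(0, 0.1, size=200)))

w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, sites)
print("recovered weights:", np.round(w, 2))
```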

And to power the next generation of intelligent medical instruments, the newly available NVIDIA Clara AGX developer kit helps hospitals develop and deploy AI across smart sensors such as endoscopes, ultrasound devices and microscopes.

“As sensor technology continues to innovate, so must the computing platforms that process them,” Powell said. “With AI, instruments can become smaller, cheaper and guide an inexperienced user through the acquisition process.”

These AI-driven devices could help reach areas of the world that lack access to many medical diagnostics today, she said. “The instruments that measure biology, see inside our bodies and perform surgeries are becoming intelligent sensors with AI and computing.”

GTC registration is open through April 23. Attendees will have access to on-demand content through May 11. For more, subscribe to NVIDIA healthcare news, and follow NVIDIA Healthcare on Twitter.

You Put a Spell on Me: GFN Thursdays Are Rewarding, 15 New Games Added This Week

This GFN Thursday — when GeForce NOW members learn what new games and updates are streaming from the cloud — we’re adding 15 games to the service, along with new content, including NVIDIA RTX and DLSS support in a number of titles.

Plus, we have a GeForce NOW Reward for Spellbreak from our friends at Proletariat.

Rewards Are Rewarding

One of the benefits of being a GeForce NOW member is gaining access to exclusive rewards. These can include free games, in-game content, discounts and more.

This week, we’re offering the Noble Oasis outfit, a rare outfit from the game Spellbreak that’s exclusive to GeForce NOW members.

Unleash your inner battlemage in Spellbreak, streaming on GeForce NOW.

Spellbreak Chapter 2: The Fracture is streaming on GeForce NOW. This massive update, released just last week, includes Dominion, the new 5 vs. 5 team capture-point game mode. It also introduced Leagues, the new competitive ranking mode where players work their way up from Bronze through Silver all the way to Legend. New map updates and gameplay changes round out the game’s biggest update yet.

Founders members will have first crack at the reward, starting today. It’s another benefit to thank you for gaming with us. Priority members are next in line and can look for their opportunity to redeem starting on Friday, April 16. Free members gain access on Tuesday, April 20.

It’s first come, first served, so be sure to redeem your reward as soon as you have access!

The Spellbreak in-game reward is the latest benefit for GeForce NOW members; others have included rewards for Discord, ARK: Survival Evolved, Hyper Scape, Warface, Warframe and more.

Signing up for GeForce NOW Rewards is simple. Log in to your NVIDIA GeForce NOW account, click “Update Rewards Settings” and check the box.

Updates to Your Catalog

GeForce NOW members are getting updates to a few games this week in the form of new expansions or RTX support.

Path of Exile, the popular free-to-play online action RPG, is getting an expansion in Path of Exile: Ultimatum. It contains the Ultimatum challenge league, eight new Skill and Support Gems, improvements to Vaal Skills, an overhaul of past league reward systems, dozens of new items and much more.

Meanwhile, three games are adding RTX support with real-time, ray-traced graphics and/or NVIDIA DLSS. Mortal Shell gets the full complement of RTX support, while Medieval Dynasty and Observer System Redux get DLSS support to improve image quality while maintaining framerate.

Let’s Play Today

Nigate Tale is one of 15 games joining the GeForce NOW library.

Of course, GFN Thursday has even more games in store for members. This week we welcomed Nigate Tale, day-and-date with its Steam release on Tuesday. It’s currently available for 15 percent off through April 18. Members can also look for 14 additional games to join our library. Complete list below:

Excited for the reward? Looking forward to streaming one of this week’s new releases or new content? Let us know on Twitter or in the comments below.

Accelerated Portfolios: NVIDIA Inception VC Alliance Connects Top Investors with Leading AI Startups

To better connect venture capitalists with NVIDIA and promising AI startups, we’ve introduced the NVIDIA Inception VC Alliance. This initiative, which VCs can apply to join now, aims to fast-track the growth of thousands of AI startups around the globe by serving as a critical nexus between the two communities.

AI adoption is growing across industries and startup funding has, of course, been booming. Investment in AI companies increased 52 percent last year to $52.1 billion, according to PitchBook.

A thriving AI ecosystem depends on both VCs and startups. The alliance aims to help investment firms identify and support leading AI startups early as part of their effort to realize meaningful returns down the line.

Introduced at GTC’s AI Day for VCs 

The alliance was unveiled this week by Jeff Herbst, vice president of business development and head of Inception at NVIDIA, speaking at the AI Day for VCs at GTC, NVIDIA’s annual tech conference.

“Being at the forefront of AI and data science puts NVIDIA Inception in a position to help educate and nurture both VCs and startups, which is why we’ve launched the NVIDIA Inception VC Alliance,” Herbst said. “By creating opportunities for our VC partners to connect with our Inception members, we strive to help advance the businesses of both. The reality is that startups need VCs and VCs need startups and we want to help be a part of that.”

Herbst said that founding members of the alliance include venture firms NEA, Acrew, Mayfield, Madrona Venture Group, In-Q-Tel, Pitango, Vanedge Capital and OurCrowd.

“AI startups are at the forefront of innovation and the NVIDIA Inception VC Alliance will bring them even closer access to the leading venture capital firms investing in AI,” said Greg Papadopoulos, venture partner at NEA. “Exposure to world-class capabilities from VC partners creates a fast-track for AI startups to accelerate their business in a way that benefits their stakeholders, customers and investors.”

Watch a recap of the announcement during the GTC session, “NVIDIA Inception and the Venture Capital Ecosystem – Supporting the Next Generation of AI Startups.”

Tapping Into NVIDIA’s Vast Startup Ecosystem

The NVIDIA Inception VC Alliance is part of the NVIDIA Inception program, an acceleration platform for over 7,500 startups working in AI, data science and HPC, representing every major industry and located in more than 90 countries.

Among its benefits, the alliance offers VCs exclusive access to high-profile events, visibility into top startups actively raising funds, and access to growth resources for portfolio companies.

VC alliance members can further nurture their portfolios by having their startups join NVIDIA Inception, which offers go-to-market support, infrastructure discounts and credits, AI training through NVIDIA’s Deep Learning Institute, and technology assistance.

Interested VCs are invited to join the NVIDIA Inception VC Alliance by applying here.

NVIDIA Accelerates World’s First TOP500 Academic Cloud-Native Supercomputer to Advance Research at Cambridge University

Scientific discovery powered by supercomputing has the potential to transform the world with research that benefits science, industry and society. A new open, cloud-native supercomputer at Cambridge University offers unrivaled performance that will enable researchers to pursue exploration like never before.

The Cambridge Service for Data Driven Discovery, or CSD3 for short, is a UK National Research Cloud and one of the world’s most powerful academic supercomputers. It’s hosted at the University of Cambridge and funded by UKRI via STFC DiRAC, STFC IRIS, EPSRC, MRC and UKAEA.

The site, home to the U.K.’s largest academic research cloud environment, is now being enhanced by a new 4-petaflops Dell-EMC system with NVIDIA A100 GPUs, NVIDIA BlueField DPUs and NVIDIA InfiniBand networking. It will deliver secured, multi-tenant, bare-metal high performance computing, AI and data analytics services for a broad cross section of the U.K. national research community. CSD3 employs a new cloud-native supercomputing platform enabled by NVIDIA and a revolutionary cloud HPC software stack, called Scientific OpenStack, developed by the University of Cambridge and StackHPC with funding from the DiRAC HPC Facility and the IRIS Facility.

The new system is projected to deliver 4 petaflops of performance at deployment, ranking CSD3 among the top 500 supercomputers in the world. In total, CSD3 uses NVIDIA GPUs and x86 CPUs to provide over 10 petaflops of performance, and it includes the U.K.’s fastest solid-state storage array, based on the Dell/Cambridge data accelerator.

The CSD3 provides open and secure access for researchers aiming to tackle some of the world’s most challenging problems across diverse fields such as astrophysics, nuclear fusion power generation development and lifesaving clinical medicine applications. It will advance scientific exploration using converged simulation, AI and data analytics workflows that members of the research community can access more easily and securely without sacrificing application performance or slowing work.

NVIDIA DPUs, HDR InfiniBand Power Next-Generation Systems

CSD3 is enabled by the NVIDIA HDR 200G InfiniBand-connected BlueField-2 DPU to offload infrastructure management such as security policies and storage frameworks from the host while providing acceleration and isolation for workloads to maximize input/output performance.

“Providing an easy and secure way to access the immense computing power of CSD3 is crucial to ushering in a new generation of scientific exploration that serves both the scientific community and industry in the U.K.,” said Paul Calleja, director of Research Computing Services at Cambridge University. “The extreme performance of NVIDIA InfiniBand, together with the offloading, isolation and acceleration of workloads provided by BlueField DPUs, combined with our ‘Scientific OpenStack,’ has enabled Cambridge University to provide a world-class cloud-native supercomputer for driving research that will benefit all of humankind.”

Networking performance is further accelerated by NVIDIA HDR InfiniBand’s In-Network Computing engines, providing optimal bare-metal performance while natively supporting multi-node tenant isolation. CSD3 also takes advantage of the latest generation of the Dell EMC PowerEdge portfolio, with Dell EMC PowerEdge C6520 and PowerEdge XE8545 servers, both optimized for data-intensive and AI workloads.

CSD3 is expected to be operational later this year. Learn more about CSD3.

Cloud-Native Supercomputing Is Here: So, What’s a Cloud-Native Supercomputer?

Cloud-native supercomputing is the next big thing in supercomputing, and it’s here today, ready to tackle the toughest HPC and AI workloads.

The University of Cambridge is building a cloud-native supercomputer in the UK. Two teams of researchers in the U.S. are separately developing key software elements for cloud-native supercomputing.

The Los Alamos National Laboratory, as part of its ongoing collaboration with the UCF Consortium, is helping to deliver capabilities that accelerate data algorithms. Ohio State University is updating Message Passing Interface software to enhance scientific simulations.

NVIDIA is making cloud-native supercomputers available to users worldwide in the form of its latest DGX SuperPOD. It packs key ingredients such as the NVIDIA BlueField-2 data processing unit (DPU) now in production.

So, What Is Cloud-Native Supercomputing?

Like Reese’s treats that wrap peanut butter in chocolate, cloud-native supercomputing combines the best of two worlds.

Cloud-native supercomputers blend the power of high performance computing with the security and ease of use of cloud computing services.

Put another way, cloud-native supercomputing provides an HPC cloud with a system as powerful as a TOP500 supercomputer that multiple users can share securely, without sacrificing the performance of their applications.

A BlueField DPU supports offload of security, communications and management tasks to create an efficient cloud-native supercomputer.

What Can Cloud-Native Supercomputers Do?

Cloud-native supercomputers pack two key features.

First, they let multiple users share a supercomputer while ensuring that each user’s workload stays secure and private. It’s a capability known as “multi-tenant isolation” that’s available in today’s commercial cloud computing services. But it’s typically not found in HPC systems used for technical and scientific workloads where raw performance is the top priority and security services once slowed operations.

Second, cloud-native supercomputers use DPUs to handle tasks such as storage, security for tenant isolation and systems management. This frees the CPU to focus on processing tasks, maximizing overall system performance.

The result is a supercomputer that enables native cloud services without a loss in performance. Looking forward, DPUs can handle additional offload tasks, so systems maintain peak efficiency running HPC and AI workloads.

How Do Cloud-Native Supercomputers Work?

Under the hood, today’s supercomputers couple two kinds of brains — CPUs and accelerators, typically GPUs.

Accelerators pack thousands of processing cores to speed parallel operations at the heart of many AI and HPC workloads. CPUs are built for the parts of algorithms that require fast serial processing. But over time they’ve become burdened with growing layers of communications tasks needed to manage increasingly large and complex systems.

Cloud-native supercomputers include a third brain to build faster, more efficient systems. They add DPUs that offload security, communications, storage and other jobs modern systems need to manage.

A Commuter Lane for Supercomputers

In traditional supercomputers, a computing job sometimes has to wait while the CPU handles a communications task. It’s a familiar problem that generates what’s called system noise.

In cloud-native supercomputers, computing and communications flow in parallel. It’s like opening a third lane on a highway to help all traffic flow more smoothly.

Early tests show cloud-native supercomputers can perform HPC jobs 1.4x faster than traditional ones, according to work at the MVAPICH lab at Ohio State, a specialist in HPC communications. The lab also showed cloud-native supercomputers achieve a 100 percent overlap of compute and communications functions, 99 percent higher than existing HPC systems.
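
The overlap being measured follows the classic non-blocking communication pattern, sketched below with mpi4py: post the sends and receives, compute while the data moves (work a DPU can take over entirely on a cloud-native system), then wait. This is a generic illustration, not the MVAPICH benchmark itself.

```python
# Overlapping computation with communication via non-blocking MPI calls.
# Run with, e.g.: mpirun -np 4 python overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

# Post the transfers, then keep the CPU busy while the network moves data.
reqs = [
    comm.Isend(send_buf, dest=(rank + 1) % size),
    comm.Irecv(recv_buf, source=(rank - 1) % size),
]
local_work = np.sin(send_buf).sum()   # compute overlapped with the transfer
MPI.Request.Waitall(reqs)

print(f"rank {rank}: work={local_work:.2f}, got data from rank {int(recv_buf[0])}")
```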

Experts Speak on Cloud-Native Supercomputing

That’s why around the world, cloud-native supercomputing is coming online.

“We’re building the first academic cloud-native supercomputer in Europe to offer bare-metal performance with cloud-native InfiniBand services,” said Paul Calleja, director of computing at the University of Cambridge.

“This system, which would rank among the top 100 in the November 2020 TOP500 list, will enable our researchers to optimize their applications using the latest advances in supercomputing architecture,” he added.

HPC specialists are paving the way for further advances in cloud-native supercomputers.

“The UCF consortium of industry and academic leaders is creating the production-grade communication frameworks and open standards needed to enable the future for cloud-native supercomputing,” said Steve Poole, speaking in his role as director of the Unified Communication Framework, whose members include representatives from Arm, IBM, NVIDIA, U.S. national labs and U.S. universities.

“Our tests show cloud-native supercomputers have the architectural efficiencies to lift supercomputers to the next level of HPC performance while enabling new security features,” said Dhabaleswar K. (DK) Panda, a professor of computer science and engineering at Ohio State and lead of its Network-Based Computing Laboratory.

Learn More About Cloud-Native Supercomputers

To learn more, check out our technical overview on cloud-native supercomputing. You can also find more information online about the new system at the University of Cambridge and NVIDIA’s new cloud-native supercomputer.

And to get the big picture on the latest advances in HPC, AI and more, watch the GTC keynote.

From Scientific Analysis to Artistic Renderings, NVIDIA Omniverse Accelerates HPC Visualization with New ParaView Connector

Whether helping the world understand our most immediate threats, like COVID-19, or seeing the future of landing humans on Mars, researchers are increasingly leaning on scientific visualization to analyze, understand and extract scientific insights.

With large-scale simulations generating tens or even hundreds of terabytes of data, and with team members dispersed around the globe, researchers need tools that can both enhance these visualizations and help them work simultaneously across different high performance computing systems.

NVIDIA Omniverse is a real-time collaboration platform that lets users share 2D and 3D simulation data in universal scene description (USD) format from their preferred content creation and visualization applications. Global teams can use Omniverse to view, interact with and update the same dataset with a live connection, making collaboration truly interactive.
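
For a sense of what USD looks like in practice, here is a minimal sketch that authors a stage with Pixar’s USD Python bindings (the usd-core package). The file and prim names are invented, and publishing to an Omniverse Nucleus server would be a separate step through a connector.

```python
# Author a tiny USD stage: one transform with a sphere underneath it.
from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("shared_scene.usda")
world = UsdGeom.Xform.Define(stage, "/World")

# A stand-in for an asset a researcher might share with collaborators.
probe = UsdGeom.Sphere.Define(stage, "/World/Probe")
probe.GetRadiusAttr().Set(0.5)
UsdGeom.XformCommonAPI(probe).SetTranslate(Gf.Vec3d(1.0, 0.0, 2.0))

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()   # collaborators can now open the same .usda file
```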

Omniverse ParaView Connector

The platform has expanded to address the scientific visualization community and now includes a connector to ParaView, one of the world’s most popular scientific visualization applications. Researchers use ParaView on their local workstations or on HPC systems to analyze large datasets for a variety of domains, including astrophysics, climate and weather, fluid dynamics and structural analysis.

With the availability of the Omniverse ParaView Connector, announced at GTC21, researchers can boost their productivity and speed their discoveries. Large datasets no longer need to be downloaded and exchanged, and colleagues can get instantaneous feedback as Omniverse users can work in the same workspace in the cloud.

The NVIDIA Omniverse pipeline

Users can upload their USD format data to the Omniverse Nucleus DB from various application connectors, including the ParaView Connector. The clients then connect to the Omniverse Kit and take advantage of:

  • Photorealistic visuals – Users can leverage a variety of core NVIDIA technologies, such as real-time ray tracing, photorealistic materials, depth of field, and advanced lighting and shading, through Omniverse platform components like the Omniverse RTX Renderer. This enables researchers to better visualize and understand the results of their simulations, leading to deeper insights.
  • Access to high-end visualization tools – Omniverse users can open and interact with USD files through a variety of popular applications like SideFX Houdini, Autodesk Maya and NVIDIA IndeX. See documentation on how to work with various applications in Omniverse to maximize analysis.
  • Interactivity at scale – Analyzing one part of a dataset at a time through batched renderings is time-consuming. And traditional applications are too slow to render features like ray tracing, soft shadows and depth of field in real time, which are required for fast, uninterrupted analysis. Now, users can intuitively interact with entire datasets in their original resolution at high frame rates for better and faster discoveries.
  • Real-time collaboration – Omniverse simplifies workflows by eliminating the need to download data on different systems. It also increases productivity by allowing researchers on different systems to visualize, analyze and modify the same data at the same time.
  • Publish cinematic visuals – Outreach is an essential part of scientific publications. With high-end rendering tools on the Omniverse platform, researchers and artists can interact in real time to transform their work into cinematic visuals that are easy for wide audiences to understand.

NVIDIA IndeX provides interactive visualization for large-scale volumetric data, allowing users to zoom in on the smallest details for any timestep in real time. With IndeX soon coming to Omniverse, users will be able to take advantage of both technologies for better and faster scientific analysis. This GTC session will go over what researchers can unlock when IndeX connects to Omniverse.

Visualization of Mars Lander using NVIDIA IndeX in NVIDIA Omniverse. Simulation data courtesy of NASA.

“Traditionally, scientists generate visualizations that are useful for data analysis, but are not always aesthetic and straightforward to understand by a broader audience,” said Brad Carvey, an Emmy and Addy Award-winning visualization research engineer at Sandia National Labs. “To generate a range of visualizations, I use ParaView, Houdini FX, Substance Painter, Photoshop and other applications. Omniverse allows me to use all of these tools, interactively, to create what I call ‘impactful visualizations.’”

Learn More from Omniverse Experts

Attend the following GTC sessions to dive deeper into the features and benefits of Omniverse and the ParaView connector:

Get Started Today

The Omniverse ParaView Connector is coming soon to Omniverse. Download and get started with Omniverse open beta here.

Secure AI Data Centers at Scale: Next-Gen DGX SuperPOD Opens Era of Cloud-Native Supercomputing

As businesses extend the power of AI and data science to every developer, IT needs to deliver seamless, scalable access to supercomputing with cloud-like simplicity and security.

At GTC21, we introduced the latest NVIDIA DGX SuperPOD, which gives business, IT and their users a platform for securing and scaling AI across the enterprise, with the necessary software to manage it as well as a white-glove services experience to help operationalize it.

Solving AI Challenges of Every Size, at Massive Scale

Since its introduction, DGX SuperPOD has enabled enterprises to scale their development on infrastructure that can tackle problems of a size and complexity that were previously unsolvable in a reasonable amount of time. It’s AI infrastructure built and managed the way NVIDIA does its own.

As AI gets infused into almost every aspect of modern business, the need to deliver almost limitless access to computational resources powering development has been scaling exponentially. This escalation in demand is exemplified by business-critical applications like natural language processing, recommender systems and clinical research.

Organizations often tap into the power of DGX SuperPOD in two ways. Some use it to solve huge, monolithic problems such as conversational AI, where the computational power of an entire DGX SuperPOD is brought to bear to accelerate the training of complex natural language processing models.

Others use DGX SuperPOD to service an entire company, providing multiple teams access to the system to support fluctuating needs across a wide variety of projects. In this mode, enterprise IT is often acting as a service provider, managing this AI infrastructure-as-a-service, with multiple users (perhaps even adversarial ones) who need and expect complete isolation of each other’s work and data.

DGX SuperPOD with BlueField DPU

Increasingly, businesses need to bring the world of high-performance AI supercomputing into an operational mode where many developers can be assured their work is secure and isolated, as it is in the cloud, and where IT can manage the environment much like a private cloud, with the ability to deliver resources to jobs, right-sized to the task, in a secure, multi-tenant environment.

This is called cloud-native supercomputing and it’s enabled by NVIDIA BlueField-2 DPUs, which bring accelerated, software-defined data center networking, storage, security and management services to AI infrastructure.

With a data processing unit optimized for enterprise deployment and 200 Gbps network connectivity, enterprises gain state-of-the-art, accelerated, fully programmable networking that implements zero-trust security to protect against breaches and isolates users and data, all with bare-metal performance.

Every DGX SuperPOD now has this capability with the integration of two NVIDIA BlueField-2 DPUs in each DGX A100 node within it. IT administrators can use the offload, accelerate and isolate capabilities of NVIDIA BlueField DPUs to implement secure multi-tenancy for shared AI infrastructure without impacting the AI performance of the DGX SuperPOD.

Infrastructure Management with Base Command Manager

Every week, NVIDIA manages thousands of AI workloads executed on our internal DGX SATURNV infrastructure, which includes over 2,000 DGX systems. To date, we’ve run over 1.2 million jobs on it supporting over 2,500 developers across more than 200 teams. We’ve also been developing state-of-the-art infrastructure management software that ensures every NVIDIA developer is fully productive as they perform their research and develop our autonomous systems technology, robotics, simulations and more.

The software supports all this work, simplifies and streamlines management, and lets our IT team monitor health, utilization, performance and more. We’re adding this same software, called NVIDIA Base Command Manager, to DGX SuperPOD so businesses can run their environments the way we do. We’ll continuously improve Base Command Manager, delivering the latest innovations to customers automatically.

White-Glove Services

Deploying AI infrastructure is more than just installing servers and storage in data center racks. When a business decides to scale AI, they need a hand-in-glove experience that guides them from design to deployment to operationalization, without burdening their IT team to figure out how to run it, once the “keys” are handed over.

With DGX SuperPOD White Glove Services, customers enjoy a full lifecycle services experience that’s backed by proven expertise from install to operations. Customers benefit from pre-delivery performance certified on NVIDIA’s own acceptance cluster, which validates the deployed system is running at specification before it’s handed off.

White Glove Services also include a dedicated multidisciplinary NVIDIA team that covers everything from installation to infrastructure management to workflow to addressing performance-impacting bottlenecks and optimizations. The services are designed to give IT leaders peace of mind and confidence as they entrust their business to DGX SuperPOD.

DGX SuperPOD at GTC21

To learn more about DGX SuperPOD and how you can consolidate AI infrastructure and centralize development across your enterprise, check out the talks presented by Charlie Boyle, vice president and general manager of DGX Systems, who will cover our DGX SuperPOD news and more in two separate sessions at GTC:

Register for GTC, which runs through April 16, for free.

Learn more:

XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk

Applying for a home mortgage can resemble a part-time job. But whether consumers are seeking out a home loan, car loan or credit card, there’s an incredible amount of work going on behind the scenes in a bank’s decision — especially if it has to say no.

To comply with an alphabet soup of financial regulations, banks and mortgage lenders have to be able to explain the reasons for rejections to both applicants and regulators.

Busy in this domain, Wells Fargo will present at NVIDIA GTC21 this week some of its latest development work on this complex decision-making, using AI models accelerated by GPUs.

To inform their decisions, lenders have historically applied linear and nonlinear regression models for financial forecasting, and logistic and survival models for default risk. These simple, decades-old methods are easy to explain to customers.

But machine learning and deep learning models are reinventing risk forecasting and in the process requiring explainable AI, or XAI, to allow for customer and regulatory disclosures.

Machine learning and deep learning techniques are more accurate but also more complex, which means banks need to spend extra effort to be able to explain decisions to customers and regulators.

These more powerful models allow banks to do a better job understanding the riskiness of loans, and may allow them to say yes to applicants that would have been rejected by a simpler model.

At the same time, these powerful models require more processing, so financial services firms like Wells Fargo are moving to GPU-accelerated models to improve processing, accuracy and explainability, and to provide faster results to consumers and regulators.

What Is Explainable AI?

Explainable AI is a set of tools and techniques that help understand the math inside an AI model.

XAI maps out the data inputs with the data outputs of models in a way that people can understand.

“You have all the linear sub-models, and you can see which factor is the most significant — you can see it very clearly,” said Agus Sudjianto, executive vice president and head of Corporate Model Risk at Wells Fargo, explaining his team’s recent work on Linear Iterative Feature Embedding (LIFE) in a research paper.

Wells Fargo XAI Development

The LIFE algorithm was developed to combine high prediction accuracy, ease of interpretation and efficient computation.

According to Wells Fargo, LIFE outperforms directly trained single-layer networks, as well as many other benchmark models, in experiments.

The research paper, titled Linear Iterative Feature Embedding: An Ensemble Framework for Interpretable Model, is authored by Sudjianto, Jinwen Qiu, Miaoqi Li and Jie Chen.

Default or No Default 

Using LIFE, the bank can generate reason codes that make the model interpretable, explaining which variables weighed heaviest in the decision. For example, codes might be generated for a high debt-to-income ratio or a FICO score that fell below a set minimum for a particular loan product.

There can be anywhere from 40 to 80 different variables taken into consideration for explaining rejections.

“We assess whether the customer is able to repay the loan. And then if we decline the loan, we can give a reason from a recent code as to why it was declined,” said Sudjianto.
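
As a hedged illustration of that flow, the sketch below scores an applicant with a toy linear model and converts the largest adverse feature contributions into reason codes. The feature names, weights and baselines are hypothetical, and this is not Wells Fargo’s LIFE model.

```python
# Toy adverse-action reason codes from a linear model's feature contributions.
import numpy as np

features = ["debt_to_income", "fico_score", "utilization", "recent_inquiries"]
weights = np.array([2.0, -0.01, 1.5, 0.3])     # positive pushes toward decline
baseline = np.array([0.30, 700.0, 0.40, 1.0])  # a typical approved applicant

def reason_codes(applicant, top_k=2):
    # Signed contribution of each feature relative to the baseline applicant.
    contrib = weights * (applicant - baseline)
    adverse = np.argsort(contrib)[::-1]        # strongest adverse factors first
    return [features[i] for i in adverse[:top_k] if contrib[i] > 0]

applicant = np.array([0.55, 640.0, 0.85, 4.0])
print(reason_codes(applicant))
```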

Future Work at Wells Fargo

Wells Fargo is also working on Deep ReLU networks to further its efforts in model explainability. Two of the team’s developers will be discussing research from their paper, Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification, at GTC.

Learn more about the LIFE model work by attending the GTC talk by Jie Chen, managing director for Corporate Model Risk at Wells Fargo. Learn about model work on Deep ReLU Networks by attending the talk by Aijun Zhang, a quantitative analytics specialist at Wells Fargo, and Zebin Yang, a Ph.D. student at Hong Kong University. 

Registration for GTC is free.

Image courtesy of joão vincient lewis on Unsplash

NVIDIA Advances Extended Reality, Unlocks New Possibilities for Companies Across Industries

NVIDIA technology has been behind some of the world’s most stunning virtual reality experiences.

Each new generation of GPUs has raised the bar for VR environments, producing interactive experiences with photorealistic details to bring new levels of productivity, collaboration and fun.

And with each GTC, we’ve introduced new technologies and software development kits that help developers create extended reality (XR) content and experiences that are more immersive and delightful than ever.

From tetherless streaming with NVIDIA CloudXR to collaborating in a virtual world with NVIDIA Omniverse, our latest technologies are powering the next generation of XR.

This year at GTC, NVIDIA announced a new release for CloudXR that adds support for iOS. We also had announcements with leading cloud service providers to deliver high-quality XR streaming from the cloud. And we released a new version of Variable Rate Supersampling to improve visual performance.

Bringing High Performance and VR Mobility Together

NVIDIA CloudXR is an advanced technology that gives XR users the best of both worlds: the performance of NVIDIA GPUs with the mobility of untethered all-in-one head-mounted displays.

CloudXR is designed to stream all kinds of XR content from any server to any device. Users can easily access powerful, high-quality immersive experiences from anywhere in the world, without being physically connected to a workstation.

From product designers reviewing 3D models to first responders running training simulations, anyone can benefit from CloudXR using Windows and Android devices. We will soon be releasing CloudXR 2.1, which adds support for Apple iOS AR devices, including iPads and iPhones.

Taking XR Streaming to the Cloud

With 5G networks rolling out, streaming XR over 5G from the cloud has the potential to significantly enhance workflows across industries. But the big challenge with delivering XR from the cloud is latency — for people to have a great VR experience, they have to maintain 20ms motion-to-photon latency.

To deliver the best cloud streaming experience, we’ve fine-tuned NVIDIA CloudXR. Over the past six months, we’ve taken great strides to bring CloudXR streaming to cloud service providers, from Amazon Web Services to Tencent.

This year at GTC, we’re continuing this march forward with additional news:

Also at GTC, Google will present a session that showcases CloudXR running on a Google Cloud instance.

To support CloudXR everywhere, we’re adding more client devices to our family.

We’ve worked with Qualcomm Technologies to deliver boundless XR, and with Ericsson on its 5G radio and packet core infrastructure to optimize CloudXR. Hear about the translation of this work to the manufacturing environment at BT’s session in GTC’s XR track.

And we’ve collaborated with Magic Leap on a CloudXR integration, which they will present at GTC. Magic Leap and CloudXR provide a great step forward for spatial computing and an advanced solution that brings many benefits to enterprise customers.

Redefining the XR Experience 

High-quality visuals are critical to giving users the best VR experience. That’s why NVIDIA developed Variable Rate Supersampling (VRSS), which allows rendering resources to be focused in a foveated region where they’ll have the greatest impact on image quality.

The first VRSS version supported fixed foveated rendering in the center of the screen. The latest version, VRSS 2, integrates dynamic gaze tracking, moving the foveated region where the user is looking.
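
Conceptually, gaze-tracked foveation is just a mapping from screen position to sampling rate, as in the illustrative sketch below. The radii and sample counts are invented, and VRSS itself runs inside the NVIDIA driver rather than in application code.

```python
# Pick a supersampling rate per screen tile based on distance from the gaze.
import math

def samples_per_pixel(tile_center, gaze, inner=0.15, outer=0.35):
    """Tiles near the gaze point get supersampled; the periphery stays at 1x."""
    d = math.dist(tile_center, gaze)
    if d < inner:
        return 8
    if d < outer:
        return 4
    return 1

gaze = (0.62, 0.48)   # normalized screen coordinates from an eye tracker
for tile in [(0.60, 0.50), (0.80, 0.30), (0.10, 0.90)]:
    print(tile, "->", samples_per_pixel(tile, gaze), "samples per pixel")
```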

These advances in XR technology are also paving the way for a solution that allows users to learn, work, collaborate or play with others in a highly realistic immersive environment. The CloudXR iOS integration will soon be available in NVIDIA Omniverse, a collaboration and simulation platform that streamlines 3D production pipelines.

Teams around the world can enter Omniverse and simultaneously collaborate across leading content creation applications in a shared virtual space. With the upcoming CloudXR 2.1 release, Omniverse users can stream specific AR solutions using their iOS tablets and phones.

Expanding XR Workflows at GTC

Learn more about these advances in XR technology at GTC. Register for free and explore over 40 speaker sessions that cover a variety of XR topics, from NVIDIA Omniverse to AI integrations.

Check out the latest XR demos, and get access to an exclusive Connect with Experts session.

And watch a replay of the GTC keynote address by NVIDIA CEO Jensen Huang to catch up on the latest announcements.

Sign up to get news and updates on NVIDIA XR technologies.

Feature image credit: Autodesk VRED.

GTC Showcases New Era of Design and Collaboration

Breakthroughs in 3D model visualization, such as real-time raytraced rendering and immersive virtual reality, are making architecture and design workflows faster, better and safer.  

At GTC this week, NVIDIA announced the newest advances for the AEC industry with the latest NVIDIA Ampere architecture-based enterprise desktop RTX GPUs, along with an expanded range of mobile laptop GPUs.  

AEC professionals will also want to learn more about NVIDIA Omniverse Enterprise, an open platform for 3D collaboration and physically accurate simulation. 

New RTX GPUs Bring More Power, Performance for AEC 

The NVIDIA RTX A5000 and A4000 GPUs are designed to enhance workflows for architectural design visualization. 

Based on the NVIDIA Ampere architecture, the RTX A5000 and A4000 integrate second-generation RT Cores to further boost ray tracing, and third-generation Tensor Cores to accelerate AI-powered workflows such as render denoising, deep learning super sampling and generative design.

Several architecture firms, including HNTB, have experienced how the RTX A5000 enhances design workflows.  

“The performance we get from the NVIDIA RTX A5000, even when enabling NVIDIA RTX Global Illumination, is amazing,” said Austin Reed, director of creative media studio at HNTB. “Having NVIDIA RTX professional GPUs at our designers’ desks will enable us to fully leverage RTX technology in our everyday workflows.”

NVIDIA’s new range of mobile laptop GPUs — including the NVIDIA RTX A5000, A4000, A3000 and A2000, and the NVIDIA T1200, T600 and T500 — allows AEC professionals to select the perfect GPU for their workloads and budgets.

With this array of choices, millions of AEC professionals can do their best work from anywhere, even compute-intensive work such as immersive VR for construction rehearsals or point cloud visualization of massive 3D models.

NVIDIA Omniverse Enterprise: A Shared Space for 3D Collaboration  

Architecture firms can now accelerate graphics and simulation workflows with NVIDIA Omniverse Enterprise, the world’s first technology platform that enables global 3D design teams to simultaneously collaborate in a shared virtual space. 

The platform enables organizations to unite their assets and design software tools, so AEC professionals can collaborate on a single project file in real time. 

Powered by NVIDIA RTX technology, Omniverse delivers high-performance, physically accurate simulation for complex 3D scenes like cityscapes, along with real-time ray- and path-traced rendering. Architects and designers can instantly share physically accurate models across teams and devices, accelerating design workflows and reducing the number of review cycles.

Artists Create Futuristic Renderings with NVIDIA RTX  

Overlapping with GTC, the “Building Utopia” design challenge allowed archviz specialists around the world to discover how NVIDIA RTX real-time rendering is transforming architectural design visualization. 

Our thanks to all the participants who showcased their creativity and submitted short animations they generated using Chaos Vantage running on NVIDIA RTX GPUs. NVIDIA, Lenovo, Chaos Group, KitBash3D and CG Architect are thrilled to announce the winners. 

Congratulations to the winner, Yi Xiang, who receives a Lenovo ThinkPad P15 with an NVIDIA Quadro RTX 5000 GPU. In second place, Cheng Lei will get an NVIDIA Quadro RTX 8000, and in third place, Dariele Polinar will receive an NVIDIA Quadro RTX 6000.

Image courtesy of Yi Xiang.

Discover More AEC Content at GTC 

Learn more about the newest innovations and all the AEC-focused content at GTC by registering for free.

Check out the latest GTC demos that showcase amazing technology. Join sessions on NVIDIA Omniverse presented by leading architecture firms like CannonDesign, KPF and Woods Bagot. And learn how companies like The Grid Factory and The Gettys Group are using RTX-powered immersive experiences to accelerate design workflows.

And be sure to watch a replay of the GTC keynote address by NVIDIA founder and CEO Jensen Huang.

Featured image courtesy of KPF – Beijing Century City – 北京世纪城市.
