NVIDIA Delivers Streaming AR and VR from the Cloud with AWS

NVIDIA and AWS are bringing the future of XR streaming to the cloud.

Announced today, the NVIDIA CloudXR platform will be available on Amazon EC2 P3 and G4 instances, which support NVIDIA V100 and T4 GPUs, allowing cloud users to stream high-quality immersive experiences to remote VR and AR devices.

The CloudXR platform includes the NVIDIA CloudXR software development kit, NVIDIA Virtual Workstation software and NVIDIA AI SDKs to deliver photorealistic graphics, with the mobile convenience of all-in-one XR headsets. XR is a collective term for VR, AR and mixed reality.

With the ability to stream from the cloud, professionals can now easily set up, scale and access immersive experiences from anywhere — they no longer need to be tethered to expensive workstations or external VR tracking systems.

The growing availability of advanced tools like CloudXR is paving the way for enhanced collaboration, streamlined workflows and high fidelity virtual environments. XR solutions are also introducing new possibilities for adding AI features and functionality.

With the CloudXR platform, many early access customers and partners across industries like manufacturing, media and entertainment, healthcare and others are enhancing immersive experiences by combining photorealistic graphics with the mobility of wireless head-mounted displays.

Lucid Motors recently announced the new Lucid Air, a powerful and efficient electric vehicle that users can experience through a custom implementation of the ZeroLight platform. Lucid Motors is developing a virtual design showroom using the CloudXR platform. By streaming the experience from AWS, shoppers can enter the virtual environment and see the advanced features of Lucid Air.

“NVIDIA CloudXR allows people all over the world to experience an incredibly immersive, personalized design with the new Lucid Air,” said Thomas Orenz, director of digital interactive marketing at Lucid Motors. “By using the AWS cloud, we can save on infrastructure costs by removing the need for onsite servers, while also dynamically scaling the VR configuration experiences for our customers.”

Another early adopter of CloudXR on AWS is The Gettys Group, a hospitality design, branding and development company based in Chicago. Gettys frequently partners with visualization company Theia Interactive to turn the design process into interactive Unreal Engine VR experiences.

When the coronavirus pandemic hit, Gettys and Theia used NVIDIA CloudXR to deliver customer projects to a local Oculus Quest HMD, streaming from an Amazon EC2 P3 instance running NVIDIA Virtual Workstation software.

“This is a game changer — by streaming collaborative experiences from AWS, we can digitally bring project stakeholders together on short notice for quick VR design alignment meetings,” said Ron Swidler, chief innovation officer at The Gettys Group. “This is going to save a ton of time and money, but more importantly it’s going to increase client engagement, understanding and satisfaction.”


Next-Level Streaming from the Cloud

CloudXR is built on NVIDIA RTX GPUs to allow streaming of immersive AR, VR or mixed reality experiences from anywhere.

The platform includes:

  • NVIDIA CloudXR SDK, which provides support for all OpenVR apps and includes broad client support for phones, tablets and HMDs. Its adaptive streaming protocol delivers the richest experiences with the lowest perceived latency by constantly adapting to network conditions.
  • NVIDIA Virtual Workstation software to deliver the most immersive, highest-quality graphics at the fastest frame rates. It’s available from cloud providers such as AWS, or can be deployed from an enterprise data center.
  • NVIDIA AI SDKs to accelerate performance and enhance immersive presence.

With the NVIDIA CloudXR platform on Amazon EC2 G4 and P3 instances supporting NVIDIA T4 and V100 GPUs, companies can deliver high-quality virtual experiences to any user, anywhere in the world.
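For teams that want to stand up a streaming host themselves, the basic flow is to launch a GPU-backed EC2 instance, install the CloudXR server components and connect from a headset client. The sketch below covers only the first step, using the AWS boto3 Python SDK; the AMI ID and key pair are placeholders, and it assumes an image prepared with NVIDIA drivers, Virtual Workstation software and the CloudXR server.

```python
# Minimal sketch: provisioning a GPU instance for CloudXR streaming with boto3.
# The AMI ID and key pair are placeholders -- substitute an image that has the
# NVIDIA drivers, Virtual Workstation software and CloudXR server installed.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: CloudXR-ready image
    InstanceType="g4dn.xlarge",        # G4 instance with an NVIDIA T4 GPU
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "cloudxr-streaming-server"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```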

Availability Coming Soon

NVIDIA CloudXR on AWS will be generally available early next year, with a private beta available in the coming months. Sign up now to get the latest news and updates on upcoming CloudXR releases, including the private beta.

Triaging COVID-19 Patients: 20 Hospitals in 20 Days Build AI Model that Predicts Oxygen Needs

Researchers at NVIDIA and Massachusetts General Brigham Hospital have developed an AI model that determines whether a person showing up in the emergency room with COVID-19 symptoms will need supplemental oxygen hours or even days after an initial exam.

The original model, named CORISK, was developed by scientist Dr. Quanzheng Li at Mass General Brigham. It combines medical imaging and health records to help clinicians more effectively manage hospitalizations at a time when many countries may start seeing a second wave of COVID-19 patients.

Oxygen prediction AI workflow

To develop an AI model that doctors trust and that generalizes to as many hospitals as possible, NVIDIA and Mass General Brigham embarked on an initiative called EXAM (EMR CXR AI Model), the largest, most diverse federated learning initiative, involving 20 hospitals from around the world.

In just two weeks, the global collaboration achieved a model with an area under the curve (AUC) of 0.94 (1.0 being a perfect score), resulting in excellent prediction of the level of oxygen required by incoming patients. The federated learning model will be released as part of NVIDIA Clara on NGC in the coming weeks.

Looking Inside the ‘EXAM’ Initiative

Using NVIDIA Clara Federated Learning Framework, researchers at individual hospitals were able to use a chest X-ray, patient vitals and lab values to train a local model and share only a subset of model weights back with the global model in a privacy-preserving technique called federated learning.
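As a rough mental model of that weight-sharing step, here is a minimal federated-averaging sketch in Python. It is a generic illustration of the technique only, not the Clara Federated Learning API; the local training function and the three hospital “sites” are hypothetical stand-ins.

```python
# Generic federated-averaging sketch (illustrative only -- not the Clara FL API).
# Each site trains locally on private data; only model weights travel to the server.
import numpy as np

def train_locally(global_weights, local_data):
    """Stand-in for one hospital's local training pass on private data."""
    # In practice: load the global weights into the local model, train on chest
    # X-rays, vitals and lab values, and return the updated weights.
    return [w + 0.01 * np.random.randn(*w.shape) for w in global_weights]

def federated_round(global_weights, sites):
    """One round: every site trains locally; the server averages the updates."""
    site_weights = [train_locally(global_weights, data) for data in sites]
    # Average each layer's weights across sites -- raw data never leaves a site.
    return [np.mean(layer_updates, axis=0) for layer_updates in zip(*site_weights)]

# Toy example: a two-layer model shared across three hypothetical hospital sites.
global_weights = [np.zeros((4, 4)), np.zeros(4)]
sites = [None, None, None]          # stand-ins for each hospital's private data
for _ in range(5):
    global_weights = federated_round(global_weights, sites)
print([w.shape for w in global_weights])
```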

The ultimate goal of this model is to predict the likelihood that a person showing up in the emergency room will need supplemental oxygen, which can aid physicians in determining the appropriate level of care for patients, including ICU placement.

Dr. Ittai Dayan, who leads development and deployment of AI at Mass General Brigham, co-led the EXAM initiative with NVIDIA, and facilitated the use of CORISK as the starting point for the federated learning training. The improvements were achieved by training the model on distributed data from a multinational, diverse dataset of patients across North and South America, Canada, Europe and Asia.

In addition to Mass Gen Brigham and its affiliated hospitals, other participants included: Children’s National Hospital in Washington, D.C.; NIHR Cambridge Biomedical Research Centre; The Self-Defense Forces Central Hospital in Tokyo; National Taiwan University MeDA Lab and MAHC and Taiwan National Health Insurance Administration; Kyungpook National University Hospital in South Korea; Faculty of Medicine, Chulalongkorn University in Thailand; Diagnosticos da America SA in Brazil; University of California, San Francisco; VA San Diego; University of Toronto; National Institutes of Health in Bethesda, Maryland; University of Wisconsin-Madison School of Medicine and Public Health; Memorial Sloan Kettering Cancer Center in New York; and Mount Sinai Health System in New York.

Each of these hospitals used NVIDIA Clara to train its local models and participate in EXAM.

Rather than needing to pool patient chest X-rays and other confidential information into a single location, each institution uses a secure, in-house server for its data. A separate server, hosted on AWS, holds the global deep neural network, and each participating hospital gets a copy of the model to train on its own dataset.

Collaboration on a Global Scale

Large-scale federated learning projects also are underway, aimed at improving drug discovery and bringing AI benefits to the point of care.

Owkin is teaming up with NVIDIA, King’s College London and more than a dozen other organizations on MELLODDY, a drug-discovery consortium based in the U.K., to demonstrate how federated learning techniques could give pharmaceutical partners the best of both worlds: the ability to leverage the world’s largest collaborative drug compound dataset for AI training without sacrificing data privacy.

King’s College London is hoping that its work with federated learning, as part of its London Medical Imaging and Artificial Intelligence Centre for Value-Based Healthcare project, could lead to breakthroughs in classifying stroke and neurological impairments, determining the underlying causes of cancers, and recommending the best treatment for patients.

Learn more about another AI model for COVID-19 utilizing a multinational dataset in this paper, and about the science behind federated learning in this paper.

American Express Adopts NVIDIA AI to Help Prevent Fraud and Foil Cybercrime

Financial fraud is surging along with waves of cybersecurity breaches.

Cybercrime costs the global economy $600 billion annually, or 0.8 percent of worldwide GDP, according to a 2018 estimate from McAfee. And consulting firm Accenture forecasts cyberattacks could cost companies $5.2 trillion worldwide by 2024.

Credit and bank cards are a major target. American Express, which handles more than eight billion transactions a year, is using deep learning on the NVIDIA GPU computing platform to combat fraud.

American Express has now deployed deep-learning-based models optimized with NVIDIA TensorRT and running on NVIDIA Triton Inference Server to detect fraud, NVIDIA CEO Jensen Huang announced at the GPU Technology Conference on Monday.

NVIDIA TensorRT is a high performance deep learning inference optimizer and runtime that minimizes latency and maximizes throughput.

NVIDIA Triton Inference Server software simplifies model deployment at scale and can be used as a microservice that enables applications to use AI models in data center production.
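To make the serving piece concrete, here is a minimal sketch of how a client application might query a model hosted on Triton over HTTP using the tritonclient Python package. The model name, tensor names and feature shape are placeholders, not American Express’s production configuration.

```python
# Minimal Triton HTTP client sketch. Model name, tensor names and the feature
# shape are placeholders -- not a real production configuration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One transaction encoded as a fixed-length feature vector (placeholder shape).
features = np.random.rand(1, 128).astype(np.float32)

inp = httpclient.InferInput("INPUT__0", list(features.shape), "FP32")
inp.set_data_from_numpy(features)
out = httpclient.InferRequestedOutput("OUTPUT__0")

result = client.infer(model_name="fraud_lstm", inputs=[inp], outputs=[out])
print("fraud score:", result.as_numpy("OUTPUT__0"))
```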

“Our fraud algorithms monitor in real time every American Express transaction around the world for more than $1.2 trillion spent annually, and we generate fraud decisions in mere milliseconds,” said Manish Gupta, vice president of Machine Learning and Data Science Research at American Express.

Online Shopping Spree

Online shopping has spiked since the pandemic. In the U.S. alone, online commerce rose 49 percent in April compared with early March, according to Adobe’s Digital Economy Index.

That means less cash and more digital dollars. And more digital dollars mean more bank and credit card usage, which has already seen increased fraud.

“Card fraud netted criminals $3.88 billion more in 2018 than in 2017,” said David Robertson, publisher of The Nilson Report, which tracks information about the global payments industry.

American Express, with more than 115 million active credit cards, has maintained the lowest fraud rate in the industry for 13 years in a row, according to The Nilson Report.

“Having our card members and merchants’ back is our top priority, so keeping our fraud rates low is key to achieving that goal,” said Gupta.

Anomaly Detection with GPU Computing

With online transactions rising, fraudsters are waging more complex attacks as financial firms step up security measures.

One area that is easier to monitor is anomalous spending patterns. These types of transactions on one card — known as “out of pattern” — could show a coffee was purchased in San Francisco and then five minutes later a tank of gas was purchased in Los Angeles.

Such anomalies are red-flagged using recurrent neural networks, or RNNs, which are particularly good at guessing what comes next in a sequence of data.

American Express has deployed long short-term memory networks, or LSTMs, a type of RNN that can deliver improved performance over standard RNNs.

That has meant closing gaps in latency and accuracy, two areas where American Express has made leaps. Its teams used NVIDIA DGX systems to accelerate the building and training of these LSTM models on mountains of structured and unstructured data using TensorFlow.
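As a purely illustrative sketch of this kind of model, the snippet below defines a small Keras LSTM that scores a sequence of transactions. The sequence length, feature count and layer sizes are assumptions for the sketch, not the American Express production architecture.

```python
# Illustrative TensorFlow/Keras LSTM for sequential transaction data.
# Sequence length, feature count and layer sizes are assumptions for this sketch.
import tensorflow as tf

SEQ_LEN, N_FEATURES = 32, 64   # e.g. the last 32 transactions, 64 features each

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(128),                        # summarize the sequence
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of fraud
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```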

50x Gains Over CPUs

The recently released TensorRT-optimized LSTM network aids the system that analyzes tens of millions of daily transactions in real time. This LSTM is now deployed using the NVIDIA Triton Inference Server on NVIDIA T4 GPUs for split-second inference.

The results are in: American Express implemented this enhanced, real-time fraud detection system with improved accuracy. It operates within a tight two-millisecond latency requirement and delivers a 50x improvement over a CPU-based configuration, which couldn’t meet that goal.

The financial services giant’s GPU-accelerated LSTM deep neural network combined with its long-standing gradient boosting machine (GBM) model — used for regression and classification — has improved fraud detection accuracy by up to six percent in specific segments.

Accuracy matters. A false positive that denies a customer’s legitimate transaction is an unpleasant experience for card members and merchants alike, says American Express.

“Especially in this environment, our customers need us now more than ever, so we’re supporting them with best-in-class fraud protection and servicing,” Gupta said.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

NVIDIA vGPU Software Accelerates Performance with Support for NVIDIA Ampere Architecture

From AI to VDI, NVIDIA virtual GPU products provide employees with powerful performance for any workflow.

vGPU technology helps IT departments easily scale the delivery of GPU resources, and allows professionals to collaborate and run advanced graphics and computing workflows from the data center or cloud.

Now, NVIDIA is expanding its vGPU software features with a new release that supports the NVIDIA A100 Tensor Core GPU with NVIDIA Virtual Compute Server (vCS) software. Based on NVIDIA vGPU technology, vCS enables AI and compute-intensive workloads to run in VMs.

With support for the NVIDIA A100, the latest NVIDIA vCS delivers significantly faster performance for AI and data analytics workloads.

Powered by the NVIDIA Ampere architecture, the A100 GPU provides strong scaling for GPU compute and deep learning applications running in single- and multi-GPU workstations, servers, clusters, cloud data centers, systems at the edge and supercomputers.

Enterprise data centers standardized on hypervisor-based virtualization can now deploy the A100 with vCS for all the operational benefits that virtualization brings with management and monitoring, without sacrificing performance. And with the workloads running in virtual machines, they can be managed, monitored and run remotely on any device, anywhere.

Graph shows that the normalized performance of the MIG 2g.10gb profile running an inferencing workload on bare metal (dark green) is nearly the same as when a Virtual Compute Server VM runs on each MIG instance (light green).

Engineers, researchers, students, data scientists and others can now tackle compute-intensive workloads in a virtual environment, accessing the most powerful GPU in the world through virtual machines that can be securely provisioned in minutes. As NVIDIA A100 GPUs become available in vGPU-certified servers from NVIDIA’s partners, professionals across all industries can accelerate their workloads with powerful performance.

Also, IT professionals get the management, monitoring and multi-tenancy benefits from hypervisors like Red Hat RHV/RHEL.

“Our customers have an increasing need to manage multi-tenant workflows running on virtual machines while providing isolation and security benefits,” said Chuck Dubuque, senior director of product marketing at Red Hat. “The new multi-instance GPU capabilities on NVIDIA A100 GPUs enable a new range of AI-accelerated workloads that run on Red Hat platforms from the cloud to the edge.”

Additional new features of the NVIDIA vGPU September 2020 release include:

  1. Multi-Instance GPU (MIG) with VMs: MIG expands the performance and value of the NVIDIA A100 by partitioning the GPU into as many as seven instances, each fully isolated with its own high-bandwidth memory, cache and compute cores. By combining MIG with vCS, enterprises get the management, monitoring and operational benefits of hypervisor-based server virtualization, running a VM on each MIG partition (see the sketch after this list).
  2. Heterogeneous Profiles and OSes: Because MIG allows differently sized instances, heterogeneous vCS profiles can be used on an A100 GPU, letting VMs of various sizes run on a single A100. Additionally, VMs running on NVIDIA GPUs with vCS can use heterogeneous operating systems, with different Linux distributions running simultaneously in different VMs.
  3. GPUDirect Remote Direct Memory Access: Now supported with NVIDIA vCS, GPUDirect RDMA enables network devices to access GPU memory directly, bypassing CPU host memory, decreasing GPU-to-GPU communication latency and completely offloading the CPU in a virtualized environment.
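The partitioning step itself happens on the host before the VMs are provisioned. The sketch below drives nvidia-smi from Python purely as an illustration; the 2g.10gb profile name is an example, available profiles depend on the GPU and driver version, and a vGPU deployment then layers vCS profiles on top of the resulting instances.

```python
# Rough illustration of carving an A100 into MIG instances from the host.
# Profile names are examples; available profiles vary by GPU and driver version.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable MIG mode on GPU 0 (may require draining workloads and a GPU reset).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the GPU instance profiles this device supports.
run(["nvidia-smi", "mig", "-lgip"])

# Create three 2g.10gb GPU instances, each with a default compute instance (-C).
run(["nvidia-smi", "mig", "-cgi", "2g.10gb,2g.10gb,2g.10gb", "-C"])
```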

Learn more about NVIDIA Virtual Compute Server, including how the technology was recognized as Disruptive Technology of the Year at VMworld, and see the latest announcement of VMware and NVIDIA partnering to develop enterprise AI solutions.

VMware vSphere support for vCS with A100 will be available next year. The NVIDIA virtual GPU portfolio also includes the Quadro Virtual Workstation for technical and creative professionals, and GRID vPC and vApps for knowledge workers.

GTC Brings the Latest in vGPU

Hear more about how NVIDIA Virtual Compute Server is being used in industries at the GPU Technology Conference, taking place October 5-9.

Adam Tetelman and Jeff Weiss from NVIDIA, joined by Timothy Dietrich from NetApp, will give an overview of NVIDIA Virtual Compute Server technology and discuss use cases and manageability.

In addition, a panel of experts from NVIDIA, ManTech and Maxar will share how NVIDIA vGPU is used in their solutions to analyze large amounts of data, enable remote visualization and accelerate compute for video streams and images.

Register now for GTC and check out all the sessions available.

Get Trained, Go Deep: How Organizations Can Transform Their Workforce into an AI Powerhouse

Despite the pandemic putting in-person training on hold, organizations can still offer instructor-led courses to their staff to develop key skills in AI, data science and accelerated computing.

NVIDIA’s Deep Learning Institute offers many online courses that deliver hands-on training. One of its most popular — recently updated and retitled as The Fundamentals of Deep Learning — will be taken by hundreds of attendees at next week’s GPU Technology Conference, running Oct. 5-9.

Organizations interested in boosting the deep learning skills of their personnel can arrange to get their teams trained by requesting a workshop from the DLI Course Catalog.

“Technology professionals who take our revamped deep learning course will emerge with the basics they need to start applying deep learning to their most challenging AI and machine learning applications,” said Craig Clawson, director of Training Services at NVIDIA. “This course is a key building block for developing a cutting-edge AI skillset.”

Huge Demand for Deep Learning

Deep learning is at the heart of the fast-growing fields of machine learning and AI. This makes it a skill that’s in huge demand and has put companies across industries in a race to recruit talent. LinkedIn recently reported that the fastest-growing job category in the U.S. is AI specialist, with annual job growth of 74 percent and an average annual salary of $136,000.

For many organizations, especially those in the software, internet, IT, higher education and consumer electronics sectors, investing in upskilling current employees can be critical to their success while offering a path to career advancement and increasing worker retention.

With interest in the field heating up, a recent article in Forbes highlighted that AI and machine learning, data science and IoT are among the most in-demand skills tech professionals should focus on. In other words, tech workers who lack these skills could soon find themselves at a professional disadvantage.

By developing needed skills, employees can make themselves more valuable to their organizations. And their employers benefit by embedding machine learning and AI functionality into their products, services and business processes.

“Organizations are looking closely at how AI and machine learning can improve their business,” Clawson said. “As they identify opportunities to leverage these technologies, they’re hustling to either develop or import the required skills.”

Get a glimpse of the DLI experience in this short video:

DLI Courses: An Invaluable Resource

The DLI has trained more than 250,000 developers globally. It has continued to deliver a wide range of training remotely via virtual classrooms during the COVID-19 pandemic.

Classes are taught by DLI-certified instructors who are experts in their fields, and breakout rooms support collaboration among students and interaction with the instructors.

And by completing select courses, students can earn an NVIDIA Deep Learning Institute certificate to demonstrate subject matter competency and support career growth.

It would be hard to exaggerate the potential that this new technology and the NVIDIA developer community hold for improving the world — and the community is growing faster than ever. It took 13 years for the number of registered NVIDIA developers to reach 1 million. Just two years later, it has grown to over 2 million.

Whether enabling new medical procedures, inventing new robots or joining the effort to combat COVID-19, the NVIDIA developer community is breaking new ground every day.

Courses like the re-imagined Fundamentals of Deep Learning are helping developers and data scientists deliver breakthrough innovations across a wide range of industries and application domains.

“Our courses are structured to give developers the skills they need to thrive as AI and machine learning leaders,” said Clawson. “What they take away from the courses, both for themselves and their organizations, is immeasurable.”

To get started on the journey of transforming your organization into an AI powerhouse, request a DLI workshop today.

What is deep learning? Read more about this core technology.

How AI Startup BroadBridge Networks Helps Security Teams Make Sense of Data Chaos

Cybersecurity has grown into a morass.

With increasingly hybrid computing environments, dispersed users accessing networks around the clock, and the Internet of Things creating more data than security teams have ever seen, organizations are throwing more security tools than ever at the problem.

In fact, Jonathan Flack, principal systems architect at BroadBridge Networks, said it’s not unusual for a company with a big network and a large volume of intellectual property to have 50 to 75 vendor solutions deployed within their networks.

“That’s insanity to me,” said Flack. “How can you converge all the information in a single space in order to act upon it effectively, in context?”

That’s precisely the problem BroadBridge, based in Fremont, Calif., is looking to solve. The three-year-old company is a member of NVIDIA Inception, a program that provides AI startups with go-to-market support, expertise and technology.

It’s applying AI, powered by NVIDIA GPUs, to security data such that varying data sources can be aligned temporally, essentially connecting all the dots for any moment in time.

A company might have active directory logs, Windows event logs and firewall logs, with events occurring within microseconds of each other. Overworked security staff don’t have time to fish through all those logs trying to align events.

Instead, BroadBridge does it for them, automatically collecting the data, correlating it and presenting it as a single slice of time, with precision down to the millisecond.

The company’s software effectively pinpoints the causes of events and suggests potential actions to be taken. And given that most security teams are understaffed amid a global shortage of qualified cybersecurity employees, they can use all the help they can get.

“Our objective is to lighten the workload so those people can go home after an eight-hour shift, spend time with their families and have some down time,” said Flack. “If you find an intrusion six months ago, you shouldn’t have to go mine through logs from all the affected systems to reassemble a picture of what happened. With all that data properly aggregated, aligned and archived, you simply run a BlazingSQL query against all of your network data for that specific timeframe.”
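As a hypothetical sketch of the kind of time-window query Flack describes, the snippet below uses BlazingSQL’s Python API; the table name, file path and column names are placeholders.

```python
# Hypothetical time-window query over archived, time-aligned network records
# using BlazingSQL. Table name, file path and column names are placeholders.
from blazingsql import BlazingContext

bc = BlazingContext()

# Register the archived network events (e.g. Parquet files on disk or in S3).
bc.create_table("network_events", "/data/archive/network_events.parquet")

# Pull every correlated event for the suspected intrusion window.
incident = bc.sql("""
    SELECT *
    FROM network_events
    WHERE event_time BETWEEN TIMESTAMP '2020-03-01 00:00:00'
                         AND TIMESTAMP '2020-03-02 00:00:00'
    ORDER BY event_time
""")
print(incident.head())
```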

Organic Approach to Data

While BroadBridge’s original models were trained on open-source data from the security community, the company’s AI approach differs from that of other companies in that providing a mature model out of the gate isn’t necessary. Instead, BroadBridge’s system is designed to be trained on each customer’s network.

“GM is going to have a different threat environment than some DoD office inside the Pentagon,” said Flack. “We provide a good initial starting point, and then we retrain the model using the customer’s own network data over time. The system is 100 percent self-reinforcing.”

The initial AI model provides security analysts with the ability to work through events that need to be investigated. They can triage and tag events as nominal or deserving of more investigation.

That metadata then gets stored, providing a record of what the inference server identified, what the analyst looked at, and what other events are worthy of analysis. All of that is then funneled into a deep learning pipeline that improves the model.

BroadBridge uses Kubernetes and Docker to provide dynamic scaling. Flack said the software can run real-time analytics on a 100GB network. The customer’s deep learning process is uploaded to an NVIDIA GPU instance on AWS, Azure, Google or Oracle clouds, where the AI is trained on the specifics of the customer’s network.

The company’s internal development has unfolded on NVIDIA DGX systems, which are purpose-built for the unique demands of AI. The first wave of development was conducted on DGX-1, and more recently on DGX A100, which Flack said has improved performance significantly.

“Four or five years ago, none of what we’re doing was at all possible,” he said. “Now we have a way to run multiple concurrent GPU-based workloads on systems that are as affordable as some 1U appliances.”

More to Come

Down the line, Flack said he envisions exposing an API to third-party vendors so they can use BroadBridge’s data to dynamically reconfigure device security postures. He also foresees the arrival of 5G as boosting the need for a tool that can parse through the increased data flows.

More immediately, Flack said the company has been looking to address the limitations of virtual private networks in the wake of the huge increase in working from home due to the COVID-19 pandemic.

Flack was careful to note that BroadBridge has no interest in replacing any of the sensors, logs or assessment tools companies are deploying in their security operations centers, or SOCs. Rather, it’s simply trying to create a platform to help security analysts make sense of all the data coming from all of these sources.

“Most of what you’re paying your SOC analysts for is herding cats,” he said. “Our objective is to stop them from herding cats so they can perform actual analysis.”

See BroadBridge Networks present in the NVIDIA Inception Premier Showcase at the GPU Technology Conference on Tuesday, October 6. Register for GTC here.

AI for Every Enterprise: NVIDIA, VMware CEOs Discuss Broad New Partnership

Promising to bring AI to every enterprise, VMware CEO Pat Gelsinger and NVIDIA CEO Jensen Huang kicked off VMworld 2020 Tuesday with a conversation detailing the companies’ broad new partnership.

VMware and NVIDIA announced that, together, they will deliver an end-to-end enterprise platform for AI as well as a new architecture for data center, cloud and edge that uses NVIDIA DPUs to support existing and next-generation applications.

“We’re going to bring the power of AI to every enterprise. We’re going to bring the NVIDIA AI computing platform and our AI application frameworks onto VMware,” Huang said.

View today’s VMworld 2020 CEO discussion featuring Pat Gelsinger and Jensen Huang, and join us at GTC 2020 on October 5 to learn more.

Through this collaboration, the rich set of AI software available on the NVIDIA NGC hub will be integrated into VMware vSphere, VMware Cloud Foundation and VMware Tanzu.

“For every virtual infrastructure admin, we have millions of people that know how to run the vSphere stack,” Gelsinger said. “They’re running it every day, all day long, it’s now the same tools, the same processes, the same networks, the same security, is now fully being made available on the GPU infrastructure.”

VMware CEO Pat Gelsinger

This will help accelerate AI adoption, enabling enterprises to extend existing infrastructure for AI, manage all applications with a single set of operations, and deploy AI-ready infrastructure where the data resides, across the data center, cloud and edge.

Additionally, as part of VMware’s Project Monterey, also announced Tuesday, the companies will partner to deliver an architecture for the hybrid cloud based on SmartNIC technology, including NVIDIA’s programmable BlueField-2 DPU.

“The characteristics, the pillars of Project Monterey of offloading the operating system, the data center operating system, onto the SmartNIC, isolating the applications from the control plane and the data plane, and accelerating the data processing and the security processing to line speed is going to make the data center so much more powerful, so much more performant,” Huang said.

Among the organizations integrating their VMware and NVIDIA ecosystems is the UCSF Center for Intelligent Imaging.

“I can’t imagine a more impactful use of AI than healthcare,” Huang said. “The intersection of people, disease and treatments is one of the greatest challenges of humanity, and one where AI will be needed to move the needle.”

A leader in the development of AI and analysis tools in medical imaging, the center uses the NVIDIA Clara healthcare application framework for AI-powered imaging and VMware Cloud Foundation to support a broad range of mission-critical workloads.

“This way of doing computing is going to be the way that the future data centers are built. It’s going to allow us to essentially turn every enterprise into an AI,” Huang said. “Every company will become AI-driven.”

“Our audience is so excited to see how we’re coming together, to see how everything they’ve done for the past two decades with VMware now it’s going to be even further expanded,” Gelsinger said.

The Power of Two: VMware, NVIDIA Bring AI to the Virtual Data Center

Two key components of enterprise AI just snapped in place thanks to longtime partners who pioneered virtual desktops, virtual graphics workstations and more.

Taking their partnership to a new level, VMware and NVIDIA are uniting accelerated computing and virtualization to bring the power of AI to every company.

It’s a collaboration that will enable users to run data analytics and machine learning workloads in containers or virtual machines, secured and managed with familiar VMware tools. It will create a new sweet spot in hybrid cloud computing with greater control, lowered costs and expanded performance.

The partnership puts the kind of AI power that public clouds deliver from the world’s largest AI data centers behind the firewalls of private companies.

The two companies will demonstrate these capabilities this week at VMworld.

Welcome to the Modern, Accelerated Data Center

Thanks to this collaboration, users will be able to run AI and data science software from NGC Catalog, NVIDIA’s hub for GPU-optimized AI software, using containers or virtual machines in a hybrid cloud based on VMware Cloud Foundation. It’s the kind of accelerated computing that’s a hallmark of the modern data center.

NVIDIA and VMware also launched a related effort enabling users to build a more secure and powerful hybrid cloud accelerated by NVIDIA BlueField-2 DPUs. These data processing units are built to offload and accelerate software-defined storage, security and networking tasks, freeing up CPU resources for enterprise applications.

Enterprises Gear Up for AI

Machine learning lets computers write software humans never could. It’s a capability born in research labs that’s rapidly spreading to data centers across every industry from automotive and banking to healthcare, retail and more.

The partnership will let VMware users train and run neural networks across multiple GPUs in public and private clouds. It also will enable them to share a single GPU across multiple jobs or users thanks to the multi-instance capabilities in the latest NVIDIA A100 GPUs.

To achieve these goals, the two companies will bring GPU acceleration to VMware vSphere to run AI and data-science jobs at near bare-metal performance next to existing enterprise apps on standard enterprise servers. In addition, software and models in NGC will support VMware Tanzu.

With these links, AI workloads can be virtualized and virtual environments become AI-ready without sacrificing system performance. And users can create hybrid clouds that give them the choice to run jobs in private or public data centers.

Companies will no longer need standalone AI systems for machine learning or big data analytics that are separate from their IT resources. Now a single enterprise infrastructure can run AI and traditional workloads managed by VMware tools and administrators.

“We’re providing the best of both worlds by bringing mature management capabilities to bare-metal systems and great performance to virtualized AI workloads,” said Kit Colbert, vice president and CTO of VMware’s cloud platform group.

Demos Show the Power of Two

Demos at VMworld will show a platform that delivers AI results as fast as the public cloud and is robust enough to tackle critical jobs like fighting COVID-19. They will run containers from NVIDIA NGC, managed by Tanzu, on VMware Cloud Foundation.

We’ll show those same VMware environments also tapping into the power of BlueField-2 DPUs to secure and accelerate hybrid clouds that let remote designers collaborate in an immersive, real-time environment.

That’s just the beginning. NVIDIA is committed to giving VMware the support to be a first-class platform for everything we build. In the background, VMware and NVIDIA engineers are driving a multi-year effort to deliver game-changing capabilities.

Colbert of VMware agreed. “We view the two initiatives we’re announcing today as initial steps, and there is so much more we can do. We invite customers to tell us what they need most to help prioritize our work,” he said.

To learn more, register for the early-access program and tune in to VMware sessions at GTC 2020 next week.

 

 

Networks on Steroids: VMware, NVIDIA Power the Data Center with DPUs

The data center’s grid is about to plug in to a new source of power.

It rides a kind of network interface card called a SmartNIC. Its smarts and speed spring from an ASIC called a data processing unit.

In short, the DPU packs the power of data center infrastructure on a chip.

DPU-enabled SmartNICs will be available for millions of virtualized servers thanks to a collaboration between VMware and NVIDIA. They bring advances in security and storage as well as networking that will stretch from the core to the edge of the corporate network.

What’s more, the companies announced a related initiative that will put the power of the public AI cloud behind the corporate firewall. It enables enterprise AI managed with familiar VMware tools.

Lighting Up the Modern Data Center

Together, these efforts will give users the choice to run machine learning workloads in containers or virtual machines, secured and managed with familiar VMware tools. And they will create a new sweet spot in hybrid cloud computing with greater control, lowered costs and the highest performance.

Laying the foundation for these capabilities, the partnership will help users build more secure and powerful distributed networks inside VMware Cloud Foundation, powered by the NVIDIA BlueField-2 DPU. It’s the Swiss Army knife of data center infrastructure that can accelerate security, storage, networking, and management tasks, freeing up CPUs to focus on enterprise applications.

The DPU’s jobs include:

  • Blocking malware
  • Advanced encryption
  • Network virtualization
  • Load balancing
  • Intrusion detection and prevention
  • Data compression
  • Packet switching
  • Packet inspection
  • Managing pools of solid-state and hard-disk storage

Our DPUs can run these tasks today across two ports, each carrying traffic at 100 Gbit/second. That’s an order of magnitude faster than CPUs geared for enterprise apps. The DPU is taking on these jobs so CPU cores can run more apps, boosting vSphere and data center efficiency.

As a result, data centers can handle more apps and their networks will run faster, too.

“The BlueField-2 SmartNIC is a fundamental building block for us because we can take advantage of its DPU hardware for better network performance and dramatically reduced cost to operate data center infrastructure,” said Kit Colbert, vice president and CTO of VMware’s cloud platform group.

NVIDIA BlueField-2 DPU in VMware's Project Monterey
Running VMware Cloud Foundation on the NVIDIA BlueField-2 DPU provides security isolation and lets CPUs support more apps per server.

Securing the Data Center with DPUs

DPUs also will usher in a new era of advanced security.

Today, most companies run their security policies on the same CPUs that run their applications. That kind of multitasking leaves IT departments vulnerable to malware or attacks in the guise of a new app.

With the BlueField DPU, all apps and requests can be vetted on a processor isolated from the application domain, enforcing security and other policies. Many cloud computing services already use this approach to create so-called zero-trust environments where software authenticates everything.

VMware is embracing SmartNICs in its products as part of an initiative called Project Monterey. With SmartNICs, corporate data centers can take advantage of the same advances Web giants enjoy.

“These days the traditional security perimeter is gone. So, we believe you need to root security in the hardware of the SmartNIC to monitor servers and network traffic very fast and without performance impacts,” said Colbert.

BlueField-2 DPU demo with VMware
A demo shows an NVIDIA BlueField-2 DPU preventing a DDOS attack that swamps a CPU.

See DPUs in Action at VMworld

The companies are demonstrating these capabilities this week at VMworld. For example, the demo below shows how virtual servers running VMware ESXi clients can use BlueField-2 DPUs to stop a distributed denial-of-service attack in a server cluster.

Leading OEMs are already preparing to bring the capabilities of DPUs to market. NVIDIA also plans to support BlueField-2 SmartNICs across its portfolio of platforms including its EGX systems for enterprise and edge computing.

You wouldn’t hammer a nail with a monkey wrench or pound in a screw with a hammer — you need to use the right tool for the job. To build the modern data center network, that means using an NVIDIA DPU enabled by VMware.

Drug Discovery in the Age of COVID-19

Drug discovery is like searching for the right jigsaw tile — in a puzzle box with 10^60 molecular-size pieces. AI and HPC tools help researchers more quickly narrow down the options, like picking out a subset of correctly shaped and colored puzzle pieces to experiment with.

An effective small-molecule drug will bind to a target enzyme, receptor or other critical protein along the disease pathway. Like the perfect puzzle piece, a successful drug will be the ideal fit, possessing the right shape, flexibility and interaction energy to attach to its target.

But it’s not enough just to interact strongly with the target. An effective therapeutic must modify the function of the protein in just the right way, and also possess favorable absorption, distribution, metabolism, excretion and toxicity properties — creating a complex optimization problem for scientists.

Researchers worldwide are racing to find effective vaccine and drug candidates to inhibit infection with and replication of SARS-CoV-2, the virus that causes COVID-19. Using NVIDIA GPUs, they’re accelerating this lengthy discovery process — whether for structure-based drug design, molecular docking, generative AI models, virtual screening or high-throughput screening.

Identifying Protein Targets with Genomics

To develop an effective drug, researchers have to know where to start. A disease pathway — a chain of signals between molecules that trigger different cell functions — may involve thousands of interacting proteins. Genomic analyses can provide invaluable insights for researchers, helping them identify promising proteins to target with a specific drug.

With the NVIDIA Clara Parabricks genome analysis toolkit, researchers can sequence and analyze genomes up to 50x faster. Given the unprecedented spread of the COVID pandemic, getting results in hours versus days can have an extraordinary impact on understanding the virus and developing treatments.

To date, hundreds of institutions, including hospitals, universities and supercomputing centers, in 88 countries have downloaded the software to accelerate their work — to sequence the viral genome itself, as well as to sequence the DNA of COVID patients and investigate why some are more severely affected by the virus than others.

Another method, cryo-EM, uses electron microscopes to directly observe flash-frozen proteins — and can harness GPUs to shorten processing time for the complex, massive datasets involved.

Using CryoSPARC, GPU-accelerated software built by Toronto startup Structura Biotechnology, researchers at the National Institutes of Health and the University of Texas at Austin created the first 3D, atomic-scale map of the coronavirus, providing a detailed view into the virus’ spike proteins, a key target for vaccines, therapeutic antibodies and diagnostics.

GPU-Accelerated Compound Screening

Once a target protein has been identified, researchers search for candidate compounds that have the right properties to bind with it. To evaluate how effective drug candidates will be, researchers can screen drug candidates virtually, as well as in real-world labs.

New York-based Schrödinger creates drug discovery software that can model the properties of potential drug molecules. Used by the world’s biggest biopharma companies, the Schrödinger platform allows its users to determine the binding affinity of a candidate molecule on NVIDIA Tensor Core GPUs in under an hour and with just a few dollars of compute cost — instead of many days and thousands of dollars using traditional methods.

Generative AI Models for Drug Discovery

Rather than evaluating a dataset of known drug candidates, a generative AI model starts from scratch. Tokyo-based startup Elix, Inc., a member of the NVIDIA Inception virtual accelerator program, uses generative models trained on NVIDIA DGX Station systems to come up with promising molecular structures. Some of the AI’s proposed molecules may be unstable or difficult to synthesize, so additional neural networks are used to determine the feasibility for these candidates to be tested in the lab.

With DGX Station, Elix achieves up to a 6x speedup on training the generative models, which would otherwise take a week or more to converge, or to reach the lowest possible error rate.

Molecular Docking for COVID-19 Research

Given the inconceivable size of chemical space, researchers couldn’t possibly test every molecule to figure out which will be effective against a specific disease. But based on what’s known about the target protein, GPU-accelerated molecular dynamics applications can be used to approximate molecular behavior and simulate target proteins at the atomic level.

Software like AutoDock-GPU, developed by the Center for Computational Structural Biology at the Scripps Research Institute, enables researchers to calculate the interaction energy between a candidate molecule and the protein target. Known as molecular docking, this computationally complex process simulates millions of different configurations to find the most favorable arrangement of each molecule for binding. Using the more than 27,000 NVIDIA GPUs on Oak Ridge National Laboratory’s Summit supercomputer, scientists were able to screen 1 billion drug candidates for COVID-19 in just 12 hours. Even using a single NVIDIA GPU provides more than 230x speedup over using a single CPU.
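To give a flavor of what a single docking run looks like, here is a hypothetical Python wrapper around AutoDock-GPU for one candidate ligand. The binary name depends on how AutoDock-GPU was built, and the receptor and ligand file paths are placeholders.

```python
# Hypothetical wrapper around an AutoDock-GPU run for one candidate ligand.
# The binary name depends on the local build, and the input files here are
# placeholders for a prepared receptor grid and a candidate molecule.
import subprocess

def dock(ligand_pdbqt, receptor_fld, runs=20):
    """Dock one ligand against a receptor grid and return the process result."""
    cmd = [
        "./autodock_gpu_128wi",      # placeholder binary name for a local build
        "--ffile", receptor_fld,     # pre-computed receptor grid maps (.maps.fld)
        "--lfile", ligand_pdbqt,     # candidate molecule in PDBQT format
        "--nrun", str(runs),         # number of independent docking runs
    ]
    return subprocess.run(cmd, check=True, capture_output=True, text=True)

result = dock("candidate_001.pdbqt", "spike_protein.maps.fld")
print(result.stdout[-500:])  # tail of the docking log with the best energies
```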

Argonne deployed one of the first DGX-A100 systems. Courtesy of Argonne National Laboratory.

In Illinois, Argonne National Laboratory is accelerating COVID-19 research using an NVIDIA A100 GPU-powered system based on the DGX SuperPOD reference architecture. Argonne researchers are combining AI and advanced molecular modeling methods to perform accelerated simulations of the viral proteins, and to screen billions of potential drug candidates, determining the most promising molecules to pursue for clinical trials.

Accelerating Biological Image Analysis

The drug discovery process involves significant high-throughput lab experiments as well. Phenotypic screening is one method of testing, in which a diseased cell is exposed to a candidate drug. With microscopes, researchers can observe and record subtle changes in the cell to determine if it starts to more closely resemble a healthy cell. Using AI to automate the process, thousands of possible drugs can be screened.
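As a minimal, purely illustrative sketch of the image-scoring piece of such a pipeline, the snippet below defines a small convolutional network that scores a cell image as healthy-like or diseased-like. The image size, architecture and labels are assumptions, not any particular company’s production system.

```python
# Minimal sketch of the image-classification piece of phenotypic screening:
# a small CNN that scores a cell image as "healthy-like" vs "diseased-like".
# Image size, architecture and labels are assumptions for illustration only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(healthy-like phenotype)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```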

Digital biology company Recursion, based in Salt Lake City, uses AI and NVIDIA GPUs to observe these subtle changes in cell images, analyzing terabytes of data each week. The company has released an open-source COVID dataset, sharing human cellular morphological data with researchers working to create therapies for the virus.

Future Directions in AI for Drug Discovery

As AI and accelerated computing continue to speed up genomics and drug discovery pipelines, precision medicine — personalizing individual patients’ treatment plans based on insights about their genome and their phenotype — will become more attainable.

Increasingly powerful NLP models will be applied to organize and understand massive datasets of scientific literature, helping connect the dots between independent investigations. Generative models will learn the fundamental equations of quantum mechanics and be able to suggest the optimal molecular therapy for a given target.

To learn more about how NVIDIA GPUs are being used to accelerate drug discovery, check out talks by Schrödinger, Oak Ridge National Laboratory and Atomwise at the GPU Technology Conference next week.

For more on how AI and GPUs are advancing COVID research, read our blog stories and visit the COVID-19 research hub.

Subscribe to NVIDIA healthcare news here.
