NVIDIA CEO Outlines Vision for ‘Age of AI’ in News-Packed GTC Kitchen Keynote


Outlining a sweeping vision for the “age of AI,” NVIDIA CEO Jensen Huang Monday kicked off this week’s GPU Technology Conference.

Huang made major announcements in data centers, edge AI, collaboration tools and healthcare in a talk simultaneously released in nine episodes, each under 10 minutes.

“AI requires a whole reinvention of computing – full-stack rethinking – from chips, to systems, algorithms, tools, the ecosystem,” Huang said, standing in front of the stove of his Silicon Valley home.

Behind a series of announcements touching on everything from healthcare to robotics to videoconferencing, Huang’s underlying story was simple: AI is changing everything, which has put NVIDIA at the intersection of changes that touch every facet of modern life.

More and more of those changes can be seen, first, in Huang’s kitchen, with its playful bouquet of colorful spatulas, which has served as the increasingly familiar backdrop for announcements throughout the COVID-19 pandemic.

“NVIDIA is a full stack computing company – we love working on extremely hard computing problems that have great impact on the world – this is right in our wheelhouse,” Huang said. “We are all-in, to advance and democratize this new form of computing – for the age of AI.”

This week’s GTC is one of the biggest yet. It features more than 1,000 sessions—400 more than the last GTC—in 40 topic areas. And it’s the first to run across the world’s time zones, with sessions in English, Chinese, Korean, Japanese, and Hebrew.

Accelerated Data Center 

Modern data centers, Huang explained, are software-defined, making them more flexible and adaptable.

That creates an enormous load. Running a data center’s infrastructure can consume 20-30 percent of its CPU cores. And as east-west traffic, or traffic within a data center, and microservices increase, this load will increase dramatically.

“A new kind of processor is needed,” Huang explained: “We call it the data processing unit.”

The DPU consists of accelerators for networking, storage, security and programmable Arm CPUs to offload the hypervisor, Huang said.

The new NVIDIA BlueField-2 DPU is a programmable processor with powerful Arm cores and acceleration engines that process networking, storage and security at line speed. It’s the latest fruit of NVIDIA’s acquisition of high-speed interconnect provider Mellanox Technologies, which closed in April.

Data Center — DOCA — A Programmable Data Center Infrastructure Processor

NVIDIA also announced DOCA, its programmable data-center-infrastructure-on-a-chip architecture.

“DOCA SDKs let developers write infrastructure apps for software-defined networking, software-defined storage, cybersecurity, telemetry and in-network computing applications yet to be invented,” Huang said.

Huang also touched on a partnership with VMware, announced last week, to port VMware onto BlueField. VMware “runs the world’s enterprises — they are the OS platform in 70 percent of the world’s companies,” Huang explained.

Data Center — DPU Roadmap in ‘Full Throttle’

Further out, Huang said NVIDIA’s DPU roadmap shows advancements coming fast.

BlueField-2 is sampling now, BlueField-3 is finishing and BlueField-4 is in high gear, Huang reported.

“We are going to bring a ton of technology to networking,” Huang said. “In just a couple of years, we’ll span nearly 1,000 times in compute throughput” on the DPU.

BlueField-4, arriving in 2023, will add support for the CUDA parallel programming platform and NVIDIA AI — “turbocharging the in-network computing vision.”

You can get those capabilities now, Huang announced, with the new BlueField-2X. It adds an NVIDIA Ampere GPU to BlueField-2 for in-network computing with CUDA and NVIDIA AI.

“BlueField-2X is like having a BlueField-4, today,” Huang said.

Data Center — GPU Inference Momentum

Consumer internet companies are also turning to NVIDIA technology to deliver AI services.

Inference — which puts fully trained AI models to work — is key to a new generation of AI-powered consumer services.

In aggregate, NVIDIA GPU inference compute in the cloud already exceeds all cloud CPUs, Huang said.

Huang announced that Microsoft is adopting NVIDIA AI on Azure to power smart experiences on Microsoft Office, including smart grammar correction and text prediction.

Microsoft Office joins Square, Twitter, eBay, GE Healthcare and Zoox, among other companies, in a broad array of industries using NVIDIA GPUs for inference.

Data Center — Cloudera and VMware 

The ability to put vast quantities of data to work, fast, is key to modern AI and data science.

NVIDIA RAPIDS is the fastest extract, transform, load (ETL) engine on the planet, and it supports multi-GPU and multi-node configurations.

NVIDIA modeled its API after hugely popular data science frameworks — Pandas, XGBoost and scikit-learn — so RAPIDS is easy to pick up.
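
For a sense of how closely that API tracks pandas, here is a minimal sketch, assuming a machine with RAPIDS installed and a GPU available; the file and column names are invented for illustration:

```python
# Minimal RAPIDS sketch: the cuDF API mirrors pandas, so a typical ETL
# step ports over with little more than an import change. Assumes a GPU
# and a RAPIDS install; the file and column names are illustrative.
import cudf

df = cudf.read_csv("transactions.csv")           # load straight into GPU memory

df["amount_usd"] = df["amount"] * df["fx_rate"]  # columnar math on the GPU
summary = (
    df.groupby("customer_id")["amount_usd"]
      .sum()
      .sort_values(ascending=False)
)

print(summary.head())
```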

On the industry-standard data processing benchmark, which runs 30 complex database queries on a 10TB dataset, a 16-node NVIDIA DGX cluster ran 20x faster than the fastest CPU server.

Yet it’s one-seventh the cost and uses one-third the power.

Huang announced that Cloudera, whose hybrid-cloud data platform lets customers manage, secure, analyze and build predictive models from their data, will accelerate the Cloudera Data Platform with NVIDIA RAPIDS, NVIDIA AI and NVIDIA-accelerated Spark.

NVIDIA and VMware also announced a second partnership, Huang said.

The companies will create a data center platform that supports GPU acceleration for all three major computing domains today: virtualized, distributed scale-out and composable microservices.

“Enterprises running VMware will be able to enjoy NVIDIA GPU and AI computing in any computing mode,” Huang said.

(Cutting) Edge AI 

Someday, Huang said, trillions of AI devices and machines will populate the Earth – in homes, office buildings, warehouses, stores, farms, factories, hospitals, airports.

The NVIDIA EGX AI platform makes it easy for the world’s enterprises to stand up a state-of-the-art edge-AI server quickly, Huang said. It can control factories of robots, perform automatic checkout at retail or help nurses monitor patients, Huang explained.

Huang announced the EGX platform is expanding to combine the NVIDIA Ampere GPU and BlueField-2 DPU on a single PCIe card. The updates give enterprises a common platform to build secure, accelerated data centers.

Huang also announced an early access program for a new service called NVIDIA Fleet Command. It makes it easy to deploy and manage updates across IoT devices, combining the security and real-time processing capabilities of edge computing with the remote management and ease of software-as-a-service.

Among the first companies provided early access to Fleet Command is KION Group, a leader in global supply chain solutions, which is using the NVIDIA EGX AI platform to develop AI applications for its intelligent warehouse systems.

Additionally, Northwestern Memorial Hospital, the No. 1 hospital in Illinois and one of the top 10 in the nation, is working with Whiteboard Coordinator to use Fleet Command for its IoT sensor platform.

“This is the iPhone moment for the world’s industries — NVIDIA EGX will make it easy to create, deploy and operate industrial AI services,” Huang said.

Edge AI — Democratizing Robotics

Soon, Huang added, everything that moves will be autonomous. AI software is the big breakthrough that will make robots smarter and more adaptable. But it’s the NVIDIA Jetson AI computer that will democratize robotics.

Jetson is an Arm-based SoC designed from the ground up for robotics, thanks to its sensor processors, CUDA GPU and Tensor Cores and, most importantly, the richness of AI software that runs on it, Huang explained.

The latest addition to the Jetson family, the Jetson Nano 2GB, will be $59, Huang announced. That’s about 40 percent less than the $99 Jetson Nano Developer Kit announced last year.

“NVIDIA Jetson is mighty, yet tiny, energy-efficient and affordable,” Huang said.

Collaboration Tools

The shared, online world of the “metaverse” imagined in Neal Stephenson’s 1992 cyberpunk classic, “Snow Crash,” is already becoming real, in shared virtual worlds like Minecraft and Fortnite, Huang said.

First introduced in March 2019, NVIDIA Omniverse — a platform for simultaneous, real-time simulation and collaboration across a broad array of existing industry tools — is now in open beta.

“Omniverse allows designers, artists, creators and even AIs using different tools, in different worlds, to connect in a common world—to collaborate, to create a world together,” Huang said.

Another tool NVIDIA pioneered, NVIDIA Jarvis conversational AI, is also now in open beta, Huang announced. Using the new SpeedSquad benchmark, Huang showed it’s twice as responsive and more natural sounding when running on NVIDIA GPUs than on CPUs.

It also runs for a third of the cost, Huang said.

“What did I tell you?” Huang said, referring to a catch phrase he’s used in keynotes over the years. “The more you buy, the more you save.”

Collaboration Tools — Introducing NVIDIA Maxine

Video calls have moved from a curiosity to a necessity.

For work, social, school, virtual events, doctor visits — video conferencing is now the most critical application for many people. More than 30 million web meetings take place every day.

To improve this experience, Huang announced NVIDIA Maxine, a cloud-native streaming video AI platform for applications like video calls.

Using AI, Maxine can reduce the bandwidth consumed by video calls by a factor of 10. “AI can do magic for video calls,” Huang said.

“With Jarvis and Maxine, we have the opportunity to revolutionize video conferencing of today and invent the virtual presence of tomorrow,” Huang said.

Healthcare 

When it comes to drug discovery amidst the global COVID-19 pandemic, lives are on the line.

Yet for years the costs of new drug discovery for the $1.5 trillion pharmaceutical industry have risen. New drugs take over a decade to develop, cost over $2.5 billion in research and development — doubling every nine years — and 90 percent of efforts fail.

New tools are needed. “COVID-19 hits home this urgency,” Huang said.

Using breakthroughs in computer science, we can begin to use simulation and in silico methods to understand the biological machinery of the proteins that affect disease and search for new drug candidates, Huang explained.

To accelerate this, Huang announced NVIDIA Clara Discovery — a state-of-the-art suite of tools for scientists to discover life-saving drugs.

“Where there are popular industry tools, our computer scientists accelerate them,” Huang said. “Where no tools exist, we develop them — like NVIDIA Parabricks, Clara Imaging, BioMegatron, BioBERT, NVIDIA RAPIDS.”

Huang also outlined an effort to build the U.K.’s fastest supercomputer, Cambridge-1, bringing state-of-the-art computing infrastructure to “an epicenter of healthcare research.”

Cambridge-1 will boast 400 petaflops of AI performance, making it among the world’s top 30 fastest supercomputers. It will host NVIDIA’s U.K. AI and healthcare collaborations with academia, industry and startups.

NVIDIA’s first partners are AstraZeneca, GSK, King’s College London, the Guy’s and St Thomas’ NHS Foundation Trust and startup Oxford Nanopore.

NVIDIA also announced a partnership with GSK to build the world’s first AI drug discovery lab.

Arm

Huang wrapped up his keynote with an update on NVIDIA’s partnership with Arm, whose power-efficient designs run the world’s smart devices.

NVIDIA agreed to acquire the U.K. semiconductor designer last month for $40 billion.

“Arm is the most popular CPU in the world,” Huang said. “Together, we will offer NVIDIA accelerated and AI computing technologies to the Arm ecosystem.”

Last year, Huang said, NVIDIA announced it would port CUDA and its scientific computing stack to Arm. Today, Huang announced a major initiative to advance the Arm platform, with NVIDIA making investments across three dimensions:

  • First, NVIDIA will complement Arm partners with GPU, networking, storage and security technologies to create complete accelerated platforms.
  • Second, NVIDIA is working with Arm partners to create platforms for HPC, cloud, edge and PC — this requires chips, systems and system software.
  • And third, NVIDIA is porting the NVIDIA AI and NVIDIA RTX engines to Arm.

“Today, these capabilities are available only on x86,” Huang said. “With this initiative, Arm platforms will also be leading-edge at accelerated and AI computing.”

 

NVIDIA AI on Microsoft Azure Machine Learning to Power Grammar Suggestions in Microsoft Editor for Word

It’s been said that good writing comes from editing. Fortunately for discerning readers everywhere, Microsoft is putting an AI-powered grammar editor at the fingertips of millions of people.

Like any good editor, it’s quick and knowledgeable. That’s because Microsoft Editor’s grammar refinements in Microsoft Word for the web can now tap into NVIDIA Triton Inference Server, ONNX Runtime and Microsoft Azure Machine Learning, which is part of Azure AI, to deliver this smart experience.

Speaking at the digital GPU Technology Conference, NVIDIA CEO Jensen Huang announced the news during the keynote presentation on October 5.

Everyday AI in Office

Microsoft is on a mission to wow users of Office productivity apps with the magic of AI. New, time-saving experiences will include real-time grammar suggestions, question-answering within documents — think Bing search for documents beyond “exact match” — and predictive text to help complete sentences.

Such productivity-boosting experiences are only possible with deep learning and neural networks. For example, unlike services built on traditional rules-based logic, when it comes to correcting grammar, Editor in Word for the web is able to understand the context of a sentence and suggest the appropriate word choices.

 

And these deep learning models, which can involve hundreds of millions of parameters, must be scalable and provide real-time inference for an optimal user experience. Microsoft Editor’s AI model for grammar checking in Word on the web alone is expected to handle more than 500 billion queries a year.

Deployment at this scale could blow up deep learning budgets. Thankfully, NVIDIA Triton’s dynamic batching and concurrent model execution features, accessible through Azure Machine Learning, slashed the cost by about 70 percent and achieved a throughput of 450 queries per second on a single NVIDIA V100 Tensor Core GPU, with a response time under 200 milliseconds. Azure Machine Learning provided the required scale and capabilities to manage the model lifecycle, such as versioning and monitoring.

High Performance Inference with Triton on Azure Machine Learning

Machine learning models have expanded in size, and GPUs have become necessary during model training and deployment. For AI deployment in production, organizations are looking for scalable inference serving solutions, support for multiple framework backends, optimal GPU and CPU utilization and machine learning lifecycle management.

The NVIDIA Triton and ONNX Runtime stack in Azure Machine Learning delivers scalable, high-performance inferencing. Azure Machine Learning customers can take advantage of Triton’s support for multiple frameworks; real-time, batch and streaming inferencing; dynamic batching; and concurrent execution.
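
As a rough illustration of what that looks like from the client side, the sketch below sends a request to a Triton-hosted model using the open-source tritonclient Python package; the server address, model name, tensor names and shapes are placeholders, not Microsoft’s production setup:

```python
# Sketch of querying a model hosted by NVIDIA Triton Inference Server over
# HTTP. Requires the `tritonclient[http]` package; the URL, model name,
# tensor names and shapes below are placeholders for illustration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# A batch of token IDs for a hypothetical grammar-checking model.
token_ids = np.zeros((1, 128), dtype=np.int64)

inputs = [httpclient.InferInput("input_ids", list(token_ids.shape), "INT64")]
inputs[0].set_data_from_numpy(token_ids)

# Triton dynamically batches concurrent requests like this one on the GPU.
result = client.infer(model_name="grammar_model", inputs=inputs)
print(result.as_numpy("logits").shape)
```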

Writing with AI in Word

Author and poet Robert Graves was quoted as saying, “There is no good writing, only good rewriting.” In other words, write, and then edit and improve.

Editor in Word for the web lets you do both simultaneously. And while Editor is the first feature in Word to gain the speed and breadth of advances enabled by Triton and ONNX Runtime, it is likely just the start of more to come.

 

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

 

To 3D and Beyond: Pixar’s USD Coming to an Industry Near You

It was the kind of career moment developers dream of but rarely experience. To whoops and cheers from the crowd at SIGGRAPH 2016, Dirk Van Gelder of Pixar Animation Studios launched Universal Scene Description.

USD would become the open-source glue filmmakers used to bind their favorite tools together so they could collaborate with colleagues around the world, radically simplifying the job of creating animated movies. At its birth, it had backing from three seminal partners—Autodesk, Foundry and SideFX.

Today, more than a dozen companies from Apple to Unity support USD. The standard is on the cusp of becoming the solder that fuses all sorts of virtual and physical worlds into environments where everything from skyscrapers to sports cars and smart cities will be designed and tested in simulation.

What’s more, it’s helping spawn machinima, an emerging form of digital storytelling based on game content.

How USD Found an Audience

The 2016 debut “was pretty exciting” for Van Gelder, who spent more than 20 years developing Pixar’s tools.

“We had talked to people about USD, but we weren’t sure they’d embrace it,” he said. “I did a live demo on a laptop of a scene from Finding Dory so they could see USD’s scalability and performance and what we at Pixar could do with it, and they really got the message.”

One of those in the crowd was Rev Lebaredian, vice president of simulation technology at NVIDIA.

“Dirk’s presentation of USD live and in real time inspired us. It triggered a series of ideas and events that led to what is NVIDIA Omniverse today, with USD as its soul. So, it was fate that Dirk would end up on the Omniverse team,” said Lebaredian of the 3D graphics platform, now in open beta, that aims to carry the USD vision forward.

Developers Layer Effects on 3D Graphics

Adobe’s developers were among many others who welcomed USD and now support it in their products.

“USD has a whole world of features that are incredibly powerful,” said Davide Pesare, who worked on USD at Pixar and is now a senior R&D manager at Adobe.

“For example, with USD layering, artists can work in the same scene without stepping on each other’s toes. Each artist has his or her own layer, so you can let the modeler work while someone else is building the shading,” he said.

“Today USD has spread beyond the film industry where it is pervasive in animation and special effects. Game developers are looking at it, Apple’s products can read it, we have partners in architecture using it and the number of products compatible with USD is only going to grow,” Pesare said.

Thinking on a grand scale: NVIDIA and partner Esri, a specialist in mapping software, are both building virtual worlds using USD.

Building a Virtual 3D Home for Architects

Although it got its start in the movies, USD can play many roles.

Millions of architects, engineers and designers need a way to quickly review progress on construction projects with owners and real-estate developers. Each stakeholder wants different programs, often running on different computers, tablets or even handsets. It’s a script for an IT horror film where USD can write a happy ending.

Companies such as Autodesk, Bentley Systems, McNeel & Associates and Trimble Inc. are already exploring what USD can do for this community. NVIDIA used Omniverse to create a video showing some of the possibilities, such as previewing how the sun will play on the glassy interior of a skyscraper through the day.

Product Design Comes Alive with USD

It’s a similar story with a change of scene in the manufacturing industry. Here, companies have a cast of thousands of complex products they want to quickly design and test, ranging from voice-controlled gadgets to autonomous trucks.

The process requires iterations using programs in the hands of many kinds of specialists who demand photorealistic 3D models. Beyond de rigueur design reviews, they dream of the possibilities like putting visualizations in the hands of online customers.

Showing the shape of things to come, the Omniverse team produced a video for the debut of the NVIDIA DGX A100 system with exploding views of how its 30,000 components snap into a million drill holes. More recently, it generated a video of NVIDIA’s GeForce RTX 30 Series graphics card (below), complete with a virtual tour of its new cooling subsystem, thanks to USD in Omniverse.

“These days my team spends a lot of time working on real-time physics and other extensions of USD for autonomous vehicles and robotics for the NVIDIA Isaac and DRIVE platforms,” Van Gelder said.

To show what’s possible today, engineers used USD to import into Omniverse an accurately modeled luxury car and details of a 17-mile stretch of highway around NVIDIA’s Silicon Valley headquarters. The simulation, to be shown this week at GTC, shows the potential for environments detailed enough to test both vehicles and their automated driving capabilities.

Another team imported Kaya, a robotic car for consumers, so users could program the digital model and test its behavior in an Omniverse simulation before building or buying a physical robot.

The simulation was accurate despite the fact “the wheels are insanely complex because they can drive forward, backward or sideways,” said Mike Skolones, manager of the team behind NVIDIA Isaac Sim.

Lights! Camera! USD!

In gaming, Epic’s Unreal Engine supports USD, and Unity and Blender are working to support it as well. Their work is accelerating the rise of machinima, a movie-like spinoff from gaming demonstrated in a video for NVIDIA Omniverse Machinima.

Meanwhile, back in Hollywood, studios are well along in adopting USD.

Pixar produced Finding Dory using USD. DreamWorks Animation described its process of adopting USD to create the 2019 feature How to Train Your Dragon: The Hidden World. Disney Animation Studios blended USD into its pipeline for animated features, too.

Steering USD into the Omniverse

NVIDIA and partners hope to take USD into all these fields and more with Omniverse, an environment one team member describes as “like Google Docs for 3D graphics.”

Omniverse plugs the power of NVIDIA RTX real-time ray-tracing graphics into USD’s collaborative, layered editing. The recent “Marbles at Night” video (below) showcased that blend, created by a dozen artists scattered across the U.S., Australia, Poland, Russia and the U.K.

That’s getting developers like Pesare of Adobe excited.

“All industries are going to want to author everything with real time texturing, modeling, shading and animation,” said Pesare.

That will pave the way for a revolution in people consuming real-time media with AR and VR glasses linked on 5G networks for immersive, interactive experiences anywhere, he added.

He’s one of more than 400 developers who’ve gone hands-on with Omniverse so far. Others come from companies like Ericsson, Foster & Partners and Industrial Light & Magic.

USD Gives Lunar Explorers a Hand

The Frontier Development Lab (FDL), a NASA partner, recently approached NVIDIA for help simulating light on the surface of the moon.

Using data from a lunar satellite, the Omniverse team generated images FDL used to create a video for a public talk, explaining its search for water ice on the moon and a landing site for a lunar rover.

Back on Earth, challenges ahead include using USD’s Hydra renderer to deliver content at 30 frames per second that might blend images from a dozen sources for a filmmaker, an architect or a product designer.

“It’s a Herculean effort to get this in the hands of the first customers for production work,” said Richard Kerris, general manager of NVIDIA’s media and entertainment group and former chief technologist at Lucasfilm. “We’re effectively building an operating system for creatives across multiple markets, so support for USD is incredibly important,” he said.

Kerris called on anyone with an RTX-enabled system to get their hands on the open beta of Omniverse and drive the promise of USD forward.

“We can’t wait to see what you will build,” he said.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

NVIDIA Jarvis and Merlin Announced in Open Beta, Enabling Conversational AI and Democratizing Recommenders

We’ve all been there: on a road trip and hungry. Wouldn’t it be amazing to ask your car’s driving assistant for recommendations for nearby food, personalized to your taste?

Now, it’s possible for any business to build and deploy such experiences and many more with NVIDIA GPU systems and software libraries. That’s because NVIDIA Jarvis for conversational AI services and NVIDIA Merlin for recommender systems have entered open beta. Speaking today at the GPU Technology Conference, NVIDIA CEO Jensen Huang announced the news.

While AI for voice services and recommender systems has never been more needed in our digital worlds, development tools have lagged. And the need for better voice AI services is rising sharply.

More people are working from home and remotely learning, shopping, visiting doctors and more, putting strains on services and revealing shortcomings in user experiences. Some call centers report a 34 percent increase in hold times and a 68 percent increase in call escalations, according to a report from Harvard Business Review.

Meanwhile, current recommenders personalize the internet but often come up short. Retail recommenders suggest items recently purchased or keep pursuing people with annoying promos. Media and entertainment recommendations are often more of the same, with little diversity. These systems are often fairly crude because they rely only on past behavior or simple similarities.

NVIDIA Jarvis and NVIDIA Merlin allow companies to explore larger deep learning models, and develop more nuanced and intelligent recommendation systems. Conversational AI services built on Jarvis and recommender systems built on Merlin offer the fast track forward to better services from businesses.

Early Access Jarvis Adopter Advances

Some companies in the NVIDIA Developer program have already begun work on conversational AI services with NVIDIA Jarvis. Early adopters included Voca, an AI agent for call center support; Kensho, for automatic voice transcriptions for finance and business; and Square, offering a virtual assistant for scheduling appointments.

London-based Intelligent Voice, which offers high-performance speech recognition services, is always looking for more, said its CTO, Nigel Cannings.

“Jarvis takes a multimodal approach that fuses key elements of automatic speech recognition with entity and intent matching to address new use cases where high-throughput and low latency are required,” he said. “The Jarvis API is very easy to use, integrate and customize to our customers’ workflows for optimized performance.”

It has allowed Intelligent Voice to pivot quickly during the COVID crisis and bring to market in record time a completely new product, Myna, which enables accurate and useful meeting recall.

Better Conversational AI Needed

In the U.S., call center assistants handle 200 million calls per day, and telemedicine services enable 2.4 million daily physician visits, demanding transcriptions with high accuracy.

Traditional voice systems leave room for improvement. With processing constrained by CPUs, their lower-quality models result in lag-filled, robotic voice products. Jarvis includes Megatron-BERT models, the largest today, to offer the highest accuracy and lowest latency.

Deploying real-time conversational AI for natural interactions requires model computations in under 300 milliseconds — versus 600 milliseconds on CPU-powered models.

Jarvis provides more natural interactions through sensor fusion — the integration of video cameras and microphones. Its ability to handle multiple data streams in real time enables the delivery of improved services.

Complex Model Pipelines, Easier Solutions

Model pipelines in conversational AI can be complex and require coordination across multiple services.

Microservices are required to run at scale with automatic speech recognition models, natural language understanding, text-to-speech and domain-specific apps. These super-specialized tasks, accelerated by running in parallel on GPUs, gain a 3x cost advantage over a competing CPU-only server.

NVIDIA Jarvis is a comprehensive framework, offering software libraries for building conversational AI applications and including GPU-optimized services for ASR, NLU, TTS and computer vision that use the latest deep learning models.

Developers can meld these multiple skills within their applications, and quickly help our hungry vacationer find just the right place.
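
Conceptually, that melding is a chain of services. The sketch below is purely schematic, with placeholder functions standing in for GPU-backed ASR, NLU, recommendation and TTS services rather than the actual Jarvis API, but it shows how the skills compose into a single request handler:

```python
# Schematic conversational AI pipeline. The functions are stand-ins for
# GPU-accelerated microservices, NOT the Jarvis API; the point is how the
# skills chain together to answer one spoken request.

def transcribe(audio: bytes) -> str:
    # ASR stand-in: a real service converts speech to text.
    return "find me a thai restaurant nearby"

def interpret(text: str) -> dict:
    # NLU stand-in: extract the intent and entities from the transcript.
    return {"intent": "find_food", "cuisine": "thai"}

def recommend(request: dict) -> str:
    # Domain-logic stand-in: a recommender would rank nearby options here.
    return "How about the Thai place two minutes off the next exit?"

def synthesize(reply: str) -> bytes:
    # TTS stand-in: a real service renders the reply as natural speech.
    return reply.encode("utf-8")

def handle_request(audio: bytes) -> bytes:
    # In production each stage is a GPU-backed microservice, and the whole
    # round trip should finish in roughly 300 milliseconds to feel natural.
    return synthesize(recommend(interpret(transcribe(audio))))

print(handle_request(b"<audio bytes>"))
```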

Merlin Creates a More Relevant Internet

Recommender systems are the engine of the personalized internet and they’re everywhere online. They suggest food you might like, offer items related to your purchases and can capture your interest in the moment with retargeted advertising for product offers as you bounce from site to site.

But when recommenders don’t do their best, people may walk away empty-handed and businesses leave money on the table.

On some of the world’s largest online commerce sites, recommender systems account for as much as 30 percent of revenue. Just a 1 percent improvement in the relevance of recommendations can translate into billions of dollars in revenue.

Recommenders at Scale on GPUs

At Tencent, recommender systems support videos, news, music and apps. Using NVIDIA Merlin, the company reduced its recommender training time from 20 hours to three.

“With the use of the Merlin HugeCTR advertising recommendation acceleration framework, our advertising business model can be trained faster and more accurately, which is expected to improve the effect of online advertising,” said Ivan Kong, AI technical leader at Tencent TEG.

Merlin Democratizes Access to Recommenders

Now everyone has access to the NVIDIA Merlin application framework, which allows businesses of all kinds to build recommenders accelerated by NVIDIA GPUs.

Merlin’s collection of libraries includes tools for building deep learning-based systems that provide better predictions than traditional methods and increase click-through rates. Each stage of the pipeline is optimized to support hundreds of terabytes of data, all accessible through easy-to-use APIs.

Merlin is used at one of the world’s largest media companies and is in testing with hundreds of companies worldwide. Social media giants in the U.S. are experimenting with its ability to share related news. Streaming media services are testing it for suggestions on next views and listens. And major retailers are looking at it for suggestions on next items to purchase.

Those who are interested can learn more about the technology advances behind Merlin since its initial launch, including NVTabular, multi-GPU support, HugeCTR and NVIDIA Triton Inference Server.

Businesses can sign up for the NVIDIA Jarvis beta for access to the latest developments in conversational AI, and get started with the NVIDIA Merlin beta for the fastest way to upload terabytes of training data and deploy recommenders at scale.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

 

AI Can See Clearly Now: GANs Take the Jitters Out of Video Calls

Ming-Yu Liu and Arun Mallya were on a video call when one of them started to break up, then freeze.

It’s an irksome reality of life in the pandemic that most of us have shared. But unlike most of us, Liu and Mallya could do something about it.

They are AI researchers at NVIDIA and specialists in computer vision. Working with colleague Ting-Chun Wang, they realized they could use a neural network in place of the software called a video codec typically used to compress and decompress video for transmission over the net.

Their work enables a video call with one-tenth the network bandwidth users typically need. It promises to reduce bandwidth consumption by orders of magnitude in the future.

“We want to provide a better experience for video communications with AI so even people who only have access to extremely low bandwidth can still upgrade from voice to video calls,” said Mallya.

Better Connections Thanks to GANs

The technique works even when callers are wearing a hat, glasses, headphones or a mask. And just for fun, they spiced up their demo with a couple of bells and whistles so users can change their hairstyles or clothes digitally or create an avatar.

A more serious feature in the works (shown at top) uses the neural network to align the position of users’ faces for a more natural experience. Callers watch their video feeds, but they appear to be looking directly at their cameras, enhancing the feeling of a face-to-face connection.

“With computer vision techniques, we can locate a person’s head over a wide range of angles, and we think this will help people have more natural conversations,” said Wang.

Say hello to the latest way AI is making virtual life more real.

How AI-Assisted Video Calls Work

The mechanism behind AI-assisted video calls is simple.

A sender first transmits a reference image of the caller, much as today’s systems send frames over a compressed video stream. Then, rather than sending a fat stream of pixel-packed images, it sends data on the locations of a few key points around the user’s eyes, nose and mouth.

A generative adversarial network on the receiver’s side uses the initial image and the facial key points to reconstruct subsequent images on a local GPU. As a result, much less data is sent over the network.
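
The toy sketch below mirrors that split between sender and receiver; the detector and generator are placeholders rather than the researchers’ actual models, but they show how little data crosses the network once the reference image has been sent:

```python
# Toy sketch of keypoint-based video compression with a generative model.
# The detector and generator are placeholders, not the researchers' code;
# the point is what actually travels over the network.
import numpy as np

def detect_keypoints(frame: np.ndarray) -> np.ndarray:
    # Placeholder: a real system runs a facial-landmark network here.
    return np.zeros((10, 2), dtype=np.float32)

class FaceGenerator:
    """Stand-in for the receiver-side generative network."""
    def reconstruct(self, reference: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
        # Placeholder: a trained GAN re-renders the reference face so it
        # matches the pose described by the transmitted keypoints.
        return reference

# Sender side: one reference frame up front, then only keypoints per frame.
reference_frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # sent once
keypoints = detect_keypoints(reference_frame)               # tens of bytes, not a full image

# Receiver side: a local GPU reconstructs each frame from the keypoints.
generator = FaceGenerator()
frame = generator.reconstruct(reference_frame, keypoints)
print(frame.shape)  # (720, 1280, 3)
```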

Liu’s work in GANs hit the spotlight last year with GauGAN, an AI tool that turns anyone’s doodles into photorealistic works of art. GauGAN has already been used to create more than a million images and is available at the AI Playground.

“The pandemic motivated us because everyone is doing video conferencing now, so we explored how we can ease the bandwidth bottlenecks so providers can serve more people at the same time,” said Liu.

GPUs Bust Bandwidth Bottlenecks

The approach is part of an industry trend of shifting network bottlenecks into computational tasks that can be more easily tackled with local or cloud resources.

“These days lots of companies want to turn bandwidth problems into compute problems because it’s often hard to add more bandwidth and easier to add more compute,” said Andrew Page, a director of advanced products in NVIDIA’s media group.

NVIDIA Maxine bundles a suite of tools for video conferencing and streaming services.

AI Instruments Tune Video Services

GAN video compression is one of several capabilities coming to NVIDIA Maxine, a cloud-AI video-streaming platform to enhance video conferencing and calls. It packs audio, video and conversational AI features in a single toolkit that supports a broad range of devices.

Announced this week at GTC, Maxine lets service providers deliver video at super resolution with real-time translation, background noise removal and context-aware closed captioning. Users can enjoy features such as face alignment, support for virtual assistants and realistic animation of avatars.

“Video conferencing is going through a renaissance,” said Page. “Through the pandemic, we’ve all lived through its warts, but video is here to stay now as a part of our lives going forward because we are visual creatures.”

Maxine harnesses the power of NVIDIA GPUs with Tensor Cores running software such as NVIDIA Jarvis, an SDK for conversational AI that delivers a suite of speech and text capabilities. Together, they deliver AI capabilities that are useful today and serve as building blocks for tomorrow’s video products and services.

Learn more about NVIDIA Research.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

NVIDIA Delivers Streaming AR and VR from the Cloud with AWS

NVIDIA and AWS are bringing the future of XR streaming to the cloud.

Announced today, the NVIDIA CloudXR platform will be available on Amazon EC2 P3 and G4 instances, which support NVIDIA V100 and T4 GPUs, allowing cloud users to stream high-quality immersive experiences to remote VR and AR devices.

The CloudXR platform includes the NVIDIA CloudXR software development kit, NVIDIA Virtual Workstation software and NVIDIA AI SDKs to deliver photorealistic graphics, with the mobile convenience of all-in-one XR headsets. XR is a collective term for VR, AR and mixed reality.

With the ability to stream from the cloud, professionals can now easily set up, scale and access immersive experiences from anywhere — they no longer need to be tethered to expensive workstations or external VR tracking systems.

The growing availability of advanced tools like CloudXR is paving the way for enhanced collaboration, streamlined workflows and high fidelity virtual environments. XR solutions are also introducing new possibilities for adding AI features and functionality.

With the CloudXR platform, many early access customers and partners across industries like manufacturing, media and entertainment, healthcare and others are enhancing immersive experiences by combining photorealistic graphics with the mobility of wireless head-mounted displays.

Lucid Motors recently announced the new Lucid Air, a powerful and efficient electric vehicle that users can experience through a custom implementation of the ZeroLight platform. Lucid Motors is developing a virtual design showroom using the CloudXR platform. By streaming the experience from AWS, shoppers can enter the virtual environment and see the advanced features of Lucid Air.

“NVIDIA CloudXR allows people all over the world to experience an incredibly immersive, personalized design with the new Lucid Air,” said Thomas Orenz, director of digital interactive marketing at Lucid Motors. “By using the AWS cloud, we can save on infrastructure costs by removing the need for onsite servers, while also dynamically scaling the VR configuration experiences for our customers.”

Another early adopter of CloudXR on AWS is The Gettys Group, a hospitality design, branding and development company based in Chicago. Gettys frequently partners with visualization company Theia Interactive to turn the design process into interactive Unreal Engine VR experiences.

When the coronavirus pandemic hit, Gettys and Theia used NVIDIA CloudXR to deliver customer projects to a local Oculus Quest HMD, streaming from an AWS EC2 P3 instance with NVIDIA Virtual Workstations.

“This is a game changer — by streaming collaborative experiences from AWS, we can digitally bring project stakeholders together on short notice for quick VR design alignment meetings,” said Ron Swidler, chief innovation officer at The Gettys Group. “This is going to save a ton of time and money, but more importantly it’s going to increase client engagement, understanding and satisfaction.”


Next-Level Streaming from the Cloud

CloudXR is built on NVIDIA RTX GPUs to allow streaming of immersive AR, VR or mixed reality experiences from anywhere.

The platform includes:

  • NVIDIA CloudXR SDK, which provides support for all OpenVR apps and includes broad client support for phones, tablets and HMDs. Its adaptive streaming protocol delivers the richest experiences with the lowest perceived latency by constantly adapting to network conditions.
  • NVIDIA Virtual Workstations to deliver the most immersive, highest quality graphics at the fastest frame rates. It’s available from cloud providers such as AWS, or can be deployed from an enterprise data center.
  • NVIDIA AI SDKs to accelerate performance and enhance immersive presence.

With the NVIDIA CloudXR platform on Amazon EC2 G4 and P3 instances supporting NVIDIA T4 and V100 GPUs, companies can deliver high-quality virtual experiences to any user, anywhere in the world.

Availability Coming Soon

NVIDIA CloudXR on AWS will be generally available early next year, with a private beta available in the coming months. Sign up now to get the latest news and updates on upcoming CloudXR releases, including the private beta.

Triaging COVID-19 Patients: 20 Hospitals in 20 Days Build AI Model that Predicts Oxygen Needs

Researchers at NVIDIA and Massachusetts General Brigham Hospital have developed an AI model that determines whether a person showing up in the emergency room with COVID-19 symptoms will need supplemental oxygen hours or even days after an initial exam.

The original model, named CORISK, was developed by scientist Dr. Quanzheng Li at Mass General Brigham. It combines medical imaging and health records to help clinicians more effectively manage hospitalizations at a time when many countries may start seeing a second wave of COVID-19 patients.

Oxygen prediction AI workflow

To develop an AI model that doctors trust and that generalizes to as many hospitals as possible, NVIDIA and Mass General Brigham embarked on an initiative called EXAM (EMR CXR AI Model), the largest, most diverse federated learning initiative, with 20 hospitals from around the world.

In just two weeks, the global collaboration achieved a model with an area under the curve of 0.94 (against an AUC goal of 1.0), resulting in excellent prediction of the level of oxygen required by incoming patients. The federated learning model will be released as part of NVIDIA Clara on NGC in the coming weeks.

Looking Inside the ‘EXAM’ Initiative

Using NVIDIA Clara Federated Learning Framework, researchers at individual hospitals were able to use a chest X-ray, patient vitals and lab values to train a local model and share only a subset of model weights back with the global model in a privacy-preserving technique called federated learning.

The ultimate goal of this model is to predict the likelihood that a person showing up in the emergency room will need supplemental oxygen, which can aid physicians in determining the appropriate level of care for patients, including ICU placement.

Dr. Ittai Dayan, who leads development and deployment of AI at Mass General Brigham, co-led the EXAM initiative with NVIDIA, and facilitated the use of CORISK as the starting point for the federated learning training. The improvements were achieved by training the model on distributed data from a multinational, diverse dataset of patients across North and South America, Canada, Europe and Asia.

In addition to Mass Gen Brigham and its affiliated hospitals, other participants included: Children’s National Hospital in Washington, D.C.; NIHR Cambridge Biomedical Research Centre; The Self-Defense Forces Central Hospital in Tokyo; National Taiwan University MeDA Lab and MAHC and Taiwan National Health Insurance Administration; Kyungpook National University Hospital in South Korea; Faculty of Medicine, Chulalongkorn University in Thailand; Diagnosticos da America SA in Brazil; University of California, San Francisco; VA San Diego; University of Toronto; National Institutes of Health in Bethesda, Maryland; University of Wisconsin-Madison School of Medicine and Public Health; Memorial Sloan Kettering Cancer Center in New York; and Mount Sinai Health System in New York.

Each of these hospitals used NVIDIA Clara to train its local models and participate in EXAM.

Rather than needing to pool patient chest X-rays and other confidential information into a single location, each institution uses a secure, in-house server for its data. A separate server, hosted on AWS, holds the global deep neural network, and each participating hospital gets a copy of the model to train on its own dataset.
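
In spirit, each round of training works like federated averaging: sites train locally, and only model weights travel to the central server. The sketch below is a bare-bones NumPy illustration with made-up data and a stand-in model; the EXAM initiative itself uses the NVIDIA Clara Federated Learning Framework with additional privacy protections:

```python
# Bare-bones federated-averaging sketch: each site trains on local data it
# never shares, and only weight updates flow to the global server. All of
# the data, model and training logic here are illustrative stand-ins.
import numpy as np

def local_update(global_weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    # Placeholder for a round of local training at one hospital.
    gradient = np.random.randn(*global_weights.shape) * 0.01  # stand-in for real gradients
    return global_weights - gradient

def federated_round(global_weights: np.ndarray, sites: list) -> np.ndarray:
    # Each participating site trains locally and returns updated weights;
    # the global server then averages them (equal weighting for simplicity).
    updates = [local_update(global_weights, data) for data in sites]
    return np.mean(updates, axis=0)

global_weights = np.zeros(100)                          # stand-in for the shared model
sites = [np.random.randn(500, 10) for _ in range(20)]   # 20 hospitals' private datasets

for _ in range(5):                                      # a few communication rounds
    global_weights = federated_round(global_weights, sites)
```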

Collaboration on a Global Scale

Large-scale federated learning projects also are underway, aimed at improving drug discovery and bringing AI benefits to the point of care.

Owkin is teaming up with NVIDIA, King’s College London and more than a dozen other organizations on MELLODDY, a drug-discovery consortium based in the U.K., to demonstrate how federated learning techniques could give pharmaceutical partners the best of both worlds: the ability to leverage the world’s largest collaborative drug compound dataset for AI training without sacrificing data privacy.

King’s College London is hoping that its work with federated learning, as part of its London Medical Imaging and Artificial Intelligence Centre for Value-Based Healthcare project, could lead to breakthroughs in classifying stroke and neurological impairments, determining the underlying causes of cancers, and recommending the best treatment for patients.

Learn more about another AI model for COVID-19 utilizing a multinational dataset in this paper, and about the science behind federated learning in this paper.

American Express Adopts NVIDIA AI to Help Prevent Fraud and Foil Cybercrime

Financial fraud is surging along with waves of cybersecurity breaches.

Cybercrime costs the global economy $600 billion annually, or 0.8 percent of worldwide GDP, according to a 2018 estimate from McAfee. And consulting firm Accenture forecasts cyberattacks could cost companies $5.2 trillion worldwide by 2024.

Credit and bank cards are a major target. American Express, which handles more than eight billion transactions a year, is using deep learning on the NVIDIA GPU computing platform to combat fraud.

American Express has now deployed deep-learning-based models optimized with NVIDIA TensorRT and running on NVIDIA Triton Inference Server to detect fraud, NVIDIA CEO Jensen Huang announced at the GPU Technology Conference on Monday.

NVIDIA TensorRT is a high performance deep learning inference optimizer and runtime that minimizes latency and maximizes throughput.

NVIDIA Triton Inference Server software simplifies model deployment at scale and can be used as a microservice that enables applications to use AI models in data center production.

“Our fraud algorithms monitor in real time every American Express transaction around the world for more than $1.2 trillion spent annually, and we generate fraud decisions in mere milliseconds,” said Manish Gupta, vice president of Machine Learning and Data Science Research at American Express.

Online Shopping Spree

Online shopping has spiked since the pandemic. In the U.S. alone, online commerce rose 49 percent in April compared with early March, according to Adobe’s Digital Economy Index.

That means less cash, more digital dollars. And more digital dollars demand bank and credit card usage, which has already seen increased fraud.

“Card fraud netted criminals $3.88 billion more in 2018 than in 2017,” said David Robertson, publisher of The Nilson Report, which tracks information about the global payments industry.

American Express, with more than 115 million active credit cards, has maintained the lowest fraud rate in the industry for 13 years in a row, according to The Nilson Report.

“Having our card members and merchants’ back is our top priority, so keeping our fraud rates low is key to achieving that goal,” said Gupta.

Anomaly Detection with GPU Computing

With online transactions rising, fraudsters are waging more complex attacks as financial firms step up security measures.

One area that is easier to monitor is anomalous spending patterns. These types of transactions on one card — known as “out of pattern” — could show a coffee was purchased in San Francisco and then five minutes later a tank of gas was purchased in Los Angeles.

Such anomalies are red-flagged using recurrent neural networks, or RNNs, which are particularly good at guessing what comes next in a sequence of data.

American Express has deployed long short-term memory networks, or LSTMs, a type of RNN that can deliver improved performance.

And that can mean closing gaps on latency and accuracy, two areas where American Express has made leaps. The teams there used NVIDIA DGX systems to accelerate the building and training of these LSTM models on mountains of structured and unstructured data using TensorFlow.
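
A toy version of such a sequence model in TensorFlow might look like the sketch below; the feature counts, layer sizes and synthetic data are invented for illustration and bear no relation to American Express’s production models:

```python
# Toy TensorFlow LSTM that scores a sequence of card transactions as
# in-pattern or out-of-pattern. Shapes and data are invented; real fraud
# models train on far larger feature sets and datasets.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 20, 8   # last 20 transactions, 8 features each

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(SEQ_LEN, N_FEATURES)),  # summarizes the spending sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),               # probability the activity is out of pattern
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Synthetic stand-in data: sequences of transaction features plus fraud labels.
x = np.random.rand(1024, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1))

model.fit(x, y, batch_size=128, epochs=2)
```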

50x Gains Over CPUs

The recently released TensorRT-optimized LSTM network aids the system that analyzes transaction data on tens of millions of daily transactions in real time. This LSTM is now deployed using the NVIDIA Triton Inference Server on NVIDIA T4 GPUs for split-second inference.

Results are in: American Express was able to implement this enhanced, real-time fraud detection system for improved accuracy. It operates within a tight two-millisecond latency requirement, and this new system delivers a 50x improvement over a CPU-based configuration, which couldn’t meet the goal.

The financial services giant’s GPU-accelerated LSTM deep neural network combined with its long-standing gradient boosting machine (GBM) model — used for regression and classification — has improved fraud detection accuracy by up to six percent in specific segments.

Accuracy matters. A false positive that denies a customer’s legitimate transaction is an unpleasant situation for card members and merchants, says American Express.

“Especially in this environment, our customers need us now more than ever, so we’re supporting them with best-in-class fraud protection and servicing,” Gupta said.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

NVIDIA vGPU Software Accelerates Performance with Support for NVIDIA Ampere Architecture

From AI to VDI, NVIDIA virtual GPU products provide employees with powerful performance for any workflow.

vGPU technology helps IT departments easily scale the delivery of GPU resources, and allows professionals to collaborate and run advanced graphics and computing workflows from the data center or cloud.

Now, NVIDIA is expanding its vGPU software features with a new release that supports the NVIDIA A100 Tensor Core GPU with NVIDIA Virtual Compute Server (vCS) software. Based on NVIDIA vGPU technology, vCS enables AI and compute-intensive workloads to run in VMs.

With support for the NVIDIA A100, the latest NVIDIA vCS delivers significantly faster performance for AI and data analytics workloads.

Powered by the NVIDIA Ampere architecture, the A100 GPU provides strong scaling for GPU compute and deep learning applications running in single- and multi-GPU workstations, servers, clusters, cloud data centers, systems at the edge and supercomputers.

Enterprise data centers standardized on hypervisor-based virtualization can now deploy the A100 with vCS for all the operational benefits that virtualization brings with management and monitoring, without sacrificing performance. And with the workloads running in virtual machines, they can be managed, monitored and run remotely on any device, anywhere.

Graph shows that normalized performance of MIG 2g.10gb running an inferencing workload on bare metal (dark green) is nearly the same as when running a Virtual Compute Server VM on each MIG instance (light green).

Engineers, researchers, students, data scientists and others can now tackle compute-intensive workloads in a virtual environment, accessing the most powerful GPU in the world through virtual machines that can be securely provisioned in minutes. As NVIDIA A100 GPUs become available in vGPU-certified servers from NVIDIA’s partners, professionals across all industries can accelerate their workloads with powerful performance.

Also, IT professionals get the management, monitoring and multi-tenancy benefits from hypervisors like Red Hat RHV/RHEL.

“Our customers have an increasing need to manage multi-tenant workflows running on virtual machines while providing isolation and security benefits,” said Chuck Dubuque, senior director of product marketing at Red Hat. “The new multi-instance GPU capabilities on NVIDIA A100 GPUs enable a new range of AI-accelerated workloads that run on Red Hat platforms from the cloud to the edge.”

Additional new features of the NVIDIA vGPU September 2020 release include:

  1. Multi-Instance GPU (MIG) with VMs: MIG expands the performance and value of the NVIDIA A100 by partitioning the GPU into up to seven instances. Each MIG instance can be fully isolated with its own high-bandwidth memory, cache and compute cores. Combining MIG with vCS, enterprises can take advantage of the management, monitoring and operational benefits of hypervisor-based server virtualization, running a VM on each MIG partition.
  2. Heterogeneous Profiles and OSes: With the ability to have different-sized instances through MIG, heterogeneous vCS profiles can be used on an A100 GPU. This allows VMs of various sizes to be run on a single A100 GPU. Additionally, with VMs running on NVIDIA GPUs with vCS, heterogeneous operating systems can also be run on an A100 GPU, where different Linux distributions can be run simultaneously in different VMs.
  3. GPUDirect Remote Direct Memory Access: Now supported with NVIDIA vCS, GPUDirect RDMA enables network devices to directly access GPU memory, bypassing CPU host memory and decreasing GPU-GPU communication latency to completely offload the CPU in a virtualized environment.

Learn more about NVIDIA Virtual Compute Server, including how the technology was recognized as Disruptive Technology of the Year at VMworld, and see the latest announcement of VMware and NVIDIA partnering to develop enterprise AI solutions.

VMware vSphere support for vCS with A100 will be available next year. The NVIDIA virtual GPU portfolio also includes the Quadro Virtual Workstation for technical and creative professionals, and GRID vPC and vApps for knowledge workers.

GTC Brings the Latest in vGPU

Hear more about how NVIDIA Virtual Compute Server is being used in industries at the GPU Technology Conference, taking place October 5-9.

Adam Tetelman and Jeff Weiss from NVIDIA, joined by Timothy Dietrich from NetApp, will give an overview of NVIDIA Virtual Compute Server technology and discuss use cases and manageability.

As well, a panel of experts from NVIDIA, ManTech and Maxar will share how NVIDIA vGPU is used in their solutions to analyze large amounts of data, enable remote visualization and accelerate compute for video streams and images.

Register now for GTC and check out all the sessions available.

Get Trained, Go Deep: How Organizations Can Transform Their Workforce into an AI Powerhouse

Despite the pandemic putting in-person training on hold, organizations can still offer instructor-led courses to their staff to develop key skills in AI, data science and accelerated computing.

NVIDIA’s Deep Learning Institute offers many online courses that deliver hands-on training. One of its most popular — recently updated and retitled as The Fundamentals of Deep Learning — will be taken by hundreds of attendees at next week’s GPU Technology Conference, running Oct. 5-9.

Organizations interested in boosting the deep learning skills of their personnel can arrange to get their teams trained by requesting a workshop from the DLI Course Catalog.

“Technology professionals who take our revamped deep learning course will emerge with the basics they need to start applying deep learning to their most challenging AI and machine learning applications,” said Craig Clawson, director of Training Services at NVIDIA. “This course is a key building block for developing a cutting-edge AI skillset.”

Huge Demand for Deep Learning

Deep learning is at the heart of the fast-growing fields of machine learning and AI. This makes it a skill that’s in huge demand and has put companies across industries in a race to recruit talent. LinkedIn recently reported that the fastest growing job category in the U.S. is AI specialist, with annual job growth of 74 percent and an average annual salary of $136,000.

For many organizations, especially those in the software, internet, IT, higher education and consumer electronics sectors, investing in upskilling current employees can be critical to their success while offering a path to career advancement and increasing worker retention.

Deep Learning Application Development

With interest in the field heating up, a recent article in Forbes highlighted that AI and machine learning, data science and IoT are among the most in-demand skills tech professionals should focus on. In other words, tech workers who lack these skills could soon find themselves at a professional disadvantage.

By developing needed skills, employees can make themselves more valuable to their organizations. And their employers benefit by embedding machine learning and AI functionality into their products, services and business processes.

“Organizations are looking closely at how AI and machine learning can improve their business,” Clawson said. “As they identify opportunities to leverage these technologies, they’re hustling to either develop or import the required skills.”

Get a glimpse of the DLI experience in this short video:

DLI Courses: An Invaluable Resource

The DLI has trained more than 250,000 developers globally. It has continued to deliver a wide range of training remotely via virtual classrooms during the COVID-19 pandemic.

Classes are taught by DLI-certified instructors who are experts in their fields, while breakout rooms support collaboration among students and interaction with the instructors.

And by completing select courses, students can earn an NVIDIA Deep Learning Institute certificate to demonstrate subject matter competency and support career growth.

It would be hard to exaggerate the potential that this new technology and the NVIDIA developer community holds for improving the world — and the community is growing faster than ever. It took 13 years for the number of registered NVIDIA developers to reach 1 million. Just two years later, it has grown to over 2 million.

Whether enabling new medical procedures, inventing new robots or joining the effort to combat COVID-19, the NVIDIA developer community is breaking new ground every day.

Courses like the re-imagined Fundamentals of Deep Learning are helping developers and data scientists deliver breakthrough innovations across a wide range of industries and application domains.

“Our courses are structured to give developers the skills they need to thrive as AI and machine learning leaders,” said Clawson. “What they take away from the courses, both for themselves and their organizations, is immeasurable.”

To get started on the journey of transforming your organization into an AI powerhouse, request a DLI workshop today.

What is deep learning? Read more about this core technology.
