What Is Cloud Gaming?

Cloud gaming uses powerful, industrial-strength GPUs inside secure data centers to stream your favorite games over the internet to you, so you can play the latest games on nearly any device, even ones that can’t normally run them.

But First, What Is Cloud Gaming?

While the technology is complex, the concept is simple.

Cloud gaming takes your favorite game, and instead of using the device in front of you to power it, a server — a powerful, industrial-strength PC — runs the game from a secure data center.

Gameplay is then streamed over the internet back to you, allowing you to play the latest games on nearly any device, even ones that can’t actually run that game.

Cloud gaming streams the latest games from powerful GPUs in remote data centers to nearly any device.

Video games are interactive, obviously. So, cloud gaming servers need to process information and render frames in real time. Unlike movies or TV shows that can provide a buffer — a few extra seconds of information that gets sent to your device before it’s time to be displayed — games are dependent on the user’s next keystroke or button press.

Introducing GeForce NOW

We started our journey to cloud gaming over 10 years ago, spending that time to optimize every millisecond of the pipeline that we manage, from the graphics cards in the data centers to the software on your local device.

Here’s how it works.

GeForce NOW is a service that takes a GeForce gaming PC’s power and flexibility and makes it accessible through the cloud. This gives you an always-on gaming rig that never needs upgrading, patching or updating — across all of your devices.

One of the things that makes GeForce NOW unique is that it connects to popular PC games stores — Steam, Epic Games Store, Ubisoft Connect and more — so gamers can play the same PC version of games their friends are playing.

It also means, if they already own a bunch of games, they can log in and start playing them. And if they have, or upgrade to, a gaming rig, they have access to download and play those games on that local PC.

GeForce NOW empowers you to take your PC games with you, wherever you go.

Gamers get an immersive PC gaming experience, instant access to the world’s most popular games and gaming communities, and the freedom to play on any device, at any time.

It’s PC gaming for those whose PCs have integrated graphics, for Macs and Chromebooks that don’t have access to the latest games, or for internet-connected mobile devices where PC gaming is only a dream.

Over 80 percent of GeForce NOW members are playing on devices that don’t meet the min spec for the games they’re playing.

To start, sign up for the service, download the app and begin your cloud gaming journey.

Powering PC Gaming from the Cloud

Cloud data centers with NVIDIA GPUs power the world’s most computationally complex tasks, from AI to data analytics and research. Combined with advanced GeForce PC gaming technologies, GeForce NOW delivers high-end PC gaming to passionate gamers.

NVIDIA RTX servers provide the backbone for GeForce NOW.

GeForce NOW data centers include NVIDIA RTX servers that feature RTX GPUs. These GPUs enable the holy grail of modern graphics, real-time ray tracing, as well as DLSS, NVIDIA’s groundbreaking AI rendering technology that boosts frame rates without compromising image quality. The hardware is supported by NVIDIA Game Ready Driver performance improvements.

Patented encoding technology — along with hardware acceleration for both video encoding and decoding, pioneered by NVIDIA more than a decade ago — allows gameplay to be streamed at high frame rates, with latency low enough that most games feel as if they’re being played locally. Gameplay rendered in GeForce NOW data centers is converted into high-definition H.265 and H.264 video and streamed back to the gamer almost instantaneously.

The total time it takes from button press or keystroke to the action appearing on the screen is less than one-tenth of a second, faster than the blink of an eye.
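
That sub-100-millisecond figure is easiest to see as a budget summed across the stages of the streaming pipeline. The stage names and millisecond values below are purely illustrative assumptions, not NVIDIA's measured numbers:

```python
# Hypothetical click-to-photon budget for a cloud gaming session.
# Every value below is illustrative, not a measured NVIDIA figure.
pipeline_ms = {
    "capture input + upload": 15,
    "server-side game render": 17,
    "video encode (H.264/H.265)": 5,
    "network transit to client": 30,
    "client-side decode": 8,
    "display scan-out": 16,
}

total_ms = sum(pipeline_ms.values())
for stage, ms in pipeline_ms.items():
    print(f"{stage:<28} {ms:>3} ms")
print(f"{'total':<28} {total_ms:>3} ms")

# The whole round trip has to fit in under a tenth of a second.
assert total_ms < 100
```

Trimming any one stage only helps if the total stays under budget, which is why the blog emphasizes optimizing "every millisecond of the pipeline" rather than any single component.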

Growing Cloud Gaming Around the World

With the ambition of delivering quality cloud gaming to all gamers, NVIDIA works with partners around the world, including telecommunications and service providers, to put GeForce NOW servers to work in their own data centers, ensuring lightning-fast connections.

Partners that have already deployed RTX cloud gaming servers include SoftBank and KDDI in Japan; LG Uplus in Korea; GFN.RU in Russia, Armenia, Azerbaijan, Belarus, Kazakhstan, Georgia, Moldova, Ukraine and Uzbekistan; Zain in Saudi Arabia; and Taiwan Mobile in Taiwan.

Together with partners from around the globe, we’re scaling GeForce NOW to enable millions of gamers to play their favorite games, when and where they want.

Get started with your gaming adventures on GeForce NOW.

Editor’s note: This is the first in a series on the GeForce NOW game-streaming service, how it works, ways you can make the most of it, and where it’s going next. 

In our next blog, we’ll talk about how we bring your games to GeForce NOW.

Follow GeForce NOW on Facebook and Twitter and stay up to date on the latest features and game launches. 

The post What Is Cloud Gaming? appeared first on The Official NVIDIA Blog.


In the Drink of an AI: Startup Opseyes Instantly Analyzes Wastewater

Let’s be blunt: potentially toxic waste is just about the last thing you want to get in the mail. Eliminating the need to mail samples is just one of the opportunities for AI to make the business of analyzing wastewater better.

It’s an industry that goes far beyond just making sure water coming from traditional sewage plants is clean.

Just about every industry on earth — from computer chips to potato chips — relies on putting water to work, which means we’re all, literally, swimming in the stuff.

Just What the Doctor Ordered

The way wastewater gets analyzed started to change, however, thanks to a conversation Opseyes founder Bryan Arndt, then a managing consultant with Denmark-based architecture and engineering firm Ramboll, had with his brother, a radiologist.

Arndt was intrigued when his brother described how deep learning was being set loose on medical images.

Arndt quickly realized that the same technology — deep learning — that helps radiologists analyze images of the human body faster and more accurately could almost instantly analyze images, taken through microscopes, of wastewater samples.

Faster Flow

The result, developed by Arndt and his colleagues at Ramboll, a wastewater industry leader for more than 50 years, dramatically speeds up an industry that’s long relied on sending tightly sealed samples of some of the stinkiest stuff on earth through the mail.

That’s critical when cities and towns and industries of all kinds are constantly taking water from lakes and rivers, like the Mississippi, treating it, and returning it to nature.

“We had one client find out their discharge was a quarter-mile, at best, from the intake for the next city’s water supply,” Arndt says. “Someone is always drinking what your tube is putting out.”

That makes wastewater enormously important.

Water, Water, Everywhere

It’s an industry that was kicked off by the 1972 U.S. Clean Water Act, a landmark not just in the United States, but globally.

Thanks to growing awareness of the importance of clean water, analysts estimate the global wastewater treatment market will be worth more than $210 billion by 2025.

The challenge: while almost every industry creates wastewater, wastewater expertise isn’t exactly ubiquitous.

Experts who can peer through a microscope and identify, say, the six most common bacterial “filaments” as they’re known in the industry, or critters such as tardigrades, are scarce.

You’ve Got … Ugh

That means samples of wastewater, or soil containing that water, have to be sent through the mail to get to these experts, who often have a backlog of samples to go through.

While Arndt says people in his industry take precautions to seal potentially toxic waste and track it to ensure it gets to the right place, mailing samples is still time-consuming.

The solution, Arndt realized, was to use deep learning to train an AI that could yield instantaneous results. To do this, last year Arndt reached out on social media to colleagues throughout the wastewater industry to send him samples.

Least Sexy Photoshoot Ever

He and his small team then spent months creating more than 6,000 images of these samples in Ramboll’s U.S. labs, where they build elaborate models of wastewater systems before deploying full-scale systems for clients. Think of it as the least sexy photoshoot, ever.

These images were then labeled and used by a data science team led by Robin Schlenga to train a convolutional neural network accelerated by NVIDIA GPUs. Launched last September after a year and a half of development, Opseyes allows customers to use their smartphone to take a picture of a sample through a microscope and get answers within minutes.

It’s just another example of how expertise in companies seemingly far outside of tech can be transformed into an AI. After all, “no one wants to have to wait a week to know if it’s safe to take a sip of water,” Arndt says.

Bottoms up.

Featured image credit: Opseyes

The post In the Drink of an AI: Startup Opseyes Instantly Analyzes Wastewater appeared first on The Official NVIDIA Blog.


NVIDIA Deep Learning Institute Releases New Accelerated Data Science Teaching Kit for Educators

As data grows in volume, velocity and complexity, the field of data science is booming.

There’s an ever-increasing demand for talent and skillsets to help design the best data science solutions. However, expertise that can help drive these breakthroughs requires students to have a foundation in various tools, programming languages, computing frameworks and libraries.

That’s why the NVIDIA Deep Learning Institute has released the first version of its Accelerated Data Science Teaching Kit for qualified educators. The kit has been co-developed with Polo Chau, from the Georgia Institute of Technology, and Xishuang Dong, from Prairie View A&M University, two highly regarded researchers and educators in the fields of data science and accelerating data analytics with GPUs.

“Data science unlocks the immense potential of data in solving societal challenges and large-scale complex problems across virtually every domain, from business, technology, science and engineering to healthcare, government and many more,” Chau said.

The free teaching materials cover fundamental and advanced topics in data collection and preprocessing, accelerated data science with RAPIDS, GPU-accelerated machine learning, data visualization and graph analytics.

Content also covers culturally responsive topics such as fairness and data bias, as well as challenges and important individuals from underrepresented groups.

This first release of the Accelerated Data Science Teaching Kit includes focused modules covering:

  • Introduction to Data Science and RAPIDS
  • Data Collection and Pre-processing (ETL)
  • Data Ethics and Bias in Data Sets
  • Data Integration and Analytics
  • Data Visualization
  • Distributed Computing with Hadoop, Hive, Spark and RAPIDS

More modules are planned for future releases.

All modules include lecture slides, lecture notes and quiz/exam problem sets, and most modules include hands-on labs with included datasets and sample solutions in Python and interactive Jupyter notebook formats. Lecture videos will be included for all modules in later releases.

DLI Teaching Kits also come bundled with free GPU resources in the form of Amazon Web Services credits for educators and their students, as well as free DLI online, self-paced courses and certificate opportunities.

“Data science is such an important field of study, not just because it touches every domain and vertical, but also because data science addresses important societal issues relating to gender, race, age and other ethical elements of humanity,” said Dong, whose school is a Historically Black College/University.

This is the fourth teaching kit released by the DLI, as part of its program that has reached 7,000 qualified educators so far. Learn more about NVIDIA Teaching Kits.

The post NVIDIA Deep Learning Institute Releases New Accelerated Data Science Teaching Kit for Educators appeared first on The Official NVIDIA Blog.


What Is Conversational AI?

For a quality conversation between a human and a machine, responses have to be quick, intelligent and natural-sounding.

But up to now, developers of language-processing neural networks that power real-time speech applications have faced an unfortunate trade-off: Be quick and you sacrifice the quality of the response; craft an intelligent response and you’re too slow.

That’s because human conversation is incredibly complex. Every statement builds on shared context and previous interactions. From inside jokes to cultural references and wordplay, humans speak in highly nuanced ways without skipping a beat. Each response follows the last, almost instantly. Friends anticipate what the other will say before words even get uttered.

What Is Conversational AI? 

True conversational AI is a voice assistant that can engage in human-like dialogue, capturing context and providing intelligent responses. Such AI models must be massive and highly complex.

But the larger a model is, the longer the lag between a user’s question and the AI’s response. Gaps longer than just three-tenths of a second can sound unnatural.

With NVIDIA GPUs, conversational AI software, and CUDA-X AI libraries, massive, state-of-the-art language models can be rapidly trained and optimized to run inference in just a couple of milliseconds — thousandths of a second — which is a major stride toward ending the trade-off between an AI model that’s fast versus one that’s large and complex.

These breakthroughs help developers build and deploy the most advanced neural networks yet, and bring us closer to the goal of achieving truly conversational AI.

GPU-optimized language understanding models can be integrated into AI applications for such industries as healthcare, retail and financial services, powering advanced digital voice assistants in smart speakers and customer service lines. These high-quality conversational AI tools can allow businesses across sectors to provide a previously unattainable standard of personalized service when engaging with customers.

How Fast Does Conversational AI Have to Be?

The typical gap between responses in natural conversation is about 300 milliseconds. For an AI to replicate human-like interaction, it might have to run a dozen or more neural networks in sequence as part of a multilayered task — all within that 300 milliseconds or less.

Responding to a question involves several steps: converting a user’s speech to text, understanding the text’s meaning, searching for the best response to provide in context, and providing that response with a text-to-speech tool. Each of these steps requires running multiple AI models — so the time available for each individual network to execute is around 10 milliseconds or less.
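
As a back-of-the-envelope illustration of that budget (the overhead split and model count below are assumptions, not published figures), dividing the compute share of a 300-millisecond turn across a dozen networks lands at roughly 10 milliseconds apiece:

```python
# Illustrative conversational AI latency budget. The 180 ms reserved for
# audio capture, network transport and playback is an assumption, chosen
# only to show how a ~10 ms per-network budget can fall out of 300 ms.
TURN_BUDGET_MS = 300     # typical gap between turns in conversation
NON_COMPUTE_MS = 180     # hypothetical capture/transport/playback overhead
NUM_NETWORKS = 12        # "a dozen or more" models across ASR, NLU and TTS

compute_ms = TURN_BUDGET_MS - NON_COMPUTE_MS
per_network_ms = compute_ms / NUM_NETWORKS
print(f"{compute_ms} ms of compute / {NUM_NETWORKS} networks "
      f"= {per_network_ms:.0f} ms per network")
```

Because the networks run in sequence, a single slow model blows the entire turn budget, which is why per-model inference time matters so much.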

If it takes longer for each model to run, the response is too sluggish and the conversation becomes jarring and unnatural.

Working with such a tight latency budget, developers of current language understanding tools have to make trade-offs. A high-quality, complex model could be used as a chatbot, where latency isn’t as essential as in a voice interface. Or, developers could rely on a less bulky language processing model that more quickly delivers results, but lacks nuanced responses.

NVIDIA Jarvis is an application framework for developers building highly accurate conversational AI applications that can run far below the 300-millisecond threshold required for interactive apps. Developers at enterprises can start from state-of-the-art models that have been trained for more than 100,000 hours on NVIDIA DGX systems.

Enterprises can apply transfer learning with the Transfer Learning Toolkit to fine-tune these models on their custom data. These models are better suited to understanding company-specific jargon, leading to higher user satisfaction. The models can be optimized with TensorRT, NVIDIA’s high-performance inference SDK, and deployed as services that can run and scale in the data center. Speech and vision can be used together to create apps that make interactions with devices natural and more human-like. Jarvis makes it possible for every enterprise to use world-class conversational AI technology that previously only AI experts could attempt.

What Will Future Conversational AI Sound Like? 

Basic voice interfaces like phone tree algorithms (with prompts like “To book a new flight, say ‘bookings’”) are transactional, requiring a set of steps and responses that move users through a pre-programmed queue. Sometimes it’s only the human agent at the end of the phone tree who can understand a nuanced question and solve the caller’s problem intelligently.

Voice assistants on the market today do much more, but are based on language models that aren’t as complex as they could be, with millions instead of billions of parameters. These AI tools may stall during conversations by providing a response like “let me look that up for you” before answering a posed question. Or they’ll display a list of results from a web search rather than responding to a query with conversational language.

A truly conversational AI would go a leap further. The ideal model is one complex enough to accurately understand a person’s queries about their bank statement or medical report results, and fast enough to respond near instantaneously in seamless natural language.

Applications for this technology could include a voice assistant in a doctor’s office that helps a patient schedule an appointment and follow-up blood tests, or a voice AI for retail that explains to a frustrated caller why a package shipment is delayed and offers a store credit.

Demand for such advanced conversational AI tools is on the rise: an estimated 50 percent of searches will be conducted with voice by 2020, and, by 2023, there will be 8 billion digital voice assistants in use.

What Is BERT? 

BERT (Bidirectional Encoder Representations from Transformers) is a large, computationally intensive model that set the state of the art for natural language understanding when it was released last year. With fine-tuning, it can be applied to a broad range of language tasks such as reading comprehension, sentiment analysis or question and answer. 

Trained on a massive corpus of 3.3 billion words of English text, BERT performs exceptionally well — better than an average human in some cases — to understand language. Its strength is its capability to train on unlabeled datasets and, with minimal modification, generalize to a wide range of applications. 

The same BERT can be used to understand several languages and be fine-tuned to perform specific tasks like translation, autocomplete or ranking search results. This versatility makes it a popular choice for developing complex natural language understanding. 

At BERT’s foundation is the Transformer layer, an alternative to recurrent neural networks that applies an attention technique — parsing a sentence by focusing on the most relevant words that come before and after each word being interpreted.
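
As a concrete, heavily simplified sketch: the core of that attention technique is a weighted average over every position in the sentence, both before and after the word being encoded. The toy single-head version below uses plain Python with no learned projection matrices, so it is schematic rather than BERT's actual implementation:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(embeddings):
    """Single-head self-attention without learned projections:
    each position attends to every position, before and after it."""
    dim = len(embeddings[0])
    outputs = []
    for query in embeddings:
        # Scaled dot-product score of the query against every key.
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dim)
                  for key in embeddings]
        weights = softmax(scores)
        # Context-aware output: weighted sum of all value vectors.
        outputs.append([sum(w * value[d] for w, value in zip(weights, embeddings))
                        for d in range(dim)])
    return outputs

# Toy 2-d "embeddings" standing in for a three-word sentence.
sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for vector in self_attention(sentence):
    print([round(x, 3) for x in vector])
```

Because every output mixes information from the whole sequence at once, context from either direction can disambiguate a word, which is the "bidirectional" property the next example illustrates.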

The statement “There’s a crane outside the window,” for example, could describe either a bird or a construction site, depending on whether the sentence ends with “of the lakeside cabin” or “of my office.” Using a method known as bidirectional or nondirectional encoding, language models like BERT can use context cues to better understand which meaning applies in each case.

Leading language processing models across domains today are based on BERT, including BioBERT (for biomedical documents) and SciBERT (for scientific publications).

How Does NVIDIA Technology Optimize Transformer-Based Models? 

The parallel processing capabilities and Tensor Core architecture of NVIDIA GPUs allow for higher throughput and scalability when working with complex language models — enabling record-setting performance for both the training and inference of BERT.

Using the powerful NVIDIA DGX SuperPOD system, the 340 million-parameter BERT-Large model can be trained in under an hour, compared to a typical training time of several days. But for real-time conversational AI, the essential speedup is for inference.

NVIDIA developers optimized the 110 million-parameter BERT-Base model for inference using TensorRT software. Running on NVIDIA T4 GPUs, the model was able to compute responses in just 2.2 milliseconds when tested on the Stanford Question Answering Dataset. Known as SQuAD, the dataset is a popular benchmark to evaluate a model’s ability to understand context.

The latency threshold for many real-time applications is 10 milliseconds. Even highly optimized CPU code results in a processing time of more than 40 milliseconds.
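
For context on how figures like these are obtained: inference latency is usually reported as a statistic over many timed runs after a warmup phase. The harness below is a generic sketch; the `infer` function is a hypothetical stand-in for a real engine call (such as a compiled inference engine), not an actual TensorRT API:

```python
import statistics
import time

def infer(batch):
    """Stand-in for a real inference call; burns a small,
    deterministic amount of CPU work so there is something to time."""
    return sum(x * x for x in batch)

def measure_latency_ms(fn, batch, warmup=10, iters=200):
    """Median wall-clock latency of fn(batch) in milliseconds."""
    for _ in range(warmup):          # warm up caches/allocators first
        fn(batch)
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(batch)
        samples.append((time.perf_counter() - start) * 1000.0)
    # Median is robust to scheduler hiccups; p99 is also commonly reported.
    return statistics.median(samples)

latency_ms = measure_latency_ms(infer, list(range(256)))
print(f"median latency: {latency_ms:.4f} ms")
```

Warmup runs matter because the first few calls often pay one-time costs (allocation, caching, lazy initialization) that would otherwise skew the numbers.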

By shrinking inference time down to a couple milliseconds, it’s practical for the first time to deploy BERT in production. And it doesn’t stop with BERT — the same methods can be used to accelerate other large, Transformer-based natural language models like GPT-2, XLNet and RoBERTa.

To work toward the goal of truly conversational AI, language models are getting larger over time. Future models will be many times bigger than those used today, so NVIDIA built and open-sourced the largest Transformer-based AI yet: GPT-2 8B, an 8.3 billion-parameter language processing model that’s 24x bigger than BERT-Large.

Chart showing the growing number of parameters in deep learning language models

Learn How to Build Your Own Transformer-Based Natural Language Processing Applications

The NVIDIA Deep Learning Institute offers instructor-led, hands-on training on the fundamental tools and techniques for building Transformer-based natural language processing models for text classification tasks, such as categorizing documents. Taught by an expert, this in-depth, 8-hour workshop teaches participants to:

  • Understand how word embeddings have rapidly evolved in NLP tasks, from Word2Vec and recurrent neural network-based embeddings to Transformer-based contextualized embeddings.
  • See how Transformer architecture features, especially self-attention, are used to create language models without RNNs.
  • Use self-supervision to improve the Transformer architecture in BERT, Megatron and other variants for superior NLP results.
  • Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER and question answering.
  • Manage inference challenges and deploy refined models for live applications.

Earn a DLI certificate to demonstrate subject-matter competency and accelerate your career growth. Take this workshop at an upcoming GTC or request a workshop for your organization.

For more information on conversational AI, training BERT on GPUs, optimizing BERT for inference and other projects in natural language processing, check out the NVIDIA Developer Blog.

The post What Is Conversational AI? appeared first on The Official NVIDIA Blog.


Think Aggressively This GFN Thursday with Outriders Demo, 11 Additional Games

Here comes another GFN Thursday, dropping in to co-op with you as we explore the world of Square Enix’s new Outriders game. Before we get into the rest of this week’s new additions, let’s head to Enoch and take a closer look at what makes People Can Fly’s upcoming launch special.

Let’s Ride

From the studio that launched Bulletstorm, Gears of War: Judgment and Painkiller, Outriders takes gamers to the world of Enoch. Embark on a brand-new experience: a single-player or co-op RPG shooter with brutal powers, intense action, deep RPG mechanics and a compelling story set in a dark sci-fi world. The game dynamically adjusts the balance to account for how many players are in a session, keeping the challenge level just right.

Play Outriders on GeForce NOW
Outriders is coming to GeForce NOW in April, but members can play the demo now.

Combining intense gunplay with violent powers and an arsenal of increasingly twisted weaponry and gear sets, Outriders offers countless hours of gameplay from one of the finest shooter developers in the industry, People Can Fly.

The demo has a ton of content to explore. Beyond the main storyline, gamers can explore four quests. And all progress made in the demo carries over to the full game when it launches in April.

GeForce NOW members can play on any supported device — PC, Mac, Chromebook, iOS, Android or Android TV. And with crossplay, members can join friends in Enoch regardless of which platform their friends are playing on.

Like most day-and-date releases on GeForce NOW, we expect to have the Outriders demo streaming within a few hours of it going live on Steam.

Let’s Play Today

In addition to the Outriders demo, let’s take a look at this week’s 11 more new additions to the GeForce NOW library.

Curse of the Dead Gods on GeForce NOW

Curse of the Dead Gods (Steam)

Now out of Early Access, this skill-based roguelike challenges you to explore endless dungeon rooms while turning curses to your advantage. IGN calls the game’s combat system “mechanically simple, but impressively deep.”

Old School Runescape on GeForce NOW

Old School RuneScape (Steam)

Old School RuneScape is RuneScape, but older! This is the open world gamers know and love, as it was in 2007. Better yet, Old School is shaped by you, the players, with regular new content, fixes and expansions voted on by the fans!

Rogue Heroes: Ruins of Tasos on GeForce NOW

Rogue Heroes: Ruins of Tasos (Steam)

A classic adventure for you and up to three friends! Delve deep into procedural dungeons, explore an expansive overworld full of secrets and take down the Titans to save the once-peaceful land of Tasos.

In addition, members can look for the following:

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

The post Think Aggressively This GFN Thursday with Outriders Demo, 11 Additional Games appeared first on The Official NVIDIA Blog.


Feelin’ Like a Million MBUX: AI Cockpit Featured in Popular Mercedes-Benz C-Class

It’s hard not to feel your best when your car makes every commute a VIP experience.

This week, Mercedes-Benz launched the redesigned C-Class sedan and C-Class wagon, packed with new features for the next generation of driving. Both models prominently feature the latest MBUX AI cockpit, powered by NVIDIA, delivering an intelligent user interface for daily driving.

The newest MBUX system debuted with the flagship S-Class sedan in September. With the C-Class, the system now comes to Mercedes-Benz’s most popular model in the midsize sedan segment — the automaker has sold 10.5 million C-Class vehicles since the model was first introduced, and one in every seven Mercedes-Benz vehicles sold belongs to that model line.

NVIDIA and Mercedes-Benz have been working together to drive the future of automotive innovation, from the first-generation MBUX to the upcoming fleet of software-defined vehicles.

This extension of MBUX to such an appealing model is accelerating the adoption of AI into everyday commutes, ushering in a new generation where the car adapts to the driver, not the other way around.

Uncommon Intelligence

With MBUX, the new C-Class sedan and wagon share many of the innovations that have made the S-Class a standout in its segment.

AI cockpits orchestrate crucial safety and convenience features, constantly learning to continuously deliver joy to the customer. Similarly, the MBUX system serves as the central nervous system of the vehicle, intelligently networking all its functions.

“MBUX combines so many features into one intelligent user interface,” said Georges Massing, vice president of Digital Vehicle and Mobility at Mercedes-Benz. “It makes life much easier for our customers.”

The new MBUX system makes the cutting edge in graphics, passenger detection and natural language processing seem effortless. Like in the S-Class, the C-Class system features a driver and media display with crisp graphics that are easily understandable at a glance. The “Hey Mercedes” voice assistant has become even sharper, can activate online services, and continuously improves over time.

MBUX can even recognize biometric identification to ensure the car is always safe and secure. A fingerprint scanner located beneath the central display allows users to quickly and securely access personalized features.

And with over-the-air updates, MBUX ensures the latest technology will always be at the user’s fingertips, long after they leave the dealership.

A Modern Sedan for the Modern World

With AI at the helm, the C-Class embraces modern and forward-looking technology as the industry enters a new era of mobility.

The redesigned vehicle maintains the Mercedes-Benz heritage of unparalleled driving dynamics while incorporating intelligent features such as headlights that automatically adapt to the surrounding environment for optimal visibility.

Both the sedan and wagon variants come with plug-in hybrid options that offer more than 60 miles of electric range for a luxurious driving experience that’s also sustainable.

These features, combined with the only AI cockpit available today, will have C-Class drivers feeling like a million bucks.

The post Feelin’ Like a Million MBUX: AI Cockpit Featured in Popular Mercedes-Benz C-Class appeared first on The Official NVIDIA Blog.


New Training Opportunities Now Available Worldwide from NVIDIA Deep Learning Institute Certified Instructors

For the first time ever, the NVIDIA Deep Learning Institute is making its popular instructor-led workshops available to the general public.

With the launch of public workshops this week, enrollment will be open to individual developers, data scientists, researchers and students. NVIDIA is increasing accessibility and the number of courses available to participants around the world. Anyone can learn from expert NVIDIA instructors in courses on AI, accelerated computing and data science.

Previously, DLI workshops were only available to large organizations that wanted dedicated and specialized training for their in-house developers, or to individuals attending GPU Technology Conferences.

But demand for in-depth training has increased dramatically in the last few years. Individuals are looking to acquire new skills and organizations are seeking to provide their workforces with advanced software development techniques.

“Our public workshops provide a great opportunity for individual developers and smaller organizations to get industry-leading training in deep learning, accelerated computing and data science,” said Will Ramey, global head of Developer Programs at NVIDIA. “Now the same expert instructors and world-class learning materials that help accelerate innovation at leading companies are available to everyone.”

The current lineup of DLI workshops for individuals includes:

March 2021

  • Fundamentals of Accelerated Computing with CUDA Python
  • Applications of AI for Predictive Maintenance

April 2021

  • Fundamentals of Deep Learning
  • Applications of AI for Anomaly Detection
  • Fundamentals of Accelerated Computing with CUDA C/C++
  • Building Transformer-Based Natural Language Processing Applications
  • Deep Learning for Autonomous Vehicles – Perception
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Accelerating CUDA C++ Applications with Multiple GPUs
  • Fundamentals of Deep Learning for Multi-GPUs

May 2021

  • Building Intelligent Recommender Systems
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Deep Learning for Industrial Inspection
  • Building Transformer-Based Natural Language Processing Applications
  • Applications of AI for Anomaly Detection

Visit the DLI website for details on each course and the full schedule of upcoming workshops, which is regularly updated with new training opportunities.

Jump-Start Your Software Development

As organizations invest in transforming their workforce to benefit from modern technologies, it’s critical that their software and solutions development teams are equipped with the right skills and tools. In a market where developers with the latest skills in deep learning, accelerated computing and data science are scarce, DLI strengthens their employees’ skillsets through a wide array of course offerings.

The full-day workshops offer a comprehensive learning experience that includes hands-on exercises and guidance from expert instructors certified by DLI. Courses are delivered virtually across many time zones to reach developers worldwide, and are offered in English, Chinese, Japanese and other languages.

Registration fees cover learning materials, instructors and access to fully configured GPU-accelerated development servers for hands-on exercises.

A complete list of DLI courses is available in the DLI course catalog.

Register today for a DLI instructor-led workshop for individuals. Space is limited so sign up early.

For more information, visit the DLI website or email nvdli@nvidia.com.

The post New Training Opportunities Now Available Worldwide from NVIDIA Deep Learning Institute Certified Instructors appeared first on The Official NVIDIA Blog.


Miracle Qure: Founder Pooja Rao Talks Medical Technology at Qure.ai

Pooja Rao, a doctor, data scientist and entrepreneur, wants to make cutting-edge medical care available to communities around the world, regardless of their resources. Her startup, Qure.ai, is doing exactly that, with technology that’s used in 150+ healthcare facilities in 27 countries.

Rao is the co-founder and head of research and development at the Mumbai-based company, which was founded in 2016. The company develops AI technology that interprets medical images, with a focus on pulmonary and neurological scans.

Qure.ai is also a member of the NVIDIA Inception startup accelerator program.

“Qure.ai received an NVIDIA Inception Social Innovation award back in 2016,” Rao said in advance of the interview. “This was our first ever external recognition, generating exposure for us in the AI ecosystem. Since then, we’ve been a regular participant at GTC – the world’s premier AI conference. NVIDIA’s commitment to the startup community is unmatched, and I’m always inspired by the new applications of AI that are showcased at the conference.”

Qure.ai technology has proven extremely useful in rapidly diagnosing tuberculosis, a disease that infects millions each year and can cause death if not treated early. By providing fast diagnoses and compensating for shortages of trained healthcare professionals, Qure.ai is saving lives.

The company’s AI is also helping to prioritize critical cases in teleradiology. Teleradiologists remotely analyze large volumes of medical images, with no way of knowing which scans might portray a time-sensitive issue, such as a brain hemorrhage. Qure.ai technology analyzes and prioritizes the scans for them, reducing the time it takes to read critical cases by 97 percent, according to Rao.
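The triage idea can be sketched as a simple priority queue: each incoming scan gets a model-assigned risk score, and readers pull the highest-scoring scan first. A minimal illustration in Python (the function name and scores are hypothetical, not Qure.ai's actual system):

```python
import heapq

def prioritize_scans(scans):
    """Return scan IDs ordered from highest to lowest risk score.

    scans: list of (scan_id, risk_score) pairs, where risk_score is
    a hypothetical model output between 0 and 1.
    """
    # heapq is a min-heap, so negate scores to pop the highest risk first
    heap = [(-score, scan_id) for scan_id, score in scans]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

worklist = [("scan-a", 0.12), ("scan-b", 0.97), ("scan-c", 0.55)]
print(prioritize_scans(worklist))  # → ['scan-b', 'scan-c', 'scan-a']
```

In this sketch, the scan most likely to show a time-sensitive finding jumps to the front of the reading list instead of waiting its turn in arrival order.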

Right now, a major focus is helping fight COVID-19 — Qure.ai’s AI tool qXR is helping monitor disease progression and provide a risk score, aiding triage decisions.

In the future, Rao anticipates eventually building Qure.ai technology into medical imaging machinery to identify areas that need to be photographed more closely.

Key Points From This Episode:

  • Qure.ai has just received its first U.S. FDA approval. Its technology has also been acknowledged by the World Health Organization, which recently endorsed AI as a means to diagnose tuberculosis, especially in areas with fewer healthcare professionals.
  • Because Qure.ai’s mission is to create AI technology that can function in areas with limited resources, it has built systems that have learned to work with patchy internet and images that aren’t of the highest quality.
  • In order to be a global tool, Qure.ai partnered with universities and hospitals to train on data from patients of different genders and ethnicities from around the world.

Tweetables:

“You can have the fanciest architectures, but at some point it really becomes about the quality, the quantity and the diversity of the training data.” — Pooja Rao [7:46]

“I’ve always thought that the point of studying medicine was to be able to improve it — to develop new therapies and technology.” — Pooja Rao [18:57]

You Might Also Like:

How Nuance Brings AI to Healthcare

Nuance, a pioneer of voice recognition technology, is now bringing AI to the healthcare industry. Karen Holzberger, vice president and general manager of Nuance’s Healthcare Diagnostic Solutions business, talks about how their technology is helping physicians make people healthier.

Exploring the AI Startup Ecosystem with NVIDIA Inception’s Jeff Herbst

Jeff Herbst, vice president of business development at NVIDIA and head of NVIDIA Inception, is a fixture of the AI startup ecosystem. He joins the NVIDIA podcast to talk about how Inception is accelerating startups in every industry.

Anthem Could Have Healthcare Industry Singing a New Tune

Health insurance company Anthem is using AI to help patients personalize and better understand their healthcare information. Rajeev Ronanki, senior vice president and chief digital officer at Anthem, talks about how AI makes data as useful as possible for the healthcare giant.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.



The Sky’s No Longer the Limit: GFN Thursday Celebrates Indie Breakthroughs — Valheim, Terraria and More

GFN Thursday is bringing gamers 11 new titles this week, but first, a question: What’s your favorite indie game?

We ask because GeForce NOW supports nearly 300 of them, streaming straight from the cloud. And that’s great for gamers and indie devs.

An Indie Spotlight

PC gaming thrives because of independent studios. Some of the world’s smallest developers have made the biggest and best games. It’s one of the things we love most about PC gaming and why NVIDIA built its Indie Spotlight program.

Developing a great game is challenging enough, so we’re supporting indie devs by helping them reach a wider audience. GeForce NOW connects to game stores where PC games are already offered, so developers can grow their audience while focusing on their creative vision, without worrying about ports.

Teams like Iron Gate AB and Coffee Stain Studios are now able to bring a graphically intense PC game like Valheim to more gamers by streaming the PC version from our cloud servers.

You can build your dream Viking home in Valheim. And with GeForce NOW, you don’t even need a PC.

Valheim asks you to battle, build and conquer your way to a saga worthy of Odin’s patronage. The game’s already a huge success on Steam, and with GeForce NOW, Iron Gate’s team can share their vision with cloud gamers on Mac, Android, iOS and Chromebooks.

“We launched Valheim in early access on Steam, and immediately NVIDIA helped us bring it to more gamers with GeForce NOW. That way, even Mac users can play Valheim,” said Henrik Törnqvist, co-founder of Iron Gate AB.

Motoring Toward the Indie 300

GeForce NOW’s library includes nearly 300 of the most popular and best-loved indie games, with more released every GFN Thursday.

“Streaming Terraria on GeForce NOW makes perfect sense to us. We have always sought out ways to make our game as accessible to as many people as possible. GFN helps accomplish that goal by giving our players the ability to play on any device they want, without any added development work on our side. We’re looking forward to seeing both new and existing players enjoy all that Terraria has to offer, whether that be via the more traditional PC/console/mobile route or streaming from the cloud,” said Ted Murphy, head of business strategy and marketing at Re-Logic.

Terraria, from Re-Logic, is one of the most popular indie hits of all time. It’s also one of the longest-running, best-supported games. Regular content updates since launch have lifted the total item count from 250 to over 5,000.

Using GeForce NOW, members can check in on their Terraria homes on any of their supported devices.

The indie catalog is a great place to discover games you might’ve missed. Monster Train, a strategic roguelike deck-building game with a twist from Shiny Shoe and Good Shepherd Entertainment, was PC Gamer’s Best Card Game of 2020 and is streaming from the cloud.

Members can see even more highlights in the “Indie Spotlight” in-app row, and the complete indie catalog by clicking “See More.”

GeForce NOW’s indies include incredible global success stories. Home Behind 2, from Chinese developer TPP Studio, is a fairly new title that’s rapidly growing in popularity. Released in November by a two-person development team, the game starts streaming on GeForce NOW this week.

Since GFN streams the PC versions of games from popular digital stores, when a promotion happens — like Team17’s Worms Rumble free weekend on Steam, happening through Feb. 21 — members are able to participate, instantly.

And when games take advantage of NVIDIA technology like DLSS, GeForce NOW members can reap the benefits. Recent examples include War Thunder, and — just this week — Mount & Blade II: Bannerlord. It’s yet another way GeForce NOW supports future indie development.

Let’s Play Today

As is GFN Thursday tradition, let’s take a look at this week’s new additions to the GeForce NOW library.

Hellish Quart (day-and-date release on Steam, Feb. 16)

A new Steam release this week, Kubold’s sword-dueling game includes intense physics and motion-captured fencing techniques. 

South Park: The Stick of Truth (Steam)

A brilliant RPG that satirizes the genre, Ubisoft’s first South Park game lets you pal around with Cartman, Stan, Kyle, Kenny and more in search of a twig of limitless power.

Here are the rest of this week’s additions:

What’s your gaming plan this weekend, members? Let us know on Twitter.



GeForce Is Made for Gaming, CMP Is Made to Mine

We are gamers, through and through. We obsess about new gaming features, new architectures, new games and tech. We designed GeForce GPUs for gamers, and gamers are clamoring for more.

Yet NVIDIA GPUs are programmable. And users are constantly discovering new applications for them, from weather simulation and gene sequencing to deep learning and robotics. Mining cryptocurrency is one of them.

With the launch of GeForce RTX 3060 on Feb. 25, we’re taking an important step to help ensure GeForce GPUs end up in the hands of gamers.

Halving Hash Rate

RTX 3060 software drivers are designed to detect specific attributes of the Ethereum cryptocurrency mining algorithm, and limit the hash rate, or cryptocurrency mining efficiency, by around 50 percent.

That only makes sense. Our GeForce RTX GPUs introduce cutting-edge technologies — such as RTX real-time ray-tracing, DLSS AI-accelerated image upscaling technology, Reflex super-fast response rendering for the best system latency, and many more — tailored to meet the needs of gamers and those who create digital experiences.

To address the specific needs of Ethereum mining, we’re announcing the NVIDIA CMP, or Cryptocurrency Mining Processor, product line for professional mining.

CMP products — which don’t do graphics — are sold through authorized partners and optimized for the best mining performance and efficiency. They don’t meet the specifications required of a GeForce GPU and, thus, don’t impact the availability of GeForce GPUs to gamers.

For instance, CMP products lack display outputs, enabling improved airflow while mining so they can be more densely packed. They also have a lower peak core voltage and frequency, which improves mining power efficiency.

Creating tailored products for customers with specific needs delivers the best value for customers. With CMP, we can help miners build the most efficient data centers while preserving GeForce RTX GPUs for gamers.

