Feelin’ Like a Million MBUX: AI Cockpit Featured in Popular Mercedes-Benz C-Class

It’s hard not to feel your best when your car makes every commute a VIP experience.

This week, Mercedes-Benz launched the redesigned C-Class sedan and C-Class wagon, packed with new features for the next generation of driving. Both models prominently feature the latest MBUX AI cockpit, powered by NVIDIA, delivering an intelligent user interface for daily driving.

The newest MBUX system debuted with the flagship S-Class sedan in September. With the C-Class, the system is now in Mercedes-Benz’s most popular model in the mid-size sedan segment: the automaker has sold 10.5 million C-Class vehicles since the model was first introduced, and one in every seven Mercedes-Benz vehicles sold is a member of that model line.

NVIDIA and Mercedes-Benz have been working together to drive the future of automotive innovation, from the first-generation MBUX to the upcoming fleet of software-defined vehicles.

This extension of MBUX to such an appealing model is accelerating the adoption of AI into everyday commutes, ushering in a new generation where the car adapts to the driver, not the other way around.

Uncommon Intelligence

With MBUX, the new C-Class sedan and wagon share many of the innovations that have made the S-Class a standout in its segment.

AI cockpits orchestrate crucial safety and convenience features, constantly learning to keep delivering joy to the customer. Similarly, the MBUX system serves as the central nervous system of the vehicle, intelligently networking all its functions.

“MBUX combines so many features into one intelligent user interface,” said Georges Massing, vice president of Digital Vehicle and Mobility at Mercedes-Benz. “It makes life much easier for our customers.”

The new MBUX system makes the cutting edge in graphics, passenger detection and natural language processing seem effortless. Like in the S-Class, the C-Class system features a driver and media display with crisp graphics that are easily understandable at a glance. The “Hey Mercedes” voice assistant has become even sharper, can activate online services, and continuously improves over time.

MBUX also supports biometric identification to keep the car safe and secure. A fingerprint scanner beneath the central display lets users quickly and securely access personalized features.

And with over-the-air updates, MBUX ensures the latest technology will always be at the user’s fingertips, long after they leave the dealership.

A Modern Sedan for the Modern World

With AI at the helm, the C-Class embraces modern and forward-looking technology as the industry enters a new era of mobility.

The redesigned vehicle maintains the Mercedes-Benz heritage of unparalleled driving dynamics while incorporating intelligent features such as headlights that automatically adapt to the surrounding environment for optimal visibility.

Both the sedan and wagon variants come with plug-in hybrid options that offer more than 60 miles of electric range for a luxurious driving experience that’s also sustainable.

These features, combined with the only AI cockpit available today, will have C-Class drivers feeling like a million bucks.

The post Feelin’ Like a Million MBUX: AI Cockpit Featured in Popular Mercedes-Benz C-Class appeared first on The Official NVIDIA Blog.


New Training Opportunities Now Available Worldwide from NVIDIA Deep Learning Institute Certified Instructors

For the first time ever, the NVIDIA Deep Learning Institute is making its popular instructor-led workshops available to the general public.

With the launch of public workshops this week, enrollment will be open to individual developers, data scientists, researchers and students. NVIDIA is increasing accessibility and the number of courses available to participants around the world. Anyone can learn from expert NVIDIA instructors in courses on AI, accelerated computing and data science.

Previously, DLI workshops were only available to large organizations that wanted dedicated and specialized training for their in-house developers, or to individuals attending GPU Technology Conferences.

But demand for in-depth training has increased dramatically in the last few years. Individuals are looking to acquire new skills and organizations are seeking to provide their workforces with advanced software development techniques.

“Our public workshops provide a great opportunity for individual developers and smaller organizations to get industry-leading training in deep learning, accelerated computing and data science,” said Will Ramey, global head of Developer Programs at NVIDIA. “Now the same expert instructors and world-class learning materials that help accelerate innovation at leading companies are available to everyone.”

The current lineup of DLI workshops for individuals includes:

March 2021

  • Fundamentals of Accelerated Computing with CUDA Python
  • Applications of AI for Predictive Maintenance

April 2021

  • Fundamentals of Deep Learning
  • Applications of AI for Anomaly Detection
  • Fundamentals of Accelerated Computing with CUDA C/C++
  • Building Transformer-Based Natural Language Processing Applications
  • Deep Learning for Autonomous Vehicles – Perception
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Accelerating CUDA C++ Applications with Multiple GPUs
  • Fundamentals of Deep Learning for Multi-GPUs

May 2021

  • Building Intelligent Recommender Systems
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Deep Learning for Industrial Inspection
  • Building Transformer-Based Natural Language Processing Applications
  • Applications of AI for Anomaly Detection

Visit the DLI website for details on each course and the full schedule of upcoming workshops, which is regularly updated with new training opportunities.

Jump-Start Your Software Development

As organizations invest in transforming their workforce to benefit from modern technologies, it’s critical that their software and solutions development teams are equipped with the right skills and tools. In a market where developers with the latest skills in deep learning, accelerated computing and data science are scarce, DLI strengthens their employees’ skillsets through a wide array of course offerings.

The full-day workshops offer a comprehensive learning experience that includes hands-on exercises and guidance from expert instructors certified by DLI. Courses are delivered virtually and in many time zones to reach developers worldwide. Courses are offered in English, Chinese, Japanese and other languages.

Registration fees cover learning materials, instructors and access to fully configured GPU-accelerated development servers for hands-on exercises.

A complete list of DLI courses is available in the DLI course catalog.

Register today for a DLI instructor-led workshop for individuals. Space is limited, so sign up early.

For more information, visit the DLI website or email nvdli@nvidia.com.


Miracle Qure: Founder Pooja Rao Talks Medical Technology at Qure.ai

Pooja Rao, a doctor, data scientist and entrepreneur, wants to make cutting-edge medical care available to communities around the world, regardless of their resources. Her startup, Qure.ai, is doing exactly that, with technology that’s used in 150+ healthcare facilities in 27 countries.

Rao is the cofounder and head of research and development at the Mumbai-based company, which started in 2016. The company develops AI technology that interprets medical images, with a focus on pulmonary and neurological scans.

Qure.ai is also a member of the NVIDIA Inception startup accelerator program.

“Qure.ai received an NVIDIA Inception Social Innovation award back in 2016,” Rao said in advance of the interview. “This was our first-ever external recognition, generating exposure for us in the AI ecosystem. Since then, we’ve been a regular participant at GTC – the world’s premier AI conference. NVIDIA’s commitment to the startup community is unmatched, and I’m always inspired by the new applications of AI that are showcased at the conference.”

Qure.ai technology has proven extremely useful in rapidly diagnosing tuberculosis, a disease that infects millions each year and can cause death if not treated early. By providing fast diagnoses and compensating in areas with fewer trained healthcare professionals, Qure.ai is saving lives.

Their AI is also helping to prioritize critical cases in teleradiology. Teleradiologists remotely analyze large volumes of medical images, with no way of knowing which scans might portray a time-sensitive issue, such as a brain hemorrhage. Qure.ai technology analyzes and prioritizes the scans for teleradiologists, reducing the time it takes them to read critical cases by 97 percent, according to Rao.
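The triage idea maps naturally onto a priority queue: every incoming scan gets an urgency score, and the most urgent scan is always read next. A minimal sketch in Python (the `Scan` class, `triage` function and urgency scores are illustrative assumptions, not Qure.ai’s actual system):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Scan:
    urgency: int                        # lower value = more urgent
    scan_id: str = field(compare=False)  # excluded from ordering

def triage(scans):
    """Return scan IDs ordered most-urgent first using a min-heap."""
    heap = list(scans)
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).scan_id)
    return order
```

In this sketch, a scan the model flags as a suspected hemorrhage (urgency 0) jumps ahead of routine studies in the reading queue, which is the behavior Rao describes.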

Right now, a major focus is helping fight COVID-19 — Qure.ai’s AI tool qXR is helping monitor disease progression and provide a risk score, aiding triage decisions.

In the future, Rao anticipates building Qure.ai technology directly into medical imaging machinery to identify areas that need to be imaged more closely.

Key Points From This Episode:

  • Qure.ai has just received its first U.S. FDA approval. Its technology has also been acknowledged by the World Health Organization, which recently officially endorsed AI as a means to diagnose tuberculosis, especially in areas with fewer healthcare professionals.
  • Because Qure.ai’s mission is to create AI technology that can function in areas with limited resources, it has built systems that have learned to work with patchy internet and images that aren’t of the highest quality.
  • In order to be a global tool, Qure.ai partnered with universities and hospitals to train on data from patients of different genders and ethnicities from around the world.

Tweetables:

“You can have the fanciest architectures, but at some point it really becomes about the quality, the quantity and the diversity of the training data.” — Pooja Rao [7:46]

“I’ve always thought that the point of studying medicine was to be able to improve it — to develop new therapies and technology.” — Pooja Rao [18:57]

You Might Also Like:

How Nuance Brings AI to Healthcare

Nuance, a pioneer of voice recognition technology, is now bringing AI to the healthcare industry. Karen Holzberger, vice president and general manager of Nuance’s Healthcare Diagnostic Solutions business, talks about how their technology is helping physicians make people healthier.

Exploring the AI Startup Ecosystem with NVIDIA Inception’s Jeff Herbst

Jeff Herbst, vice president of business development at NVIDIA and head of NVIDIA Inception, is a fixture of the AI startup ecosystem. He joins the NVIDIA podcast to talk about how Inception is accelerating startups in every industry.

Anthem Could Have Healthcare Industry Singing a New Tune

Health insurance company Anthem is using AI to help patients personalize and better understand their healthcare information. Rajeev Ronanki, senior vice president and chief digital officer at Anthem, talks about how AI makes data as useful as possible for the healthcare giant.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


The Sky’s No Longer the Limit: GFN Thursday Celebrates Indie Breakthroughs — Valheim, Terraria and More

GFN Thursday is bringing gamers 11 new titles this week, but first, a question: What’s your favorite indie game?

We ask because GeForce NOW supports nearly 300 of them, streaming straight from the cloud. And that’s great for gamers and indie devs.

An Indie Spotlight

PC gaming thrives because of independent studios. Some of the world’s smallest developers have made the biggest and best games. It’s one of the things we love most about PC gaming and why NVIDIA built its Indie Spotlight program.

Developing a great game is challenging enough, so we’re supporting indie devs by helping them reach a wider audience. GeForce NOW connects to game stores where PC games are already offered, so developers can grow their audience while focusing on their creative vision, without worrying about ports.

Teams like Iron Gate AB and Coffee Stain Studios are now able to bring a graphically intense PC game like Valheim to more gamers by streaming the PC version from our cloud servers.

Valheim on GeForce NOW
You can build your dream Viking home in Valheim. And with GeForce NOW, you don’t even need a PC.

Valheim asks you to battle, build and conquer your way to a saga worthy of Odin’s patronage. The game’s already a huge success on Steam, and with GeForce NOW, Iron Gate’s team can share their vision with cloud gamers on Mac, Android, iOS and Chromebooks.

“We launched Valheim in early access on Steam, and immediately NVIDIA helped us bring it to more gamers with GeForce NOW. That way, even Mac users can play Valheim,” said Henrik Törnqvist, co-founder of Iron Gate AB.

Motoring Toward the Indie 300

GeForce NOW’s library includes nearly 300 of the most-popular and most-loved indie games, with more released every GFN Thursday.

“Streaming Terraria on GeForce NOW makes perfect sense to us. We have always sought out ways to make our game as accessible to as many people as possible. GFN helps accomplish that goal by giving our players the ability to play on any device they want, without any added development work on our side. We’re looking forward to seeing both new and existing players enjoy all that Terraria has to offer, whether that be via the more traditional PC/console/mobile route or streaming from the cloud,” said Ted Murphy, head of business strategy and marketing at Re-Logic.

Terraria, from Re-Logic, is one of the most popular indie hits of all time. It’s also one of the longest-running, best-supported games. Regular content updates since launch have lifted the total item count from 250 to over 5,000.

Terraria on GeForce NOW
Using GeForce NOW, members can check in on their Terraria homes on any of their supported devices.

The indie catalog is a great place to discover games you might’ve missed. Monster Train, from Shiny Shoe and Good Shepherd Entertainment, a strategic roguelike deck building game with a twist, was PC Gamer’s Best Card Game of 2020 and is streaming from the cloud.

Indie Games on GeForce NOW
Members can see even more highlights in the “Indie Spotlight” in-app row, and the complete indie catalog by clicking “See More.”

GeForce NOW’s indies include incredible global success stories. Chinese developer TPP Studio’s Home Behind 2 is a fairly new indie title that’s rapidly growing in popularity. The game, released in November by a two-person development team, starts streaming on GeForce NOW this week.

Since GFN streams the PC versions of games from popular digital stores, when a promotion happens — like Team17’s Worms Rumble free weekend on Steam, happening through Feb. 21 — members are able to participate, instantly.

And when games take advantage of NVIDIA technology like DLSS, GeForce NOW members can reap the benefits. Recent examples include War Thunder, and — just this week — Mount & Blade II: Bannerlord. It’s yet another way GeForce NOW supports future indie development.

Let’s Play Today

As is GFN Thursday tradition, let’s take a look at this week’s new additions to the GeForce NOW library.

Hellish Quart (day-and-date release on Steam, Feb. 16)

A new Steam release this week, Kubold’s sword-dueling game includes intense physics and motion-captured fencing techniques. 

South Park: The Stick of Truth (Steam)

A brilliant RPG that satirizes the genre, Ubisoft’s first South Park game lets you pal around with Cartman, Stan, Kyle, Kenny and more in search of a twig of limitless power.

Here are the rest of this week’s additions:

What’s your gaming plan this weekend, members? Let us know on Twitter.


GeForce Is Made for Gaming, CMP Is Made to Mine

We are gamers, through and through. We obsess about new gaming features, new architectures, new games and tech. We designed GeForce GPUs for gamers, and gamers are clamoring for more.

Yet NVIDIA GPUs are programmable. And users are constantly discovering new applications for them, from weather simulation and gene sequencing to deep learning and robotics. Mining cryptocurrency is one of them.

With the launch of GeForce RTX 3060 on Feb. 25, we’re taking an important step to help ensure GeForce GPUs end up in the hands of gamers.

Halving Hash Rate

RTX 3060 software drivers are designed to detect specific attributes of the Ethereum cryptocurrency mining algorithm, and limit the hash rate, or cryptocurrency mining efficiency, by around 50 percent.

That only makes sense. Our GeForce RTX GPUs introduce cutting-edge technologies tailored to the needs of gamers and those who create digital experiences, such as real-time ray tracing, DLSS AI-accelerated image upscaling and Reflex technology for super-fast system response.

To address the specific needs of Ethereum mining, we’re announcing the NVIDIA CMP, or Cryptocurrency Mining Processor, product line for professional mining.

CMP products — which don’t do graphics — are sold through authorized partners and optimized for the best mining performance and efficiency. They don’t meet the specifications required of a GeForce GPU and, thus, don’t impact the availability of GeForce GPUs to gamers.

For instance, CMP products lack display outputs, enabling improved airflow while mining so they can be more densely packed. They also have a lower peak core voltage and frequency, which improves mining power efficiency.

Creating tailored products for customers with specific needs delivers the best value for customers. With CMP, we can help miners build the most efficient data centers while preserving GeForce RTX GPUs for gamers.


A Capital Calculator: Upstart Credits AI with Advancing Loans

With two early hits and the promise of more to come, it feels like a whole new ballgame in lending for Grant Schneider.

The AI models he helped create as vice president of machine learning for Upstart are approving more personal loans at lower interest rates than the rules traditional banks use to gauge credit worthiness.

What’s more, he’s helping the Silicon Valley startup, now one of the newest public companies in the U.S., pioneer a successful new hub of AI development in Columbus, Ohio.

A Mentor in the Midwest

Schneider’s career has ridden an AI rocket courtesy of two simple twists of fate.

“In the 2009 downturn, I was about to graduate from Ohio State in finance and there were no finance jobs, but a mentor convinced me to take some classes in statistics,” he said.

He wound up getting a minor, a master’s and then a Ph.D. in the field in 2014, just as machine learning was emerging as the hottest thing in computing.

“Then I read about Upstart in a random news article, sent them a cold email and got a response — I was blown away by the team,” he said.

A Breakthrough with Big Data

Schneider signed on as a data scientist, experimenting with ways to process online loan requests from the company’s website. He trained AI models on publicly available datasets while the startup slowly curated its own private trove of data.

The breakthrough came with the first experiment training a model on Upstart’s own data. “Overnight our approval rates nearly doubled … and over time it became clear we were actually moving the needle in improving access to credit,” he said.

As the business grew, Upstart gathered more data. That data helped make models more accurate so it could extend credit to more borrowers at lower rates. And that attracted more business.

Riding the Virtuous Cycle of AI

The startup found itself on a flywheel it calls the virtuous cycle of AI.

“One of the coolest parts of working on AI models is they directly drive the interest rates we can offer, so as we get better at modeling we extend access to credit — that’s a powerful motivator for the team,” he said.

Borrowers like it, too. More than 620,000 of them were approved by Upstart’s models to get a total $7.8 billion in personal loans so far, about 27 percent more than would’ve been approved by traditional credit models, at interest rates 16 percent below average, according to a study from the U.S. Consumer Financial Protection Bureau.

The figures span all demographic groups, regardless of age, race or ethnicity. “Our AI models are getting closer to the truth of credit worthiness than traditional methods, and that means there should be less bias,” Schneider said.

Betting on the Buckeyes

As it grew, the Silicon Valley company sought a second location where it could expand its R&D team. A study showed the home of Schneider’s alma mater could be a good source of tech talent, so the Ohio State grad boomeranged back to the Midwest.

Columbus exceeded expectations even for a bullish Schneider. What was planned as a 140-person office within a few years has already grown to nearly 250 people, primarily in AI, software engineering and operations, with plans to double to 500 soon.

“Having seen the company when it was 20 people in a room below a dentist’s office, that’s quite a change,” Schneider said.

GPUs Slash Test Time

Upstart has experience with nearly a dozen AI modeling techniques and nearly as many use cases. These days neural networks and gradient-boosted trees are driving most of the gains.

The models track as many as 1,600 variables across data from millions of transactions. So Upstart can use billions of data points to test competing models.

“At one point, these comparisons took more than a day to run on a CPU, but our research found we could cut that down by a factor of five by porting the work to GPUs,” Schneider said.

These days, Upstart trains and evaluates new machine-learning models in a few hours instead of days.

The Power of Two

Looking ahead, the company’s researchers are experimenting with NVIDIA RAPIDS, libraries that quickly move data science jobs to GPUs.

Schneider gives a glowing report of the “customer support on steroids” his team gets from solution architects at NVIDIA.

“It’s so nice for our research team to have experts helping us solve our problems. Having a proactive partner who understands the technology’s inner workings frees us up to focus on interesting business problems and turn around model improvements that affect our end users,” he said.

Early Innings for AI Banking

As a startup, the company built and tested models on GPU-powered laptops. These days it uses the cloud to handle its scaled up AI work, but Schneider sees the potential for another boomerang in the future with some work hosted on the company’s own systems.

Despite its successful IPO in December, it’s still early innings for Upstart. For example, the company started offering auto loans in September.

Going public amid a global pandemic “was a very surreal and exciting experience and a nice milestone validating years of work we’ve put in, but we’re still early in this company’s lifecycle and the most exciting things are still ahead of us,” he said. “We’re still far from perfectly predicting the future, but that’s what we’re aiming at,” he added.

Visit NVIDIA’s financial services industry page to learn more.


Fetching AI Data: Researchers Get Leg Up on Teaching Dogs New Tricks with NVIDIA Jetson

AI is going to the dogs. Literally.

Colorado State University researchers Jason Stock and Tom Cavey have published a paper on an AI system to recognize and reward dogs for responding to commands.

The graduate students in computer science trained image classification networks to determine whether a dog is sitting, standing or lying. If a dog responds to a command by adopting the correct posture, the machine dispenses it a treat.

The duo relied on the NVIDIA Jetson edge AI platform for real-time trick recognition and treats.

Stock and Cavey see their prototype system as a dog trainer’s aid — it handles the treats — or a way to school dogs on better behavior at home.

“We’ve demonstrated the potential for a future product to come out of this,” Stock said.

Fetching Dog Training Data

The researchers needed to fetch dog images that exhibited the three postures. They found the Stanford Dogs dataset had more than 20,000 images in a variety of positions and sizes, requiring preprocessing. They wrote a program to help quickly label them.

To refine the model, they applied features of dogs from ImageNet to enable transfer learning. Next, they applied post-training and optimization techniques to boost the speed and reduce model size.

For optimizations, they tapped into NVIDIA’s JetPack SDK on Jetson, which offered an easy way to get up and running quickly and to access the TensorRT and cuDNN libraries, Stock said. NVIDIA TensorRT optimization libraries offered “significant improvements in speed,” he added.

Tapping into the university’s computing system, Stock trained the model overnight on two 24GB NVIDIA RTX 6000 GPUs.

“The RTX GPU is a beast — with 24GB of VRAM, the entire dataset can be loaded into memory,” he said. “That makes the entire process way faster.”

Deployed Models on Henry

The researchers tested their models on Henry, Cavey’s Australian Shepherd.

They achieved model accuracy of up to 92 percent in tests and split-second inference at nearly 40 frames per second.

Powered by the NVIDIA Jetson Nano, the system makes real-time decisions on dog behaviors and reinforces positive actions with a treat, transmitting a signal to a servo motor to release a reward.
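The dispensing logic can be sketched as a simple debounce over the classifier’s per-frame outputs: only release a treat once the target posture has been held for several consecutive frames, so a single noisy frame doesn’t trigger the motor. This is an illustrative guess at the control loop, not the logic from the paper:

```python
def treat_controller(frames, required=5):
    """Given per-frame posture labels, return the frame indices at
    which a treat is dispensed. A treat is released only after the
    commanded posture ("sitting") is held for `required` consecutive
    frames, filtering out single-frame classifier noise."""
    streak = 0
    dispensed = []
    for i, posture in enumerate(frames):
        streak = streak + 1 if posture == "sitting" else 0
        if streak == required:
            dispensed.append(i)   # here the real system signals the servo
            streak = 0            # reset so one hold yields one treat
    return dispensed
```

At nearly 40 frames per second, requiring five consecutive frames still means the reward arrives within about an eighth of a second of the dog settling into the posture.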

“We looked at Raspberry Pi and Coral but neither was adequate, and the choice was obvious for us to use Jetson Nano,” said Cavey.

Biting into Explainable AI 

Explainable AI helps provide transparency about the makeup of neural networks. It’s becoming more common in the financial services industry as a way to understand fintech models. Stock and Cavey included model interpretation in their paper to provide explainable AI for the pet industry.

They do this with images from the videos that show the posture analysis. One set of images relies on GradCAM, a common technique for displaying where a convolutional neural network model is focused. Another set explains the model using Integrated Gradients, which attributes the model’s prediction to individual pixels.
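Integrated Gradients attributes a model’s output to its inputs by accumulating gradients along a straight path from a baseline input to the actual input. A self-contained numerical sketch with a toy function (finite-difference gradients stand in for a real framework’s autograd; the function and step counts are illustrative):

```python
def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    """Approximate Integrated Gradients of scalar function f at x
    relative to baseline, via a midpoint Riemann sum over the
    straight-line path and forward-difference gradients."""
    n = len(x)
    attributions = [0.0] * n
    for k in range(1, steps + 1):
        alpha = (k - 0.5) / steps  # midpoint rule
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        for i in range(n):
            bumped = list(point)
            bumped[i] += eps
            grad_i = (f(bumped) - f(point)) / eps
            attributions[i] += grad_i / steps
    # Scale the averaged gradient by the input displacement.
    return [(xi - b) * a for (xi, b), a in zip(zip(x, baseline), attributions)]
```

A handy sanity check is the completeness property: the attributions sum to f(x) - f(baseline), so nothing in the prediction is left unexplained.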

The researchers said it was important to create a trustworthy and ethical component of the AI system for trainers and general users. Otherwise, there’s no way to explain your methodology should it come into question.

“We can explain what our model is doing, and that might be helpful to certain stakeholders — otherwise how can you back up what your model is really learning?” said Cavey.

The NVIDIA Deep Learning Institute offers courses in computer vision and the Jetson Nano.


GFN Thursday Shines Ray-Traced Spotlight on Sweet Six-Pack of RTX Games

If Wednesday is hump day, GFN Thursday is the new official beginning of the weekend. We have a great lineup of games streaming from the cloud this week, with more details on that below.

This GFN Thursday also spotlights some of the games using NVIDIA RTX tech, including a sweet six-pack that you’ll want to check out (okay, you’ve probably already played Cyberpunk 2077, but you get the idea).

What Is RTX?

NVIDIA RTX GPUs introduced real-time ray tracing and AI to PC gaming.

Ray tracing provides realistic lighting by simulating the physical behavior of light, adding cinematic-quality rendering to scenes in a game. Normally these effects are computationally expensive, but RTX GPUs’ dedicated hardware allows for ray-tracing acceleration in real time.
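At its core, ray tracing repeatedly asks where a ray intersects scene geometry, and RTX hardware accelerates exactly this kind of test. A toy ray-sphere intersection in pure Python (an illustration of the underlying math, not NVIDIA’s implementation):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along a unit-length ray to the nearest
    sphere intersection, or None if the ray misses the sphere.
    Solves the quadratic t^2 + b*t + c = 0 (a == 1 for a
    normalized direction)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearest of the two roots
    return t if t >= 0 else None
```

A renderer fires millions of such rays per frame (from the camera, toward lights, off reflective surfaces), which is why dedicated hardware for the intersection test matters.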

Cyberpunk 2077 with RTX ON on GeForce NOW

Real-time ray tracing is only one benefit, though. NVIDIA DLSS provides GeForce NOW the performance headroom to maximize visual settings, while maintaining smooth frame rates, and stream those benefits to members.

RTX On Any Device

When a game joins GeForce NOW with RTX support, Founders members can play with cinematic, real-time ray tracing and NVIDIA DLSS support — even on devices that don’t have an RTX-capable GPU. That means RTX ON on your MacBook Air. Love your Chromebook and Android phone but wish they could play the latest games? Thanks to the cloud, they can.

Last week, we announced that the Outriders demo would launch on GeForce NOW later this month. Paired with last month’s announcement of a technical partnership, it means PC gamers, GeForce RTX users and GFN members will get the best possible experience.

Control with RTX ON on GeForce NOW

The GeForce NOW library already supports some of the best examples of RTX ON available in gaming. Here are six amazing games that you can play in unprecedented graphical glory across all of your devices, right now:

  • A landmark PC game, Cyberpunk 2077 presents the vibrant future metropolis of Night City with the power of RTX. Tech inside: ray-traced reflections, ambient occlusion, shadows, diffuse illumination, global illumination and DLSS.
  • Control, a former Game of the Year winner, is another landmark title for ray tracing. Explore other dimensions within the mysterious “Oldest House” with RTX. Tech inside: ray-traced reflections, contact shadows, indirect diffuse lighting and DLSS.
  • The blockbuster title Watch Dogs: Legion with GeForce RTX showcases near-future London in all of its glory. Tech inside: ray-traced reflections and DLSS.
  • A multiple Game of the Year nominee, Metro Exodus immerses gamers in a stark and frightening post-apocalyptic world, brought to life in stunning realism. Tech inside: ray-traced ambient occlusion, diffuse global illumination, emissive lighting and DLSS.
  • For binge players of the six Tomb Raider games on GeForce NOW, keep an eye out for the incredible RTX visuals in Shadow of the Tomb Raider Definitive Edition. Relive Lara Croft’s defining moment as she becomes the Tomb Raider with game-changing RTX technologies. Tech inside: ray-traced shadows and DLSS.
  • Ray tracing has landed in Deliver Us The Moon. Explore the future of space and PC graphics in an unforgettably immersive experience with GeForce RTX. Tech inside: ray-traced shadows, reflections and DLSS.

These RTX ON favorites deserve to be played if they’re sitting in your backlog, and GeForce NOW Founders members can play them with real-time ray tracing across any of their supported devices.

And with GeForce NOW, there’s always more.

Our friends at Cloud Gaming Xtreme take a look at even more of the RTX-enabled games on GeForce NOW.

Let’s Play Today

No GFN Thursday is complete without new additions to the GeForce NOW Library. Here are a few highlights, and check out the full list below.


Everspace (Steam)

Everspace combines fast-paced combat with roguelike elements, great visuals and a captivating story. It takes gamers on a challenging journey through an ever-changing, beautifully crafted universe full of surprises. Shoot, craft and loot your way to victory while the odds are stacked against you.

The Legend of Heroes: Trails of Cold Steel III (Steam)

Experience an epic story developed across three titles and crafted for new and old fans alike. An interactive introduction brings new players up to speed on the ongoing story, so anyone can dive right into the world of Trails of Cold Steel.


South Park: The Fractured But Whole (Steam)

Spend your staycation exploring South Park with Cartman, Kyle, Kenny and Stan by playing through South Park: The Stick of Truth and the newly added sequel, South Park: The Fractured But Whole. From South Park creators Trey Parker and Matt Stone, both games have routinely found their way onto RPG of the Year lists.

In addition, members can look for the following:

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

The post GFN Thursday Shines Ray-Traced Spotlight on Sweet Six-Pack of RTX Games appeared first on The Official NVIDIA Blog.


Startup Couples AI with OR Video to Sharpen Surgeon Performance, Improve Patient Outcomes

AI is teaching cars to make better decisions, so could it do the same for surgeons?

Addressing that question is the mission of Theator, a startup based in Palo Alto, Calif., with an R&D site in Tel Aviv, that’s striving to fuel the nascent revolution in autonomous surgery.

Theator co-founder and Chief Technology Officer Dotan Asselmann said his company has been monitoring advances in self-driving cars as a blueprint for surgery, with a focus on using AI-driven analytics to improve decision-making.

Just as autonomous carmakers want to stop a vehicle before an accident, Theator wants to stop surgeries before any mistakes. And it’s doing this by analyzing video taken of surgeries being performed all over the world.

“Because it can scale, AI can acquire much more experience than any surgeon,” said Asselmann. “Our model has already analyzed thousands of surgeries that one individual physician would never have time to experience themselves.”

Turning Video Into Shared Knowledge

The problem Asselmann and the Theator team have identified is a lack of standardization in the surgical review process. Most surgeons learn their craft from just a few people. In fact, Asselmann said, many pick up the majority of their knowledge from their own experiences.

“Horizontal data sharing between surgeons has been limited — it’s mainly happened at conferences,” he said. “In today’s COVID reality, surgeons’ ability to expand their knowledge at scale is stifled.”

However, while visually aided surgery has taken off and most operating rooms are equipped with cameras that can record procedures, that footage is rarely captured, stored or analyzed in any systematic way. This gap spurred Theator’s inception and fuels its ongoing mission to harness AI and computer vision to power surgery.

The company’s technology is delivered through an edge appliance mounted on an operating room’s laparoscopic cart. From there, the NVIDIA Jetson AGX Xavier platform processes the videos, and Theator’s software anonymizes and then uploads them to its training environments in the Amazon and Azure clouds.

There, the company runs a variety of AI models, with training occurring on a cluster of NVIDIA V100 Tensor Core GPUs, while NVIDIA T4 Tensor Core GPUs handle inference.
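The capture, anonymize and upload flow described above can be sketched in a few lines. This is a minimal illustration only: `CloudStore`, `anonymize` and the `out_of_body` flag are hypothetical stand-ins, not Theator’s actual software or API.

```python
# Minimal sketch of the edge flow: anonymize locally, then upload the
# cleaned footage for cloud training. CloudStore stands in for an
# S3/Azure Blob client.

class CloudStore:
    """In-memory stand-in for a cloud object store."""
    def __init__(self):
        self.objects = {}

    def put(self, key, payload):
        self.objects[key] = payload


def anonymize(frames):
    # Drop frames that could identify a patient, e.g. views captured
    # while the scope is outside the body.
    return [f for f in frames if not f["out_of_body"]]


def process_and_upload(case_id, frames, store):
    clean = anonymize(frames)
    store.put(f"cases/{case_id}", clean)
    return len(clean)
```

On the real appliance, the heavy lifting of decoding and model inference happens on the Jetson device itself, so only the anonymized stream ever leaves the operating room.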

Once a video of a surgery has been processed, surgeons can immediately view highlight packages that focus on the select critical minutes where important decisions were made. Each procedure is added to Theator’s training dataset, thus expanding its models’ understanding.

By applying AI-driven analytics to the videos, Theator’s platform deconstructs the resulting data into steps, events, decisions and milestones. This allows surgeons to conduct post-surgery reviews, where they can compare parts of the procedure with previous identical procedures.
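As an illustration of that deconstruction step, per-frame phase predictions can be grouped into contiguous segments and filtered down to the critical minutes. A sketch under assumed names (the phase labels and `fps` sampling rate are invented for the example; Theator’s actual models and taxonomy are not public):

```python
from itertools import groupby


def segment_phases(frame_labels, fps=1.0):
    """Group per-frame phase predictions into (phase, start_sec, end_sec)."""
    segments, idx = [], 0
    for phase, run in groupby(frame_labels):
        length = sum(1 for _ in run)
        segments.append((phase, idx / fps, (idx + length) / fps))
        idx += length
    return segments


def highlight_clips(segments, critical_phases):
    """Keep only the segments flagged as critical for reviewer playback."""
    return [s for s in segments if s[0] in critical_phases]
```

Feeding the segment boundaries back into a video editor is what yields the highlight packages surgeons review after a procedure.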

The platform can also use previous procedures to provide pre-operative assistance, and it can match videos to identify the cause of post-surgical complications. Future applications include the ability to predict, and potentially reduce, the need for costly and time-consuming interventions resulting from complications.

For example, a patient who develops a post-operative fever may have a bleed that has been left open. In the future, watching a Theator video summary could help a surgeon determine whether there was a problem before performing a scan or corrective procedure.

Ensuring Better Surgical Decisions

Asselmann believes that Theator can remove the cloud component from the equation and achieve the holy grail of real-time surgical support within a year or two – relying solely on its AI algorithms to conduct entire analytics processes on premises during minimally invasive surgery.

While the company’s focus is currently on aiding surgeons, he expects semi-autonomous surgery to be possible within the next five years. And while there will likely always be a surgeon in the loop, Asselmann believes level 3 or 4 automation for surgery will be utilized first and foremost in developing countries, where 5 billion people lack access to adequate surgical care.

Theator has come this far with the help of NVIDIA Inception, an accelerator program for startups in the AI and data science fields. Asselmann credits the program with helping “increase our model training efficiency and reduce compute costs, while guiding the selection of the right hardware for our edge device.”

Through the NVIDIA Inception program, Theator was also provided a private demonstration of the NVIDIA Clara Guardian AI healthcare framework, as well as the NVIDIA DeepStream software development kit, which the startup used to build its high-efficiency real-time video pipeline.

Equipped with NVIDIA’s support, Theator can continue to bring critical context to the decisions surgeons are making every day in operating rooms all over the world.

“Surgeons are inundated with endless parameters flowing from multiple directions during surgery,” said Asselmann. “Our objective is to reduce the cognitive overload and aid them in making the optimal decision at the right time, for the right patient and circumstances.”

“You’ll still call the shots,” he said, “but you’ll be a much better surgeon with AI.”

Feature image credit: David Mark.

The post Startup Couples AI with OR Video to Sharpen Surgeon Performance, Improve Patient Outcomes appeared first on The Official NVIDIA Blog.


The Truck Stops Here: How AI Is Creating a New Kind of Commercial Vehicle

For many, the term “autonomous vehicles” conjures up images of self-driving cars. Autonomy, however, is transforming much more than personal transportation.

Autonomous trucks are commercial vehicles that use AI to automate everything from shipping yard operations to long-haul deliveries. Due to industry pressures from rising delivery demand and driver shortages, as well as straightforward operational domains such as highways, these intelligent trucks may be the first autonomous vehicles to hit public roads at scale.

This technology uses long-range, high-resolution sensors, a range of deep neural networks and high-performance, energy-efficient compute to improve safety and efficiency for everyday logistics.

With the rise of e-commerce and next-day delivery, trucking plays an increasingly vital role in moving the world forward. Trucks transport more than 70 percent of all freight in the U.S. Experts estimate that most essential businesses, such as grocery stores and gas stations, would run out of supplies within days without these vehicles.

These trends come as driver shortages accelerate. The American Trucking Associations reports the industry has struggled with driver supply for the past 15 years and estimates it could be short 160,000 drivers by 2028 if current trends continue. Additionally, limits on the number of hours drivers can work consecutively restrict operation.

Autonomous driving can help ease the strain of trucking demand, as well as increase efficiency, by operating around the clock with lower requirements for human labor. In fact, a recent pilot run by self-driving trucking startup TuSimple and the U.S. Postal Service showed that autonomous trucks repeatedly arrived ahead of schedule on hub-to-hub routes.

And because hub-to-hub routes are constrained to fenced-in areas or highways, most autonomous trucks don’t have to contend with the challenges of urban traffic and neighborhood driving, removing roadblocks to widespread deployment.

This groundbreaking development is possible in part due to centralized, high-performance compute such as the NVIDIA DRIVE platform. With the capability to process the redundant and diverse deep neural networks necessary to operate without human supervision, these vehicles are poised to revolutionize delivery and logistics in the years to come.

Scalable Solutions for the Long Haul

Autonomous driving is a scalable technology. The Society of Automotive Engineers (SAE) defines it in levels that range from assisted driving, where the driver remains in control (level 2), to full self-driving, where no human supervision is required (levels 4/5). AI compute must scale with the capabilities of the self-driving software.

In addition, the system must be able to handle the harsh environments of trucking. The average truck driver travels 100,000 miles a year, compared with the average motorist, who drives about 13,500 miles a year — more than seven times as much.

NVIDIA DRIVE is the only solution that easily scales from level 2 AI-assisted driving to fully autonomous operation while being designed to withstand the wear and tear of long-haul trucking.

This versatility and durability are already being put to work. Companies such as Locomation are leveraging the compute platform for platooning pilots, in which one driver operates a lead truck while a fully autonomous follower truck drives in tandem. Truck manufacturer FAW and startup PlusAI are jointly developing a large-scale autonomous trucking fleet, and TuSimple uses NVIDIA DRIVE in its fleet.

On the Open Road

Beyond improving current trucking practices, autonomous driving technology is opening up entirely new possibilities for the industry.

Volvo Group, one of the largest truck makers in the world, is using NVIDIA DRIVE to train, test and deploy self-driving AI vehicles, targeting public transport, freight transport, refuse and recycling collection, construction, mining, forestry and more.

It’s even envisioning cab-less operation within shipping yards and on industrial roads with the Vera pilot truck.

Self-driving truck startup Einride is also developing cab-less vehicles. It recently announced the next generation of its Pod trucks, powered by NVIDIA DRIVE AGX Orin. These futuristic electric haulers will be able to scale from closed-facility operation to fully autonomous driving on backroads and highways.

With high-performance, energy-efficient AI compute at the core, autonomous trucks will push the limits of what’s possible in delivery and logistics, transforming industries around the world.

The post The Truck Stops Here: How AI Is Creating a New Kind of Commercial Vehicle appeared first on The Official NVIDIA Blog.
