Tooth Tech: AI Takes Bite Out of Dental Slide Misses by Assisting Doctors

Your next trip to the dentist might offer a taste of AI.

Pearl, a West Hollywood startup, provides AI that analyzes dental images to assist in diagnosis. It landed FDA clearance last month, the first such go-ahead for AI in dentistry.

The approval paves the way for its use in clinics across the United States.

“It’s really a first of its kind for dentistry,” said Ophir Tanz, co-founder and CEO of Pearl. “But we also have similar regulatory approvals across 50 countries globally.”

Pearl’s software platform, available in the cloud as a service, enables dentists to run real-time screening of X-rays. Dentists can then review the AI findings and share them with patients to facilitate informed dentist-patient discussions about diagnosis and treatment planning.

Behind the scenes, NVIDIA GPU-driven convolutional neural networks developed by Pearl can spot not just tooth decay but many other dental issues, like cracked crowns and root abscesses that require root canals.

Pearl’s AI delivers expert-level results. The startup’s FDA application showed that, on average, Pearl’s AI spotted 36 percent more pathologies and other dental issues than an average dentist. “And that’s important because in dentistry it’s extremely common and routine to miss a pathology,” said Tanz.

The company’s products include its Practice Intelligence, which enables dental practices to run AI on patient data to discover missed diagnoses and treatment opportunities. Pearl Protect can help screen for dental insurance fraud, waste and abuse, while Claims Review offers automated claims examination.

Pearl, founded in 2019, is a member of the NVIDIA Inception startup program, which provided it access to Deep Learning Institute courses, NVIDIA Developer Forums and technical workshops.

Hatching Dental AI

The son of a dentist, Tanz has a mouthful of a founding tale. The entrepreneur decided to pursue AI for dental radiology after talking shop on a visit with his dentist. A partner at the practice liked the idea so much he jumped on board as a co-founder.

Pearl co-founders Cambron Carter, Kyle Stanley and Ophir Tanz (left to right)

Tanz, who founded tech unicorn GumGum for AI to analyze images, video and text for better contextual advertising, was joined by GumGum colleague Cambron Carter, now CTO and co-founder at Pearl. Dentist Kyle Stanley, co-founder and chief clinical officer, rounds out the trio with clinical experience.

Pearl’s founders targeted a host of conditions commonly addressed in dental clinics. They labeled more than a million images to help train their proprietary CNN models, running on NVIDIA V100 Tensor Core GPUs in the cloud, to identify issues. Before that they had prototyped on local NVIDIA-powered workstations.

Inference is done on cloud-based GPUs, where Pearl’s system synchronizes with the dentist’s real-time and historical radiology data. “The dental vertical is still undergoing a transition to the cloud, and now we’re bringing them AI in the cloud — we represent a wave of technology that will propel the field of dentistry into the future,” said Carter.

Getting FDA approval wasn’t easy, he said. It required completing an expansive clinical trial. Pearl submitted four studies, each involving thousands of X-rays and over 80 expert dentists and radiologists.

Getting Second Opinion for Diagnosis

Pearl offers dentists a product called Second Opinion to aid in the detection of disease in radiographs. Second Opinion can identify dozens of conditions to help validate dentists’ findings, according to Tanz.

“We’re the only company in the world that is able to diagnose pathology and detect disease in an AI-driven manner in the dental practice,” he said. “We’re driving a much more comprehensive diagnosis, and it’s a diagnostic aid for general practitioners.”

Second Opinion is taking root in clinics. Sage Dental, which has more than 60 offices across the East Coast, is a customer, as is Dental 365, which also has more than 60 offices in the region.

“Second Opinion is an extremely important tool for the future of dentistry,” said Cindy Roark, chief clinical officer at Sage. “Dentistry has needed consistency for a very long time. Diagnosis is highly variable, and variability leads to confusion and distrust from patients.”

Boosting Doctor-Patient Rapport 

Dentists review X-rays while patients are in the chair, pointing out any issues as they go. Even for experienced dentists, making sense of the grayscale imagery that forms the basis of most treatment plans can be challenging — only compounded by the many demands on their attention throughout a busy day juggling patients.

For patients, comprehending the indistinct gradations in X-rays that separate healthy tooth structures from unhealthy ones is even harder.

But with AI-aided images, dentists are able to present areas of concern outlined by simple, digestible bounding boxes. This ensures that their treatment plans have a sound basis, while providing patients with a much clearer picture of what exactly is going on in their X-rays.
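As a rough illustration only: AI-assisted viewers of this kind typically attach a condition label, a confidence score and a bounding box to each finding, then surface only the confident ones for review. The sketch below uses hypothetical names, scores and a made-up threshold; it is not Pearl's actual pipeline, which is not public.

```python
# Hypothetical sketch of surfacing detections as reviewable bounding boxes.
# The Detection fields, scores and 0.5 threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    condition: str        # e.g. "caries" or "root abscess"
    confidence: float     # model score in [0, 1]
    box: tuple            # (x, y, width, height) in image pixels

def reviewable_findings(detections, threshold=0.5):
    """Keep detections confident enough to show, most likely first."""
    kept = [d for d in detections if d.confidence >= threshold]
    return sorted(kept, key=lambda d: d.confidence, reverse=True)

detections = [
    Detection("caries", 0.92, (120, 80, 40, 35)),
    Detection("cracked crown", 0.31, (300, 60, 50, 45)),
    Detection("root abscess", 0.76, (210, 150, 30, 30)),
]
findings = reviewable_findings(detections)
# "cracked crown" at 0.31 falls below the threshold and is filtered out.
```

Only the retained, high-confidence findings would then be drawn as overlays for the dentist to confirm or dismiss.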

“You’re able to have a highly visual sort of discussion and paint a visual narrative for patients so that they really start to understand what is going on in their mouth,” said Dr. Stanley.

The post Tooth Tech: AI Takes Bite Out of Dental Slide Misses by Assisting Doctors appeared first on NVIDIA Blog.


GFN Thursday Is Fit for the Gods: ‘God of War’ Arrives on GeForce NOW

The gods must be smiling this GFN Thursday — God of War today joins the GeForce NOW library.

Sony Interactive Entertainment and Santa Monica Studio’s masterpiece is available to stream from GeForce NOW servers, across nearly all devices and at up to 1440p and 120 frames per second for RTX 3080 members.

Get ready to experience Kratos’ latest adventure as part of nine titles joining this week.

The Story of a Generation Comes to the Cloud

This GFN Thursday, God of War (Steam) comes to GeForce NOW.

With his vengeance against the Gods of Olympus years behind him, play as Kratos, now living as a man in the realm of Norse Gods and monsters. Mentor his son, Atreus, to survive a dark, elemental world filled with fearsome creatures and use your weapons and abilities to protect him by engaging in grand and grueling combat.

God of War’s PC port is as much a masterpiece as the original game, and RTX 3080 members can experience it the way its developers intended. Members can explore the dark, elemental world of fearsome creatures at up to 1440p and 120 FPS on PC, and up to 4K on SHIELD TV. The power of AI in NVIDIA DLSS brings every environment to life with phenomenal graphics and uncompromised image quality. Engage in visceral, physical combat with ultra-low latency that rivals even local console experiences.

Streaming from the cloud, you can play one of the best action games on your Mac at up to 1440p or 1600p on supported devices. Or take the action with you by streaming to your mobile device at up to 120 FPS, with up to eight-hour gaming session lengths for RTX 3080 members.

Enter the Norse realm and play God of War today.

More? More.

This week also brings new instant-play free game demos streaming on GeForce NOW.

Squish, bop and bounce around to the rhythms of an electronica soundscape in the demo for Lumote: The Mastermote Chronicles. Give the demo a try for free before adding the full title to your wishlist on Steam.

Experience the innovative games being developed by studios from across the greater China region, which will participate in the Enter the Dragon indie game festival. Starting today, play the Nobody – The Turnaround demo, and look for others to be added in the days ahead.

Finally, get ready for the upcoming launch of Terraformers with the instant-play free demo that went live on GeForce NOW last week.

MotoGP22 on GeForce NOW
Get your zoomies out and be the fastest on the track in the racing game MotoGP22.

In addition, members can look for the following games arriving in full this week:

Finally, in case you hadn’t guessed it before, we bet you can now. Let us know your guess on Twitter or in the comments below.

The post GFN Thursday Is Fit for the Gods: ‘God of War’ Arrives on GeForce NOW appeared first on NVIDIA Blog.


Welcome ‘In the NVIDIA Studio’: A Weekly Celebration of Extraordinary Artists, Their Inspiring Art and Innovative Techniques

Creating content is no longer tethered to paint and stone as mediums, nor to massive studios. Visual art can now be created anywhere, anytime.

But being creative is still challenging and time-consuming. NVIDIA is making artistic workflows easier and faster by giving creators tools that enable them to remain in their flow state.

That’s what NVIDIA Studio is — an ecosystem of creative app optimizations, GPU-accelerated features and AI-powered apps, powered by NVIDIA RTX GPUs and backed by world-class Studio Drivers.

Our new ‘In the NVIDIA Studio’ blog series celebrates creativity everywhere by spotlighting 3D animators, video editors, photographers and more, every week. We’ll showcase their inspirational and thought-provoking work, and detail how creators are using NVIDIA GPUs to go from concept to completion, faster than ever.

The series kicks off with 3D artist Jasmin Habezai-Fekri. Check out her work below, created with Unreal Engine, Adobe Substance 3D and Blender, accelerated by her GeForce RTX 2070 GPU.

Habezai-Fekri Dreams in 3D

‘Old Forgotten Library’ and ‘Shiba Statue’ highlight Habezai-Fekri’s use of vivid colors.

Based in Germany, Habezai-Fekri works in gaming as a 3D environment artist, making props and environments with hand-painted and stylized physically based rendering textures. She revels in creating fantasy and nature-themed scenes, accentuated by big, bold colors.

 

Habezai-Fekri’s passion is creating artwork with whimsical charm, piquing the interest of her audiences while creating a sense of immersion, rounding out her unique flair.

 

One such piece is Bird House — a creative fusion of styles and imagination.

With this piece, Habezai-Fekri was learning the ins and outs of Unreal Engine while trying to replicate “something very 2D-esque in a 3D space, giving it all a very painterly yet next-gen feeling.” Through iteration, she developed her foundational skills and found that a set art direction and visual style gave the piece her own signature.

Prop development for ‘Bird House.’

Habezai-Fekri uses Blender for modeling and ZBrush for her high-poly sculpts, which help bring stylized details into textures and models. The fine details are critical for evoking the real-life emotions she hopes to cultivate. “Creating immersiveness is a huge aspect for me when making my art,” Habezai-Fekri said.

Hand-painted, stylized wood textures and details in ‘Bird House.’

Looking closer reveals Habezai-Fekri’s personal touches in the textures in Bird House — she hand-painted them in Adobe Substance 3D Painter. RTX GPU-accelerated light and ambient occlusion in Substance 3D helps speed up her process by outputting new textures in mere seconds.

 

“Having a hand-painted pass on my textures really enhances the assets and lets me channel that artistic side throughout a heavily technical process,” she said.

“In our industry, with new tools being released so frequently, it’s inevitable to constantly learn and expand your skill set. Being open to that from the start really helps to be more receptive to it.”

Habezai-Fekri’s work often uses vivid colors. To make it look inviting and friendly, she purposely saturates colors, even if the subject matter is not colorful by nature.

Habezai-Fekri also finds inspiration in trying new tools and workflows, particularly when she sees other artists and creatives doing amazing work.

By partnering with creative app developers, the NVIDIA Studio ecosystem regularly gives Habezai-Fekri new tools that help her create faster. For example, RTX-accelerated OptiX ray tracing in Blender’s viewport enables her to enjoy interactive, photorealistic rendering in real time.

RTX GPUs also deliver rendering speeds up to 2.5x faster with Blender Cycles 3.0. This means a lot less waiting and a lot more creating.
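To put a 2.5x rendering speedup in concrete terms, here is a back-of-the-envelope calculation. The frame count and per-frame render time below are invented for illustration, not benchmark figures.

```python
# Illustrative arithmetic: what a 2.5x render speedup saves on a small job.
# The numbers below are made up, not measured benchmarks.
frames = 240                # e.g. a 10-second animation at 24 fps
seconds_per_frame = 50      # hypothetical baseline render time per frame

baseline_minutes = frames * seconds_per_frame / 60   # 200.0 minutes
accelerated_minutes = baseline_minutes / 2.5         # 80.0 minutes
minutes_saved = baseline_minutes - accelerated_minutes

print(minutes_saved)  # prints 120.0
```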

Everything comes together for Habezai-Fekri with the application of final textures and colors in Unreal Engine. NVIDIA RTX GPUs feature advanced capabilities like DLSS, which enhances interactivity of the viewport in Unreal Engine by using AI to upscale frames rendered at lower resolution, while still retaining detail.

Habezai-Fekri works for Airship Syndicate. Previously, she was an artist at Square Enix and ArtStation. View her work on ArtStation, including a new learning course providing project insights.

NVIDIA Studio Resources

Habezai-Fekri is one of the artists spotlighted in the latest Studio Standouts video, “Stunning Art From Incredible Women Artists.”

See more amazing digital art in the video from Yulia Sokolova, Nourhan Ishmai, Ecem Okumus and Laura Escoin.

Learn more about texturing in Substance 3D Painter by exploring artist and Adobe Creative Director Vladimir Petkovic’s series, “From Texturing to Final Render in Adobe Substance Painter.”

Join the growing number of 3D artists collaborating around the globe in real time, and working in multiple apps simultaneously, with NVIDIA Omniverse.

Check back In the NVIDIA Studio every week to discover new featured artists, creative tips and tricks, and the latest NVIDIA Studio news. Follow NVIDIA Studio on Facebook, Twitter and Instagram, subscribe to the Studio YouTube channel and get updates directly in your inbox by joining the NVIDIA Studio newsletter.

The post Welcome ‘In the NVIDIA Studio’: A Weekly Celebration of Extraordinary Artists, Their Inspiring Art and Innovative Techniques appeared first on NVIDIA Blog.


Startup Transforms Meeting Notes With Time-Saving Features

Gil Makleff and Artem Koren are developing AI for meeting transcripts, creating time-savers like shareable highlights of the text that is often TL;DR (too long; didn’t read).

The Sembly founders conceived the idea after years of working in enterprise operational consulting at UMT Consulting Group, which was acquired by Ernst & Young.

“We had an intuition that if AI were applied to those operational conversations and able to make sense of them, the value gains to enterprises could be enormous,” said Koren, chief product officer at Sembly.

Sembly goes far beyond basic transcription, allowing people to skip meetings and receive speaker highlights and key action items for follow-ups.

The New York startup uses proprietary AI models to transcribe and analyze meetings, transforming them into actionable insights. It aims to supercharge teams who want to focus on delivering results rather than spending time compiling notes.

Sembly’s GPU-fueled automatic speech recognition AI can be used with popular video call services such as Zoom, Webex, Microsoft Teams and Google Meet. In a few clicks on the Sembly site, it can be synced to Outlook or Google calendars, or used for calls in progress via email, the web app or the Sembly mobile app.

The service delivers market-leading transcript accuracy and AI-driven analytics, including highlights to pinpoint important discussion topics. It also allows users to zero in on meeting speakers and easily share clips of individual passages with team members, enhancing collaboration.

Sembly, founded in 2019, is a member of the NVIDIA Inception startup program.

Improving Speaker Tracking With NeMo

One of the pain points Sembly addresses in transcripts is what’s known as diarization, or identifying the correct speaker in text, which can be problematic. The company had tried popular diarization systems from major software makers, with disappointing results.

Diarization is a key step in the meeting processing pipeline because many of Sembly’s natural language processing features rely on that text to be properly identified. Its Glance View feature, for instance, can identify key meeting topics and who raised them.

Attributing meeting topics to the wrong person throws a wrench in follow-ups on action items.

Harnessing NVIDIA NeMo — an open source framework for building, training and fine-tuning GPU-accelerated speech and natural language understanding models — provided a significant leap in accuracy.

Using the NeMo conversational AI toolkit for diarization model training, running on NVIDIA A100 GPUs, dramatically improved Sembly’s speaker tracking. Before applying NeMo, the system had an 11 percent diarization error rate. After implementation, the error rate declined to 5 percent.
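For intuition about what those percentages measure, the sketch below computes a simplified, frame-level diarization error: the fraction of audio frames whose predicted speaker disagrees with the reference. Real diarization error rate, as scored by toolkits such as NeMo, also accounts for missed speech, false alarms and speaker-label permutation; this is a teaching simplification with made-up data.

```python
# Simplified frame-level diarization error: the share of frames where the
# predicted speaker label disagrees with the reference. Assumes the two
# label sequences are already time-aligned.
def frame_error_rate(reference, hypothesis):
    assert len(reference) == len(hypothesis)
    wrong = sum(r != h for r, h in zip(reference, hypothesis))
    return wrong / len(reference)

# 20 frames of a two-speaker exchange; the hypothesis switches speakers
# two frames too early, misattributing 2 of 20 frames.
ref = ["A"] * 10 + ["B"] * 10
hyp = ["A"] * 8 + ["B"] * 12

rate = frame_error_rate(ref, hyp)  # 2 / 20 = 0.1
```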

Business Boost Amid Meeting Fatigue

With a shift to fewer face-to-face meetings and more virtual ones, companies are seeking ways to counter online meeting fatigue for employees, said Koren. That’s important for delivering more engaging workplace experiences, he added.

“There’s a concept of ‘meeting tourists’ in large organizations. And this is one of those things that we’re hoping Sembly will help to address,” he said.

Adopting Sembly to easily highlight key points and speakers in transcripts for sharing gives workers more time back in the day, he said. And leaner operational technologies that help companies stay focused on key business objectives offer competitive advantages, said Koren.

For those with bloated calendars who need to juggle overlapping meetings, Sembly can also assist. It can be directed to attend a meeting in the user’s place and come back with a summary and a list of key items, saving time while keeping teams informed.

“Sometimes I’d like to attend two meetings that overlap — with Sembly, now I can,” Koren said.

The post Startup Transforms Meeting Notes With Time-Saving Features appeared first on NVIDIA Blog.


A Night to Behold: Researchers Use Deep Learning to Bring Color to Night Vision

Talk about a bright idea. A team of scientists has used GPU-accelerated deep learning to show how color can be brought to night-vision systems. 

In a paper published this week in the journal PLOS One, a team of researchers at the University of California, Irvine led by Professor Pierre Baldi and Dr. Andrew Browne, describes how they reconstructed color images of photos of faces using an infrared camera. 

The study is a step toward predicting and reconstructing what humans would see using cameras that collect light using imperceptible near-infrared illumination. 

The study’s authors explain that humans see light in the so-called “visible spectrum,” or light with wavelengths of between 400 and 700 nanometers.

Typical night vision systems rely on cameras that collect infrared light outside this spectrum that we can’t see. 

Information gathered by these cameras is then transposed to a display that shows a monochromatic representation of what the infrared camera detects, the researchers explain.

The team at UC Irvine developed an imaging algorithm that relies on deep learning to predict what humans would see using light captured by an infrared camera.

 

Researchers at the University of California, Irvine, aimed to use deep learning to predict visible spectrum images using infrared illumination alone. Source: Browne, et al. 

 

In other words, they’re able to digitally render a scene for humans using cameras operating in what, to humans, would be complete “darkness.” 

To do this, the researchers used a monochromatic camera sensitive to visible and near-infrared light to acquire an image dataset of printed images of faces. 

These images were gathered under multispectral illumination spanning standard visible red, green, blue and infrared wavelengths. 

The researchers then optimized a convolutional neural network with a U-Net-like architecture — a specialized convolutional neural network first developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg — to predict visible spectrum images from near-infrared images.
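The study's figures compare the network against a linear-regression baseline. For intuition about what that baseline does, here is a minimal per-channel version: fit a visible-channel intensity as a linear function of an infrared-channel intensity with ordinary least squares. The data is synthetic and the single-channel setup is a simplification for illustration, not the paper's actual baseline code.

```python
# Minimal ordinary-least-squares fit: predict one visible-channel intensity
# from one infrared-channel intensity. Synthetic data for illustration only.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Synthetic pixels generated so that red = 0.5 * infrared + 10 exactly.
infrared = [0, 40, 80, 120, 160, 200]
red = [0.5 * v + 10 for v in infrared]

slope, intercept = fit_line(infrared, red)
# On this noiseless data the fit recovers slope 0.5 and intercept 10.0.
```

A deep network like the U-Net can capture spatial context and nonlinear relationships that a per-pixel linear map of this kind cannot, which is why it produces the sharper reconstructions.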

On the left, visible spectrum ground truth image composed of red, green and blue input images. On the right, predicted reconstructions for UNet-GAN, UNet and linear regression using three infrared input images. Source: Browne, et al. 

The system was trained using NVIDIA GPUs and 140 images of human faces for training, 40 for validation and 20 for testing.  

The result: the team successfully recreated color portraits of people taken by an infrared camera in darkened rooms. In other words, they created systems that could “see” color images in the dark.  

To be sure, these systems aren’t yet ready for general-purpose use. They would need to be trained to predict the color of different kinds of objects, such as flowers or faces.

Nevertheless, the study could one day lead to night vision systems able to see color, just as we do in daylight, or allow scientists to study biological samples sensitive to visible light.

Featured image source: Browne, et al. 

The post A Night to Behold: Researchers Use Deep Learning to Bring Color to Night Vision appeared first on NVIDIA Blog.


GFN Thursday Gears Up With More Electronic Arts Games on GeForce NOW

This GFN Thursday delivers more gr-EA-t games as two new titles from Electronic Arts join the GeForce NOW library.

Gamers can now enjoy Need for Speed HEAT and Plants vs. Zombies Garden Warfare 2 streaming from GeForce NOW to underpowered PCs, Macs, Chromebooks, SHIELD TV and mobile devices.

It’s all part of the eight total games coming to the cloud, starting your weekend off right.

Newest Additions From Electronic Arts

Get ready to play more beloved hits from EA this week.

Need For Speed Heat on GeForce NOW
The Electronic Arts collection expands this week with two new titles streaming on GeForce NOW, including Need for Speed HEAT.

Hustle by day and risk it all at night in Need for Speed HEAT (Steam and Origin). Compete and level up in the daytime race scene, then use the prize money to customize cars and ramp up the action in illicit, nighttime street races that build your reputation as you go up against the Cops swarming the city.

Ready the Peashooters and prepare for plant-based battle against zombies in Plants vs Zombies Garden Warfare 2 (Origin). This time, bring the fight to the zombies and help the plants reclaim a zombie-filled Suburbia from the clutches of Dr. Zomboss.

Stream these new additions and more Electronic Arts games across all your devices, with unrivaled performance from the cloud and latency so low it feels local, by upgrading to a GeForce NOW RTX 3080 membership.

All of the Games Coming This Week

Ranch Simulator on GeForce NOW
Yippee ki-yay, gamers. Stream the immersive open-world title Ranch Simulator on GeForce NOW today.

In addition, members can look for the eight total new games ready to stream this week:

And, in case you missed it, members have been loving the new, instant-play free game demos streaming on GeForce NOW. Try out some of the hit titles streaming on the service and the top tech that comes with Priority and RTX 3080 membership features, like RTX in Ghostrunner and DLSS in Chorus, before purchasing the full PC versions.

Jump in with the newest instant play free demo arriving this week with Terraformers: First Steps on Mars – the prologue to the game Terraformers – before the full game releases next week.

Speaking of jumping in, we’ve got a question to start your weekend gaming off. Let us know your answer on Twitter or in the comments below.

The post GFN Thursday Gears Up With More Electronic Arts Games on GeForce NOW appeared first on NVIDIA Blog.


MLCommons’ David Kanter, NVIDIA’s David Galvez on Improving AI with Publicly Accessible Datasets

In deep learning and machine learning, having a large enough dataset is key to training a system and getting it to produce results.

So what does a ML researcher do when there just isn’t enough publicly accessible data?

Enter the MLCommons Association, a global engineering consortium with the aim of making ML better for everyone.

MLCommons recently announced the general availability of the People’s Speech Dataset, a 30,000-hour English-language conversational speech dataset, and the Multilingual Spoken Words Corpus, an audio speech dataset with more than 340,000 keywords in 50 languages, to help advance ML research.

On this episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with David Kanter, founder and executive director of MLCommons, and NVIDIA senior AI developer technology engineer David Galvez, about the democratization of access to speech technology and how MLCommons is helping advance the research and development of machine learning for everyone.

You Might Also Like

Take Note: Otter.ai CEO Sam Liang on Bringing Live Captions to a Meeting Near You

Remote work has made us more reliant on virtual conferencing platforms, including Zoom, Skype and Microsoft Teams. Sam Liang, CEO of Otter.ai, explains how his company enhances the virtual meeting experience for all users.

Lilt CEO Spence Green Talks Removing Language Barriers in Business

When large organizations require translation services, there’s no room for the amusing errors often produced by automated apps. Lilt CEO Spence Green aims to correct that using a human-in-the-loop process to achieve fast, accurate and affordable translation.

How Audio Analytic Is Teaching Machines to Listen

From active noise cancellation to digital assistants that are always listening for your commands, audio is perhaps one of the most important but often overlooked aspects of modern technology in our daily lives. Dr. Chris Mitchell, CEO and founder of Audio Analytic, discusses the challenges, and the fun, involved in teaching machines to listen.

Subscribe to the AI Podcast: Now available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast Better: Have a few minutes to spare? Fill out our listener survey.

The post MLCommons’ David Kanter, NVIDIA’s David Galvez on Improving AI with Publicly Accessible Datasets appeared first on NVIDIA Blog.


Rock On: Scientists Use AI to Improve Sequestering Carbon Underground

A team of scientists has created a new AI-based tool to help lock up greenhouse gases like CO2 in porous rock formations faster and more precisely than ever before.

Carbon capture technology, also referred to as carbon sequestration, is a climate change mitigation method that redirects CO2 emitted from power plants back underground. While doing so, scientists must avoid excessive pressure buildup caused by injecting CO2 into the rock, which can fracture geological formations and leak carbon into aquifers above the site, or even into the atmosphere.

A new neural operator architecture named U-FNO simulates pressure levels during carbon storage in a fraction of a second while doubling accuracy on certain tasks, helping scientists find optimal injection rates and sites. It was unveiled this week in a study published in Advances in Water Resources, with co-authors from Stanford University, California Institute of Technology, Purdue University and NVIDIA.

Carbon capture and storage is one of the few methods that industries such as refining, cement and steel could use to decarbonize and achieve emission-reduction goals. More than a hundred carbon capture and storage facilities are under construction worldwide.

U-FNO will be used to accelerate carbon storage predictions for ExxonMobil, which funded the study.

“Reservoir simulators are intensive computer models that engineers and scientists use to study multiphase flows and other complex physical phenomena in the subsurface geology of the earth,” said James V. White, subsurface carbon storage manager at ExxonMobil. “Machine learning techniques such as those used in this work provide a robust pathway to quantifying uncertainties in large-scale subsurface flow models such as carbon capture and sequestration and ultimately facilitate better decision-making.”

How Carbon Storage Scientists Use Machine Learning

Scientists use carbon storage simulations to select the right injection sites and rates, control pressure buildup, maximize storage efficiency and ensure the injection activity doesn’t fracture the rock formation. For a successful storage project, it’s also important to understand the carbon dioxide plume — the spread of CO2 through the ground.

Traditional simulators for carbon sequestration are time-consuming and computationally expensive. Machine learning models provide similar accuracy levels while dramatically shrinking the time and costs required.

Based on the U-Net neural network and Fourier neural operator architecture, known as FNO, U-FNO provides more accurate predictions of gas saturation and pressure buildup. Compared to using a state-of-the-art convolutional neural network for the task, U-FNO is twice as accurate while requiring just a third of the training data.

“Our machine learning method for scientific modeling is fundamentally different from standard neural networks, where we typically work with images of a fixed resolution,” said paper co-author Anima Anandkumar, director of machine learning research at NVIDIA and Bren professor in the Computing + Mathematical Sciences Department at Caltech. “In scientific modeling, we have varying resolutions depending on how and where we sample. Our model can generalize well across different resolutions without the need for re-training, achieving enormous speedups.”

Trained U-FNO models are available in a web application to provide real-time predictions for carbon storage projects.

“Recent innovations in AI, with techniques such as FNOs, can accelerate computations by orders of magnitude, taking an important step in helping scale carbon capture and storage technologies,” said Ranveer Chandra, managing director of research for industry at Microsoft and collaborator on the Northern Lights initiative, a full-scale carbon capture and storage project in Norway. “Our model-parallel FNO can scale to realistic 3D problem sizes using the distributed memory of many NVIDIA Tensor Core GPUs.”

Novel Neural Operators Accelerate CO2 Storage Predictions 

U-FNO enables scientists to simulate how pressure levels will build up and where CO2 will spread throughout 30 years of injection. GPU acceleration with U-FNO makes it possible to run these 30-year simulations in a hundredth of a second on a single NVIDIA A100 Tensor Core GPU, instead of 10 minutes using traditional methods.
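A quick sanity check of that speedup, using the approximate figures quoted above:

```python
# Back-of-the-envelope check of the quoted speedup: ~10 minutes per 30-year
# simulation with a traditional solver versus ~0.01 seconds with U-FNO.
traditional_seconds = 10 * 60   # roughly 10 minutes
ufno_seconds = 0.01             # roughly a hundredth of a second

speedup = traditional_seconds / ufno_seconds
print(round(speedup))  # prints 60000
```

That is on the order of a 60,000x reduction in wall-clock time per simulation, which is what makes sweeping over many candidate injection sites practical.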

With GPU-accelerated machine learning, researchers can now also rapidly simulate many injection locations. Without this tool, choosing sites is like a shot in the dark.

The U-FNO model focuses on modeling plume migration and pressure during the injection process — when there’s the highest risk of overshooting the amount of CO2 injected. It was developed using NVIDIA A100 GPUs in the Sherlock computing cluster at Stanford.

“For net zero to be achievable, we will need low-emission energy sources as well as negative-emissions technologies, such as carbon capture and storage,” said Farah Hariri, a collaborator on U-FNO and technical lead on climate change mitigation projects for NVIDIA’s Earth-2, which will be the world’s first AI digital twin supercomputer. “By applying Fourier neural operators to carbon storage, we showed how AI can help accelerate the process of climate change mitigation. Earth-2 will leverage those techniques.”

Read more about U-FNO on the NVIDIA Technical Blog.

Earth-2 will use FNO-like models to tackle challenges in climate science and contribute to global climate change mitigation efforts. Learn more about Earth-2 and AI models used for climate science in NVIDIA founder and CEO Jensen Huang’s GTC keynote address:

The post Rock On: Scientists Use AI to Improve Sequestering Carbon Underground appeared first on NVIDIA Blog.


Try This Out: GFN Thursday Delivers Instant-Play Game Demos on GeForce NOW

GeForce NOW is about bringing new experiences to gamers.

This GFN Thursday introduces game demos to GeForce NOW. Members can now try out some of the hit games streaming on the service before purchasing the full PC version — including some finalists from the 2021 Epic MegaJam.

Plus, look for six games ready to stream from the GeForce NOW library starting today.

In addition, the 2.0.39 app update is rolling out for PC and Mac with a few fixes to improve the experience.

Dive In to Cloud Gaming With Demos

GeForce NOW supports new ways to play and is now offering free game demos to help gamers discover titles to play on the cloud — easy to find in the “Instant Play Free Demos” row.

Gamers can stream these demos before purchasing the full PC versions from popular stores like Steam, Epic Games Store, Ubisoft Connect, GOG and more. The demos are hosted on GeForce NOW, allowing members to check them out instantly — just click to play!

The first wave of demos, with more to come, includes: Chorus, Ghostrunner, Inscryption, Diplomacy Is Not an Option and The RiftBreaker Prologue.

Members can even get a taste of the full GeForce NOW experience with fantastic Priority and RTX 3080 membership features like ray tracing in Ghostrunner and DLSS in Chorus.

On top of these great titles, demos of some finalists from the 2021 Epic MegaJam will be brought straight from Unreal Engine to the cloud.

Zoom and nyoom to help BotiBoi gather as many files as possible and upload them to the server before the inevitable system crash in Boti Boi by the Purple Team. Or assist a user by keeping files organized for fast access while seeking out beeBots in Microwasp Seekers by Partly Atomic.

Keep an eye out for updates on demos coming to the cloud on GFN Thursdays and in the GeForce NOW app.

Get Your Game On 

TUNIC on GeForce NOW
Play as a small fox on a big adventure in TUNIC, now streaming through both Steam and Epic Games Store. 

Ready to jump into a weekend full of gaming?

GFN Thursday always comes with a new batch of games joining the GeForce NOW library. Check out these six titles ready to stream this week:

Finally, last week GFN Thursday announced that Star Control: Origins would be coming to the cloud later in April. The game is now available to stream on GeForce NOW, ahead of schedule.

With all these great games available to try out, we’ve got a question for you this week. Let us know on Twitter or in the comments below.

The post Try This Out: GFN Thursday Delivers Instant-Play Game Demos on GeForce NOW appeared first on NVIDIA Blog.


Fast and Luxurious: The Intelligent NIO ET7 EV Built on NVIDIA DRIVE Orin Arrives

Meet the electric vehicle that’s quick-witted and fully outfitted.

Last week, NIO began deliveries of its highly anticipated ET7 fully electric vehicle, in Hefei, China. The full-size luxury sedan is the first production vehicle built on the NIO Adam supercomputer, powered by four NVIDIA DRIVE Orin systems-on-a-chip (SoCs).

The production launch of its flagship sedan follows a blockbuster year for NIO. In 2021, the EV maker delivered 91,429 vehicles, more than quadrupling sales from 2019.

The software-defined ET7 bounds past the capabilities of current models, boasting more than 620 miles of battery range and an impressive 0-to-60 mph time of under 4 seconds.

With the DRIVE Orin-powered Adam, the ET7’s centralized, high-performance compute architecture powers advanced AI features and allows continuous over-the-air upgrades. As a result, the intelligent vehicle redefines the customer experience, with an AI-enhanced cockpit and point-to-point autonomous driving capabilities.

Sensors on the bottom of the sleek ET7 detect the road surface in real time so the vehicle can automatically adjust the suspension, creating a smoother, more luxurious ride.

The opulent interior and immersive augmented reality digital cockpit inside the sedan interact with the driver through voice recognition and driver monitoring. The sedan comes standard with over 100 configurations for comfort, safety and smart technologies.

Peak Performance

The ET7 outperforms in both drive quality and AI compute.

The NIO Adam supercomputer is one of the most powerful platforms to run in a vehicle, achieving more than 1,000 trillion operations per second (TOPS) of performance.

At its core is DRIVE Orin, the world’s most advanced autonomous vehicle processor. It delivers up to 254 TOPS to simultaneously run a high number of deep neural networks and applications while achieving systematic safety standards such as ISO 26262 ASIL-D.

By integrating multiple DRIVE Orin SoCs, Adam provides the diversity and redundancy necessary for safe autonomous operation.
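The headline compute figure follows directly from the numbers above — assuming the four Orin SoCs contribute their full rated throughput, a simple back-of-the-envelope check lines up with the "more than 1,000 TOPS" claim:

```python
# Sanity check (assumed configuration): four DRIVE Orin SoCs,
# each delivering up to 254 TOPS, power the NIO Adam supercomputer.
orin_tops = 254
num_socs = 4
total_tops = orin_tops * num_socs
print(total_tops)  # 1016 — consistent with "more than 1,000 TOPS"
```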

On the Horizon

Following the start of ET7 deliveries, NIO is slated to launch a mid-sized performance sedan, the ET5 — also built on the Adam supercomputer — in September.

NIO plans to enter global markets with the ET7 in Germany, Denmark, Sweden and the Netherlands later this year. With a goal of bringing one of the most advanced AI platforms to more customers, NIO intends to have vehicle offerings in 25 countries and regions by 2025.

With the ET7 now entering the market, customers can enjoy a software-defined experience that’s as fast as it is luxurious.

The post Fast and Luxurious: The Intelligent NIO ET7 EV Built on NVIDIA DRIVE Orin Arrives appeared first on NVIDIA Blog.
