April Showers Bring 23 New GeForce NOW Games Including ‘Have a Nice Death’

It’s another rewarding GFN Thursday, with 23 new games for April on top of 11 joining the cloud this week and a new Marvel’s Midnight Suns reward now available first for GeForce NOW Premium members.

Newark RTX 4080 SuperPOD GeForce NOW
There are dozens of us…dozens!

Newark, N.J., is next to complete its upgrade to RTX 4080 SuperPODs, making it the 12th region worldwide to bring new performance to Ultimate members.

GeForce NOW on SHIELD TV is being updated for a more consistent experience across Android and TV devices. Update 6.00 has begun rolling out to SHIELD TV owners this week.

Plus, work is underway to bring the initial batch of Xbox first-party games and features to GeForce NOW.

Teamwork Makes the Dream Work

GeForce NOW and Microsoft

Last month, we announced a partnership with Microsoft to bring Xbox Game Studios PC games to the GeForce NOW library, including titles from Bethesda, Mojang Studios and Activision, pending closure of Microsoft’s acquisition. It’s a shared commitment to giving gamers more choice and enabling PC gamers to play their favorite games anywhere.

Since then, teams at both companies have been collaborating to deliver the best-in-class cloud gaming experience PC gamers have come to expect: a seamless experience across any device, whether playing locally or in the cloud.

We’re making progress, and in future GFN Thursdays we’ll share updates as individual titles from Microsoft’s incredibly rich catalog of first-party PC games are onboarded. Stay tuned to GFN Thursday updates for the latest.

Medieval Marvel

Marvel’s Midnight Suns Reward on GeForce NOW
Fight among the legends in Captain Marvel’s Medieval Marvel suit.

Starting today, Premium GeForce NOW members can claim their marvel-ous new reward. Marvel’s Midnight Suns, the tactical role-playing game from the creators of XCOM, has been praised for its immersive gameplay and cutting-edge visuals with support for DLSS 3 technology on top of RTX-powered ray tracing.

With the game’s first downloadable content, called The Good, The Bad, and The Undead, fans were thrilled to welcome Deadpool to the roster. This week, members can get their free reward to secure Captain Marvel’s Medieval Marvel suit.

Ultimate and Priority members can visit the GeForce NOW Rewards portal today and update the settings to start receiving special offers and in-game goodies. Better hurry, as this reward is available on a first-come, first-served basis only through Saturday, May 6.

April is FOOL of Games

Have a Nice Death on GeForce NOW
Death Inc. opens a new branch in the cloud.

No joke, kick the weekend off right by streaming Have a Nice Death. Restore order in this darkly charming 2D action game from Gearbox while playing as an overworked Death whose employees at Death Inc. have run rampant as caretakers of souls. Hack and slash through numerous minions and bosses in each department at the company, using unique weapons and spells.

It leads the 11 new games joining the cloud this week:

  • 9 Years of Shadows (New release on Steam)
  • Terra Nil (New release on Steam, March 28)
  • Gripper (New release on Steam, March 29)
  • Smalland: Survive the Wilds (New release on Steam, March 29)
  • DREDGE (New release on Steam, March 30)
  • Ravenbound (New release on Steam, March 30)
  • The Great War: Western Front (New release on Steam, March 30)
  • Troublemaker (New release on Steam, March 31)
  • Have a Nice Death (Steam)
  • Tower of Fantasy (Steam)
  • Tunche (Free on Epic Games Store)

Plus, look forward to the rest of April:

  • Meet Your Maker (New release on Steam, April 4)
  • Road 96: Mile 0 (New release on Steam, April 4)
  • TerraScape (New release on Steam, April 5)
  • Curse of the Sea Rats (New release on Steam, April 6)
  • Ravenswatch (New release on Steam, April 6)
  • Supplice (New release on Steam, April 6)
  • DE-EXIT – Eternal Matters (New release on Steam, April 14)
  • Survival: Fountain of Youth (New release on Steam, April 19)
  • Tin Hearts (New release on Steam, April 20)
  • Dead Island 2 (New Release on Epic Games Store, April 21)
  • Afterimage (New release on Steam, April 25)
  • Roots of Pacha (New release on Steam, April 25)
  • Bramble: The Mountain King (New release on Steam, April 27)
  • 11-11 Memories Retold (Steam)
  • canVERSE (Steam)
  • Teardown (Steam)
  • Get Even (Steam)
  • Little Nightmares (Steam)
  • Little Nightmares II (Steam)
  • The Dark Pictures Anthology: Man of Medan (Steam)
  • The Dark Pictures Anthology: Little Hope (Steam)
  • The Dark Pictures Anthology: House of Ashes (Steam)
  • The Dark Pictures Anthology: The Devil in Me (Steam)

More March Madness

On top of the 19 games announced in March, nine extra ones joined the GeForce NOW library this month, including this week’s additions 9 Years of Shadows, Terra Nil, Gripper, Troublemaker, Have a Nice Death and Tunche, among others.

System Shock didn’t make it in March due to a shift in its release date, nor did Chess Ultra due to a technical issue.

With so many titles streaming from the cloud, what game will you play next? Let us know in the comments below, on Twitter or on Facebook.

Read More

Blender Update 3.5 Fuels 3D Content Creation, Powered by NVIDIA GeForce RTX GPUs

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

It’s a celebration, creators!

Blender, the world’s most popular 3D creation suite — free and open source — released its major version 3.5 update. Expected to have a profound impact on 3D creative workflows, this latest release features support for Open Shading Language (OSL) shaders with the NVIDIA OptiX ray-tracing engine.

Plus, 3D artist and filmmaker Pablo Reche Beltrán, aka AuraProds, joins the 50th edition of the In the NVIDIA Studio series this week to share his Jurassic Park-inspired short film. Thank you to the artists who’ve contributed to the series, those who influence and inspire each day and those who will do so in the future.

Finally, enter the Watch ‘n Learn Giveaway hosted by creative community 80LV for a chance to win a powerful GeForce RTX 4080 GPU. Enter by watching an Omniverse for creators GTC session, filling out this form, and tagging #GTC23 and @NVIDIAOmniverse on social media by Thursday, March 30.

Better Renders in Blender 3.5

With support for OSL shaders through NVIDIA OptiX, Blender 3.5 now enables 3D artists to use these shaders in Cycles and render them on an NVIDIA RTX GPU. Previously, such a workflow was limited to CPU rendering.

Blender update 3.5 supports Open Shading Language shaders with NVIDIA OptiX. Image courtesy of Sprite Fright, studio.blender.org.

This saves artists an extraordinary amount of time, as scenes that use OSL can render up to 36x faster than with a CPU alone.

Blender 3.5 also delivers updates to creative workflows in animation and rigging, nodes and physics, modeling and more.

Read the full release notes.

Dino-mite Renders Never Go Extinct

3D artist AuraProds fondly remembers watching the blockbuster movie Jurassic Park in theaters as a kid, wishing to one day recreate a memorable scene in which a giant T-Rex frightens the main characters who are huddled together in a car. Unlike that scary moment, however, the artist’s AuraProds video came together in a rather cute way.

“The concept art was made by my five- and eight-year-old cousins,” said AuraProds. “They drew the dinosaurs and inspired me to turn them into 3D.”

Based in Almería, a small city in southern Spain, AuraProds was perfectly situated to capture video footage in the nearby town of Tabernas, where Hollywood directors often shoot western movies. With the requisite footage captured, AuraProds modeled dinosaurs to populate the scene.

His preferred 3D app is Blender. “No doubt,” said AuraProds. “I really like it because I can bring any idea in my head to 3D with a simple workflow.”

The artist modeled and sculpted each dinosaur by hand, using Blender Cycles RTX-accelerated OptiX ray tracing in the viewport for interactive, photorealistic modeling, thanks to his GeForce RTX 4080 GPU.

Detailed sculpting was done in Blender.

Satisfied with the models, AuraProds experimented with a variety of textures before moving on to rig and animate his models. Motion-blur visual effects were applied quickly with accelerated rendering and NVIDIA NanoVDB for easier rendering of volumes.

Geo nodes can add organic style and customization to Blender scenes and animation.

RTX-accelerated OptiX ray tracing in Blender Cycles allowed AuraProds to quickly export fully rendered files to Blackmagic Design’s DaVinci Resolve software.

Visual effects were applied and rendered in Blender.

Here, AuraProds’ RTX GPU dramatically accelerated his compositing workflows. GPU-accelerated color grading, video editing and color scopes were applied with ease. And decoding with NVDEC, also GPU accelerated, enabled buttery-smooth playback and scrubbing of high-resolution footage.

“NVIDIA RTX GPUs are the only technology that could handle the massive amount of polygons for this project. I know from my own experience how reliable the GPUs are.” — AuraProds

The RTX 4080 GPU sped AuraProds’ use of new AI video editing tools, delivering up to a 2x increase in AI performance over the previous generation. Performance boosts were also applied to existing RTX-accelerated AI features — including automatic tagging of clips and tracking of effects, SpeedWarp for smooth slow motion and Video Super Resolution.

AuraProds then applied several AI features to achieve his desired visual effect. He wrapped up the project by deploying the RTX 4080 GPU’s dual AV1 video encoders — cutting the export time in half.

“I got to bring that aesthetic and memory to this new digital age with today’s AI tools,” said AuraProds.

3D artist AuraProds.

For more of AuraProds’ video content, check out VELOX — a short 3D film shot entirely with a green screen — made by scanning an entire desert and creating several 3D spaceships over two months, available on his YouTube channel.

Experienced and aspiring content creators can discover exclusive step-by-step tutorials from industry-leading artists, inspiring community showcases and more on the NVIDIA Studio YouTube channel, which includes a curated Blender playlist.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Ubisoft’s Yves Jacquier on How Generative AI Will Revolutionize Gaming

Tools like ChatGPT have awakened the world to the potential of generative AI. Now, much more is coming.

On the latest episode of the NVIDIA AI Podcast, Yves Jacquier, executive director of Ubisoft La Forge, shares valuable insights into the transformative potential of generative AI in the gaming industry. With over two decades of experience in technology innovation, science and R&D management across various sectors, Jacquier’s comprehensive expertise makes him a true visionary in the field.

During his conversation with podcast host Noah Kravitz, Jacquier highlighted how generative AI, which enables computers to create unique content such as images, text and music, is already revolutionizing the gaming sector. By designing new levels, characters and items, and generating realistic graphics and soundscapes, this cutting-edge technology offers countless opportunities for more immersive and engaging experiences.

As the driving force behind Ubisoft La Forge, Jacquier plays a crucial role in shaping the company’s academic R&D strategy. Key milestones include establishing a chair in AI deep learning in 2011 and founding Ubisoft La Forge, the first lab in the gaming industry dedicated to applied academic research. This research is being translated into state-of-the-art gaming experiences.

Jacquier expressed confidence that generative AI will play a vital role in sculpting the gaming industry and providing unparalleled gaming experiences for enthusiasts around the world.

Related Articles

Sequoia Capital’s Pat Grady and Sonya Huang on Generative AI

Partners at Sequoia Capital, Pat Grady and Sonya Huang, discuss their thought-provoking essay, “Generative AI: A Creative New World.” The authors explore the potential of generative AI to unlock new realms of creativity and expression, while also addressing the challenges and ethical implications of this technology. They also provide insights into generative AI’s future.

Real or Not Real? Attorney Steven Frank Employs Deep Learning to Authenticate Art

Steven Frank, a partner at law firm Morgan Lewis, specializes in intellectual property and commercial technology law. He is part of a husband-wife duo that utilized convolutional neural networks to authenticate masterpieces, including da Vinci’s Salvator Mundi, with the aid of AI.

GANTheftAuto: Harrison Kinsley on AI-Crafted Gaming Environments

While humans playing games against machines is a familiar concept, computers can now develop games for people to enjoy. Programming aficionado and social media influencer Harrison Kinsley devised GANTheftAuto, an AI-driven neural network that produces a playable segment of the iconic video game Grand Theft Auto V.

Subscribe, Review and Follow NVIDIA AI on Twitter

If you found this episode insightful, subscribe to the NVIDIA AI Podcast on your preferred podcast platform and leave a rating and review. Stay connected with @NVIDIAAI on Twitter.

Read More

GFN Thursday Celebrates 1,500+ Games and Their Journey to GeForce NOW

Gamers love games — as do the people who make them.

GeForce NOW streams over 1,500 games from the cloud, and with the Game Developers Conference in full swing this week, today’s GFN Thursday celebrates all things games: the tech behind them, the tools that bring them to the cloud, the ways to play them and the new ones being added to the library this week.

Developers use a host of NVIDIA resources to deliver the best in PC cloud gaming experiences. CD PROJEKT RED, one of many developers to tap into these resources, recently announced a new update coming to Cyberpunk 2077 on April 11 — including a new technology preview for Ray Tracing: Overdrive Mode that enables full ray tracing on GeForce RTX 40 Series GPUs and RTX 4080 SuperPODs.

GeForce NOW SuperPOD Servers
RTX 4080 performance lights up Sofia, Bulgaria.

In addition, members in and around Sofia, Bulgaria, can now experience the best of GeForce NOW Ultimate cloud gaming. It’s the latest city to roll out RTX 4080 gaming rigs to GeForce NOW servers around the globe.

Plus, with five new games joining the cloud this week, and an upcoming marvel-ous reward, GeForce NOW members can look forward to a busy weekend of streaming goodness.

Developers, Mount Up!

GDC presents the ideal time to spotlight GeForce NOW tools that enable developers to seamlessly bring their games to the cloud. NVIDIA tools, software development kits (SDKs) and partner engines together enable the production of stunning real-time content that uses AI and ray tracing. And bringing these games to billions of non-PC devices is as simple as checking an opt-in box.

GeForce NOW taps into existing game stores, allowing game developers to reap the benefits of a rapidly growing audience without the hassle of developing for another platform. This means zero port work to bring games to the cloud. Users don’t have to buy games for another platform and can play them on many of the devices they already own.

Developers who want to do more have access to the GeForce NOW Developer Platform — an SDK and toolset empowering integration of, interaction with and testing on the NVIDIA cloud gaming service. It allows developers to enhance their games to run more seamlessly, add cloud gaming into their stores and launchers, and let users connect their accounts and libraries to GeForce NOW.

The SDK is a set of APIs, runtimes, samples and documentation that allows games to query for cloud execution and enable virtual touchscreens; launchers to trigger cloud streaming of a specified game; and GeForce NOW and publisher backends to facilitate account linking and game library ownership syncing, already available for Steam and Ubisoft games.

Content developers have a slew of opportunities to bring their virtual worlds and interactive experiences to users in unique ways, powered by the cloud.

Metaverse services company Improbable will use NVIDIA cloud gaming infrastructure for an interactive, live, invite-only experience that will accommodate up to 10,000 guests. Other recent developer events included the DAF Trucks virtual experience, where potential customers took the newest DAF truck for a test drive in a simulated world, with PixelMob’s Euro Truck Simulator 2 providing the virtual playground.

Furthermore, CD PROJEKT RED will be delivering full ray tracing, aka path tracing, to Cyberpunk 2077. Such effects were previously only possible for film and TV. With the power of a GeForce RTX 4080 gaming rig in the cloud, Ultimate members will be able to stream the new technology preview for the Ray Tracing: Overdrive Mode coming to Cyberpunk 2077 across devices — even Macs — no matter the game’s system requirements.

Get Ready for a Marvel-ous Week

Marvel Midnight Suns Reward
GeForce NOW Premium members can get this Marvel-ous reward for free first.

GeForce NOW Ultimate members have been enjoying Marvel’s Midnight Suns’ ultra-smooth, cinematic gameplay thanks to DLSS 3 technology support on top of RTX-powered ray tracing, which together enable graphics breakthroughs.

Now, members can fight among the legends with Captain Marvel’s Medieval Marvel suit in a free reward, which will become available at the end of the month — first to Premium members who are opted into GeForce NOW rewards. This reward is only available until May 6, so upgrade to an Ultimate or Priority membership today and opt into rewards to get first access.

Next, on to the five new games hitting GeForce NOW this week for a happy weekend:

And with that, we’ve got a question to end this GFN Thursday:

Read More

‘Cyberpunk 2077’ Brings Beautiful Path-Traced Visuals to GDC

Game developer CD PROJEKT RED today at the Game Developers Conference in San Francisco unveiled a technology preview for Cyberpunk 2077 with path tracing, coming April 11.

Path tracing, also known as full ray tracing, accurately simulates light throughout an entire scene. It’s used by visual effects artists to create film and TV graphics that are indistinguishable from reality. But until the arrival of GeForce RTX GPUs with RT Cores, and the AI-powered acceleration of NVIDIA DLSS, real-time video game path tracing was impossible because it is extremely GPU intensive.
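The scale of the problem is easy to see with rough arithmetic. The sketch below estimates how many ray segments a path tracer must process per second at 4K and 60 frames per second; all the numbers are illustrative assumptions, not Cyberpunk 2077’s actual settings:

```python
# Why real-time path tracing is so GPU-intensive: a rough ray-count estimate.
# All numbers here are illustrative assumptions, not Cyberpunk 2077's settings.
width, height = 3840, 2160      # 4K frame
samples_per_pixel = 2           # primary rays launched per pixel, per frame
bounces = 3                     # light bounces traced along each path
fps = 60                        # real-time target

rays_per_frame = width * height * samples_per_pixel * bounces
rays_per_second = rays_per_frame * fps
print(f"{rays_per_second / 1e9:.1f} billion ray segments per second")
```

Even with these modest assumptions, the workload runs to billions of ray segments per second, which is why dedicated RT Cores and DLSS are needed to make it real time.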

“This not only gives better visuals to the players but also has the promise to revolutionize the entire pipeline of how games are being created,” said Pawel Kozlowski, a senior developer technology engineer at NVIDIA.

This technology preview, Cyberpunk 2077’s Ray Tracing: Overdrive Mode, is a sneak peek into the future of full ray tracing. With full ray tracing, practically all light sources now cast physically correct soft shadows. Natural colored lighting also bounces multiple times throughout Cyberpunk 2077’s world, creating more realistic indirect lighting and occlusion.

Cyberpunk 2077, previously an early adopter of ray tracing, becomes the latest modern blockbuster title to harness real-time path tracing. Coming shortly after path tracing for Minecraft, Portal and Quake II, it underscores a wave of adoption in motion.

Like with ray tracing, it’s expected many more will follow. And the influence on video games is just the start, as real-time path tracing holds promise for many design industries.

Decades of Research Uncorked

Decades in the making, real-time path tracing is indeed a big leap in gaming graphics.

While long used in computer-generated imagery for movies, path tracing there took place in offline rendering farms, often requiring hours to render a single frame.

In gaming, which requires fast frame rates, rendering needs to happen in about 0.016 seconds.
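That 0.016-second figure is simply the per-frame time budget at 60 frames per second, and it shrinks further at higher refresh rates. A quick calculation makes the contrast with hours-per-frame offline rendering concrete:

```python
# Per-frame rendering budget for common target frame rates.
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render a single frame at a given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 120, 240):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```

At 60 fps the renderer gets about 16.67 ms per frame; at 240 fps, barely 4 ms.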

Since the 1970s, video games have relied on rasterization techniques. More recently, in 2018, NVIDIA introduced RTX GPUs to support ray tracing. Path tracing is the final frontier for the most physically accurate lighting and shadows.

Path tracing has been one of the main lighting algorithms used in offline rendering farms and computer graphics for films for years. It wasn’t until the GeForce RTX 40 Series and DLSS 3 became available that path tracing could be brought to real-time graphics.

Cyberpunk 2077 also taps into Shader Execution Reordering — available for use on the NVIDIA Ada Lovelace architecture generation — which optimizes GPU workloads, enabling more efficient path-traced lighting.

Accelerated by DLSS 3  

DLSS 3 complements groundbreaking advancements in path tracing and harnesses modern AI — built on GPU-accelerated deep learning, a form of neural networking — as a powerful gaming performance multiplier. DLSS allows games to render 1/8th of the pixels, then uses AI and GeForce RTX Tensor Cores to reconstruct the rest, dramatically multiplying frame rates, while delivering crisp, high-quality images that rival native resolution.
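The 1/8th figure can be reconstructed from the two halves of DLSS 3 (assuming Super Resolution’s Performance mode, which renders at half the output width and height, combined with Frame Generation, which produces every other displayed frame entirely with AI):

```python
# Back-of-the-envelope: fraction of displayed pixels rendered conventionally
# under DLSS 3 (assumed: Super Resolution Performance mode + Frame Generation).
output_w, output_h = 3840, 2160   # 4K output resolution
sr_scale = 0.5                    # Performance mode renders half width and height
frame_gen_share = 0.5             # every other displayed frame is AI-generated

rendered = (output_w * sr_scale) * (output_h * sr_scale)   # pixels shaded per rendered frame
displayed = output_w * output_h                            # pixels shown per displayed frame
fraction = (rendered / displayed) * frame_gen_share
print(fraction)  # 0.125, i.e. 1 pixel in 8
```

One in four pixels per rendered frame, times one rendered frame in two displayed, gives the one-in-eight ratio.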

Running on Ada Lovelace advances — launched with GeForce RTX 40 Series GPUs — DLSS 3 multiplies frame rates, maintaining image quality and responsiveness in games.

Powerful Tools Now Available 

For game developers, NVIDIA today at GDC announced the availability of the RTX Path Tracing SDK 1.0. The package of technologies includes DLSS 3, Shader Execution Reordering (SER), RTX Direct Illumination (RTXDI) and NVIDIA Real-Time Denoisers (NRD).

Learn more about full RTX path tracing.

And catch up on all the breakthroughs in generative AI and the metaverse by joining us at GTC this week.

Read More

A Revolution Rendered in Real Time: NVIDIA Accelerates Neural Graphics at GDC

Gamers wanted better graphics. GPUs delivered. Those GPUs became the key to the world-changing AI revolution. Now gamers are reaping the benefits.

At GDC 2023 in San Francisco this week, the gaming industry’s premier developers conference, NVIDIA made a series of announcements, including new games and game development tools that promise to accelerate innovation at the intersection of neural networking and graphics: the neural graphics revolution.

DLSS 3 harnesses modern AI — built on GPU-accelerated deep learning, a form of neural networking — as a powerful gaming performance multiplier. DLSS allows games to render 1/8th of the pixels, then uses AI and GeForce RTX Tensor Cores to reconstruct the rest, dramatically multiplying frame rates, while delivering crisp, high-quality images that rival native resolution.

It’s just one example of how gamers are benefiting from the advancements in AI supercomputing showcased at this week’s NVIDIA GTC technology conference, which is running concurrently with GDC. And game developers are adopting DLSS at a breakneck pace.

DLSS complements groundbreaking advancements in ray tracing — a technology long used by filmmakers — to bring richer, more immersive visual experiences to gamers in real time.

Thanks, in part, to DLSS, real-time ray tracing, considered by many to be impossible in 2017, exploded onto the gaming scene with the debut of NVIDIA RTX in 2018. Ray tracing is now everywhere in games.

At GDC, game developer CD PROJEKT RED announced Cyberpunk 2077 will activate full ray tracing, also called path tracing, with the upcoming technology preview of Ray Tracing: Overdrive Mode.

Cyberpunk 2077, previously an early adopter of ray tracing, becomes the latest modern blockbuster title to harness real-time path tracing, following Minecraft, Portal and Quake II.

New DLSS 3 AAA Games Coming

During GDC, NVIDIA also announced the addition of DLSS 3 support for even more popular AAA games, including Diablo IV, Forza Horizon 5 and Redfall. 

  • Diablo IV, the latest installment of the genre-defining Diablo franchise, will launch on June 6 with DLSS 3, with ray tracing coming post-launch.
  • Forza Horizon 5, named the best open-world racing game of all time, will update to DLSS 3 on March 28. 
  • Redfall, Bethesda’s highly anticipated, open-world, co-op first-person shooter from Arkane Austin is launching on May 2 with DLSS 3, with ray tracing coming post-launch.

DLSS is now available in 270+ games and apps, and DLSS 3 is multiplying performance in 28 released games and has been adopted 7x faster than DLSS 2 in the first six months of their respective launches.

DLSS Frame Generation Now Publicly Available for Developers

NVIDIA announced DLSS Frame Generation is now publicly available for developers to integrate into their games and applications.

The public release of DLSS Frame Generation plug-ins will allow even more developers to adopt the framerate-boosting technology.

DLSS Frame Generation is now available via NVIDIA Streamline, an open-source, cross-vendor framework that simplifies the integration of super-resolution technologies in 3D games and apps.

For all the details, dig into our full coverage on GeForce News.

Unreal Engine 5.2 Integration to Speed Up DLSS 3 Adoption

At GDC, NVIDIA and Epic announced the integration of DLSS 3 into the popular Unreal Engine game engine.

Unreal Engine is an open and advanced real-time 3D creation tool that gives game developers and creators the freedom and control to deliver cutting-edge 3D content, interactive experiences and immersive virtual worlds.

A DLSS 3 plug-in will debut in UE 5.2, making it more straightforward for any developer to accelerate the performance of their games and applications, further accelerating the adoption of DLSS.

Cyberpunk 2077: A Showcase for What’s Next

CD PROJEKT RED showcases a technology preview of path tracing with Cyberpunk 2077.

Path tracing, also known as full ray tracing, allows developers to create cinematic experiences. By simulating the physics of light with ray tracing as part of a neural graphics system, it can achieve photorealism in 3D settings, with more dynamic lighting and shadows.

GeForce gamers will be able to activate full ray tracing with the upcoming technology preview of Ray Tracing: Overdrive Mode on April 11.

These advancements are anchored in NVIDIA RTX technologies. To bring these incredible effects to life, CD PROJEKT RED and NVIDIA have worked hand in hand to integrate NVIDIA DLSS 3 and introduce new optimizations for this entirely new, fully ray-traced pipeline.

NVIDIA Shader Execution Reordering helps GPUs execute incoherent workloads more efficiently, boosting performance; NVIDIA Real-Time Denoisers have been used to improve performance and image quality.

As a result, with full ray tracing, now practically all light sources cast physically correct soft shadows. Natural colored lighting also bounces multiple times throughout Cyberpunk 2077’s world, creating more realistic indirect lighting and occlusion.

More to Come

Cyberpunk 2077 is a case study of how GPUs have unlocked the AI revolution and will bring great experiences to PC gamers for years to come.

An expanding game roster, game engine support and continued improvements in performance and image quality are securing NVIDIA DLSS as a landmark technology for the neural graphics revolution in PC gaming.

Read More

AI Opener: OpenAI’s Sutskever in Conversation With Jensen Huang

Like old friends catching up over coffee, two industry icons reflected on how modern AI got its start, where it’s at today and where it needs to go next.

Jensen Huang, founder and CEO of NVIDIA, interviewed AI pioneer Ilya Sutskever in a fireside chat at GTC. The talk was recorded a day after the launch of GPT-4, the most powerful AI model to date from OpenAI, the research company Sutskever co-founded.

They talked at length about GPT-4 and its forerunners, including ChatGPT. That generative AI model, though only a few months old, is already among the fastest-growing computer applications in history.

Their conversation touched on the capabilities, limits and inner workings of the deep neural networks that are capturing the imaginations of hundreds of millions of users.

Compared to ChatGPT, GPT-4 marks a “pretty substantial improvement across many dimensions,” said Sutskever, noting the new model can read images as well as text.

“In some future version, [users] might get a diagram back” in response to a query, he said.

Under the Hood With GPT

“There’s a misunderstanding that ChatGPT is one large language model, but there’s a system around it,” said Huang.

In a sign of that complexity, Sutskever said OpenAI uses two levels of training.

The first stage focuses on accurately predicting the next word in a series. Here, “what the neural net learns is some representation of the process that produced the text, and that’s a projection of the world,” he said.

The second “is where we communicate to the neural network what we want, including guardrails … so it becomes more reliable and precise,” he added.
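The first stage, next-word prediction, can be illustrated with a deliberately tiny count-based model. This is a toy sketch: GPT models learn these statistics with transformer networks trained on internet-scale text, not lookup tables over a ten-word corpus.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a corpus,
# then predict the most frequent successor. (Stage one of GPT training
# does this with a transformer network, not a count table.)
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word: str) -> str:
    """Most frequent word observed after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (follows "the" twice; "mat"/"fish" once each)
```

The second stage Sutskever describes then fine-tunes such a predictor toward what users actually want, which no amount of extra counting can do on its own.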

Present at the Creation

While he’s at the swirling center of modern AI today, Sutskever was also present at its creation.

In 2012, he was among the first to show the power of deep neural networks trained on massive datasets. In an academic contest, the AlexNet model he demonstrated with AI pioneers Geoff Hinton and Alex Krizhevsky recognized images faster than a human could.

Huang referred to their work as the Big Bang of AI.

The results “broke the record by such a large margin, it was clear there was a discontinuity here,” Huang said.

The Power of Parallel Processing

Part of that breakthrough came from the parallel processing the team applied to its model with GPUs.

“The ImageNet dataset and a convolutional neural network were a great fit for GPUs that made it unbelievably fast to train something unprecedented,” Sutskever said.
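Part of what made that fit so good is that every output element of a convolution can be computed independently, exactly the pattern GPUs parallelize. A minimal, framework-free NumPy sketch of the operation, with Python loops standing in for the parallel threads a GPU would launch:

```python
import numpy as np

# Minimal 2D convolution (cross-correlation, as in most deep learning
# frameworks). Each output element depends only on a small window of the
# input, so all of them can be computed in parallel on a GPU.
def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):          # on a GPU, these iterations run concurrently
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[1.0, -1.0]])        # simple horizontal edge detector
image = np.array([[0.0, 0.0, 1.0, 1.0]])     # one row with a step edge
print(conv2d(image, edge_kernel))            # [[ 0. -1.  0.]]
```

A convolutional network like AlexNet stacks millions of such independent multiply-accumulate windows, which is why moving them onto GPUs made training "unbelievably fast."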

Another image from the fireside chat between Ilya Sutskever of OpenAI and Jensen Huang.

That early work ran on a few GeForce GTX 580 GPUs in a University of Toronto lab. Today, tens of thousands of the latest NVIDIA A100 and H100 Tensor Core GPUs in the Microsoft Azure cloud service handle training and inference on models like ChatGPT.

“In the 10 years we’ve known each other, the models you’ve trained [have grown by] about a million times,” Huang said. “No one in computer science would have believed the computation done in that time would be a million times larger.”

“I had a very strong belief that bigger is better, and a goal at OpenAI was to scale,” said Sutskever.

A Billion Words

Along the way, the two shared a laugh.

“Humans hear a billion words in a lifetime,” Sutskever said.

“Does that include the words in my own head?” Huang shot back.

“Make it 2 billion,” Sutskever deadpanned.

The Future of AI

They ended their nearly hour-long talk discussing the outlook for AI.

Asked if GPT-4 has reasoning capabilities, Sutskever suggested the term is hard to define and the capability may still be on the horizon.

“We’ll keep seeing systems that astound us with what they can do,” he said. “The frontier is in reliability, getting to a point where we can trust what it can do, and that if it doesn’t know something, it says so,” he added.

“Your body of work is incredible … truly remarkable,” said Huang in closing the session. “This has been one of the best beyond Ph.D. descriptions of the state of the art of large language models,” he said.

To get all the news from GTC, watch the keynote below.

Read More

It Takes a Village: 100+ NVIDIA MLOps and AI Platform Partners Help Enterprises Move AI Into Production


Building AI applications is hard. Putting them to use across a business can be even harder.

Less than one-third of enterprises that have begun adopting AI actually have it in production, according to a recent IDC survey.

Businesses often realize the full complexity of operationalizing AI just prior to launching an application. Problems discovered so late can seem insurmountable, so the deployment effort is often stalled and forgotten.

To help enterprises get AI deployments across the finish line, more than 100 machine learning operations (MLOps) software providers are working with NVIDIA. These MLOps pioneers provide a broad array of solutions to support businesses in optimizing their AI workflows for both existing operational pipelines and ones built from scratch.

Many NVIDIA MLOps and AI platform ecosystem partners as well as DGX-Ready Software partners, including Canonical, ClearML, Dataiku, Domino Data Lab, Run:ai and Weights & Biases, are building solutions that integrate with NVIDIA-accelerated infrastructure and software to meet the needs of enterprises operationalizing AI.

NVIDIA cloud service provider partners Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud, as well as other partners around the globe such as Alibaba Cloud, also provide MLOps solutions to streamline AI deployments.

NVIDIA’s leading MLOps software partners are verified and certified for use with the NVIDIA AI Enterprise software suite, which provides an end-to-end platform for creating and accelerating production AI. Paired with NVIDIA AI Enterprise, the tools from NVIDIA’s MLOps partners help businesses develop and deploy AI successfully.

Enterprises can get AI up and running with help from these and other NVIDIA MLOps and AI platform partners:

  • Canonical: Aims to accelerate at-scale AI deployments while making open source accessible for AI development. Canonical announced that Charmed Kubeflow is now certified as part of the DGX-Ready Software program, on both single-node and multi-node deployments of NVIDIA DGX systems. Designed to automate machine learning workflows, Charmed Kubeflow creates a reliable application layer where models can be moved to production.
  • ClearML: Delivers a unified, open-source platform for continuous machine learning — from experiment management and orchestration to increased performance and ML production — trusted by teams at 1,300 enterprises worldwide. With ClearML, enterprises can orchestrate and schedule jobs on personalized compute fabric. Whether on premises or in the cloud, businesses can enjoy enhanced visibility over infrastructure usage while reducing compute, hardware and resource spend to optimize cost and performance. Now certified to run NVIDIA AI Enterprise, ClearML’s MLOps platform is more efficient across workflows, enabling greater optimization for GPU power.
  • Dataiku: As the platform for Everyday AI, Dataiku enables data and domain experts to work together to build AI into their daily operations. Dataiku is now certified as part of the NVIDIA DGX-Ready Software program, which allows enterprises to confidently use Dataiku’s MLOps capabilities along with NVIDIA DGX AI supercomputers.
  • Domino Data Lab: Offers a single pane of glass that enables the world’s most sophisticated companies to run data science and machine learning workloads in any compute cluster — in any cloud or on premises in all regions. Domino Cloud, a new fully managed MLOps platform-as-a-service, is now available for fast and easy data science at scale. Certified to run on NVIDIA AI Enterprise last year, Domino Data Lab’s platform mitigates deployment risks and ensures reliable, high-performance integration with NVIDIA AI.
  • Run:ai: Functions as a foundational layer within enterprises’ MLOps and AI Infrastructure stacks through its AI computing platform, Atlas. The platform’s automated resource management capabilities allow organizations to properly align resources across different MLOps platforms and tools running on top of Run:ai Atlas. Certified to offer NVIDIA AI Enterprise, Run:ai is also fully integrating NVIDIA Triton Inference Server, maximizing the utilization and value of GPUs in AI-powered environments.
  • Weights & Biases (W&B): Helps machine learning teams build better models, faster. With just a few lines of code, practitioners can instantly debug, compare and reproduce their models — all while collaborating with their teammates. W&B is trusted by more than 500,000 machine learning practitioners from leading companies and research organizations around the world. Now validated to offer NVIDIA AI Enterprise, W&B looks to accelerate deep learning workloads across computer vision, natural language processing and generative AI.
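The platforms above differ in scope, but most share a core experiment-tracking pattern: record a run's configuration and per-step metrics so results stay comparable and reproducible. A stripped-down, framework-free sketch of that pattern (the `Run` class here is hypothetical, not any partner's actual API):

```python
import json
import time

class Run:
    """Minimal stand-in for an experiment tracker: records config and
    metrics for one training run, then serializes the record for storage."""
    def __init__(self, project, config):
        self.record = {
            "project": project,
            "config": config,
            "started": time.time(),
            "metrics": [],
        }

    def log(self, step, **metrics):
        # One row per logged step, analogous to a W&B log call
        # or a ClearML scalar report.
        self.record["metrics"].append({"step": step, **metrics})

    def finish(self):
        return json.dumps(self.record)

run = Run("demo", {"lr": 1e-3, "epochs": 2})
for step in range(2):
    run.log(step, loss=1.0 / (step + 1))
summary = run.finish()
print(summary)
```

Real MLOps platforms layer orchestration, artifact storage and dashboards on top of this record, but the run-as-structured-document idea is the common denominator.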

NVIDIA cloud service provider partners have integrated MLOps into their platforms that provide NVIDIA accelerated computing and software for data processing, wrangling, training and inference:

  • Amazon Web Services: Amazon SageMaker for MLOps helps developers automate and standardize processes throughout the machine learning lifecycle, using NVIDIA accelerated computing, so teams can train, test, troubleshoot, deploy and govern ML models more productively.
  • Google Cloud: Vertex AI is a fully managed ML platform that helps fast-track ML deployments by bringing together a broad set of purpose-built capabilities. Vertex AI’s end-to-end MLOps capabilities make it easier to train, orchestrate, deploy and manage ML at scale, using NVIDIA GPUs optimized for a wide variety of AI workloads. Vertex AI also supports leading-edge solutions such as the NVIDIA Merlin framework, which maximizes performance and simplifies model deployment at scale. Google Cloud and NVIDIA collaborated to add Triton Inference Server as a backend on Vertex AI Prediction, Google Cloud’s fully managed model-serving platform.
  • Azure: The Azure Machine Learning cloud platform is accelerated by NVIDIA and unifies ML model development and operations (DevOps). It applies DevOps principles and practices — like continuous integration, delivery and deployment — to the machine learning process, with the goal of speeding experimentation, development and deployment of Azure machine learning models into production. It provides quality assurance through built-in responsible AI tools to help ML professionals develop fair, explainable and responsible models.
  • Oracle Cloud: Oracle Cloud Infrastructure (OCI) AI Services is a collection of services with prebuilt machine learning models that make it easier for developers to apply NVIDIA-accelerated AI to applications and business operations. Teams within an organization can reuse the models, datasets and data labels across services. OCI AI Services makes it possible for developers to easily add machine learning to apps without slowing down application development.
  • Alibaba Cloud: Alibaba Cloud Machine Learning Platform for AI provides an all-in-one machine learning service that requires little technical expertise from users while delivering high-performance results. Accelerated by NVIDIA, the Alibaba Cloud platform enables enterprises to quickly establish and deploy machine learning experiments to achieve business objectives.

Learn more about NVIDIA MLOps partners and their work at NVIDIA GTC, a global conference for the era of AI and the metaverse, running online through Thursday, March 23.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote in replay:

 

Read More

NVIDIA to Bring AI to Every Industry, CEO Says


ChatGPT is just the start.

With computing now advancing at what he called “lightspeed,” NVIDIA founder and CEO Jensen Huang today announced a broad set of partnerships with Google, Microsoft, Oracle and a range of leading businesses that bring new AI, simulation and collaboration capabilities to every industry.

“The warp drive engine is accelerated computing, and the energy source is AI,” Huang said in his keynote at the company’s GTC conference. “The impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models.”

In a sweeping 78-minute presentation anchoring the four-day event, Huang outlined how NVIDIA and its partners are offering everything from training to deployment for cutting-edge AI services. He announced new semiconductors and software libraries to enable fresh breakthroughs. And Huang revealed a complete set of systems and services for startups and enterprises racing to put these innovations to work on a global scale.

Huang punctuated his talk with vivid examples of this ecosystem at work. He announced NVIDIA and Microsoft will connect hundreds of millions of Microsoft 365 and Azure users to a platform for building and operating hyperrealistic virtual worlds. He offered a peek at how Amazon is using sophisticated simulation capabilities to train new autonomous warehouse robots. He touched on the rise of a new generation of wildly popular generative AI services such as ChatGPT.

And underscoring the foundational nature of NVIDIA’s innovations, Huang detailed how, together with ASML, TSMC and Synopsys, NVIDIA computational lithography breakthroughs will help make a new generation of efficient, powerful 2-nm semiconductors possible.

The arrival of accelerated computing and AI comes just in time, with Moore’s Law slowing and industries tackling powerful dynamics — sustainability, generative AI and digitalization, Huang said. “Industrial companies are racing to digitalize and reinvent into software-driven tech companies — to be the disruptor and not the disrupted,” he said.

Acceleration lets companies meet these challenges. “Acceleration is the best way to reclaim power and achieve sustainability and Net Zero,” Huang said.

GTC: The Premier AI Conference

GTC, now in its 14th year, has become one of the world’s most important AI gatherings. This week’s conference features 650 talks from leaders such as Demis Hassabis of DeepMind, Valerie Taylor of Argonne National Laboratory, Scott Belsky of Adobe, Paul Debevec of Netflix, Thomas Schulthess of ETH Zurich and a special fireside chat between Huang and Ilya Sutskever, co-founder of OpenAI, the creator of ChatGPT.

More than 250,000 registered attendees will dig into sessions on everything from restoring the lost Roman mosaics of 2,000 years ago to building the factories of the future, from exploring the universe with a new generation of massive telescopes to rearranging molecules to accelerate drug discovery, to more than 70 talks on generative AI.

The iPhone Moment of AI

NVIDIA’s technologies are fundamental to AI, with Huang recounting how NVIDIA was there at the very beginning of the generative AI revolution. Back in 2016 he hand-delivered to OpenAI the first NVIDIA DGX AI supercomputer — the engine behind the large language model breakthrough powering ChatGPT.

Launched late last year, ChatGPT went mainstream almost instantaneously, attracting over 100 million users, making it the fastest-growing application in history. “We are at the iPhone moment of AI,” Huang said.

NVIDIA DGX supercomputers, originally used as an AI research instrument, are now running 24/7 at businesses across the world to refine data and process AI, Huang reported. Half of all Fortune 100 companies have installed DGX AI supercomputers.

“DGX supercomputers are modern AI factories,” Huang said.

NVIDIA H100, Grace Hopper, Grace, for Data Centers

Deploying LLMs like ChatGPT is a significant new inference workload, Huang said. For large-language-model inference, he announced a new GPU — the H100 NVL with dual-GPU NVLink.

Based on NVIDIA’s Hopper architecture, H100 features a Transformer Engine designed to process models such as the GPT model that powers ChatGPT. Compared to HGX A100 for GPT-3 processing, a standard server with four pairs of H100 with dual-GPU NVLink is up to 10x faster.

“H100 can reduce large language model processing costs by an order of magnitude,” Huang said.

Meanwhile, over the past decade, cloud computing has grown 20% annually into a $1 trillion industry, Huang said. NVIDIA designed the Grace CPU for an AI- and cloud-first world, where AI workloads are GPU accelerated. Grace is sampling now, Huang said.

NVIDIA’s new superchip, Grace Hopper, connects the Grace CPU and Hopper GPU over a high-speed 900GB/sec coherent chip-to-chip interface. Grace Hopper is ideal for processing giant datasets like AI databases for recommender systems and large language models, Huang explained.

“Customers want to build AI databases several orders of magnitude larger,” Huang said. “Grace Hopper is the ideal engine.”

DGX: The Blueprint for AI Infrastructure

The latest version of DGX features eight NVIDIA H100 GPUs linked together to work as one giant GPU. “NVIDIA DGX H100 is the blueprint for customers building AI infrastructure worldwide,” Huang said, sharing that NVIDIA DGX H100 is now in full production.

H100 AI supercomputers are already coming online.

Oracle Cloud Infrastructure announced the limited availability of new OCI Compute bare-metal GPU instances featuring H100 GPUs.

Additionally, Amazon Web Services announced its forthcoming EC2 UltraClusters of P5 instances, which can scale in size up to 20,000 interconnected H100 GPUs.

This follows Microsoft Azure’s private preview announcement last week for its H100 virtual machine, ND H100 v5.

Meta has now deployed its H100-powered Grand Teton AI supercomputer internally for its AI production and research teams.

And OpenAI will be using H100s on its Azure supercomputer to power its continuing AI research.

Other partners making H100 available include Cirrascale and CoreWeave, both of which announced general availability today. Additionally, Google Cloud, Lambda, Paperspace and Vultr are planning to offer H100.

And servers and systems featuring NVIDIA H100 GPUs are available from leading server makers including Atos, Cisco, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro.

DGX Cloud: Bringing AI to Every Company, Instantly

And to speed DGX capabilities to startups and enterprises racing to build new products and develop AI strategies, Huang announced NVIDIA DGX Cloud, a service delivered through partnerships with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure to bring NVIDIA DGX AI supercomputers “to every company, from a browser.”

DGX Cloud is optimized to run NVIDIA AI Enterprise, the world’s leading acceleration software suite for end-to-end development and deployment of AI. “DGX Cloud offers customers the best of NVIDIA AI and the best of the world’s leading cloud service providers,” Huang said.

NVIDIA is partnering with leading cloud service providers to host DGX Cloud infrastructure, starting with Oracle Cloud Infrastructure. Microsoft Azure is expected to begin hosting DGX Cloud next quarter, and the service will soon expand to Google Cloud and more.

This partnership brings NVIDIA’s ecosystem to cloud service providers while amplifying NVIDIA’s scale and reach, Huang said. Enterprises will be able to rent DGX Cloud clusters on a monthly basis, ensuring they can quickly and easily scale the development of large, multi-node training workloads.

Supercharging Generative AI

To accelerate the work of those seeking to harness generative AI, Huang announced NVIDIA AI Foundations, a family of cloud services for customers needing to build, refine and operate custom LLMs and generative AI trained with their proprietary data and for domain-specific tasks.

AI Foundations services include NVIDIA NeMo for building custom language text-to-text generative models; Picasso, a visual language model-making service for customers who want to build custom models trained with licensed or proprietary content; and BioNeMo, to help researchers in the $2 trillion drug discovery industry.

Adobe is partnering with NVIDIA to build a set of next-generation AI capabilities for the future of creativity.

Getty Images is collaborating with NVIDIA to train responsible generative text-to-image and text-to-video foundation models.

Shutterstock is working with NVIDIA to train a generative text-to-3D foundation model to simplify the creation of detailed 3D assets.

Accelerating Medical Advances

And NVIDIA announced Amgen is accelerating drug discovery services with BioNeMo. In addition, Alchemab Therapeutics, AstraZeneca, Evozyne, Innophore and Insilico are all early-access users of BioNeMo.

BioNeMo helps researchers create, fine-tune and serve custom models with their proprietary data, Huang explained.

Huang also announced that NVIDIA and Medtronic, the world’s largest healthcare technology provider, are partnering to build an AI platform for software-defined medical devices. The partnership will create a common platform for Medtronic systems, ranging from surgical navigation to robotic-assisted surgery.

And today Medtronic announced that its GI Genius system, with AI for early detection of colon cancer, is built on NVIDIA Holoscan, a software library for real-time sensor processing systems, and will ship around the end of this year.

“The world’s $250 billion medical instruments market is being transformed,” Huang said.

Speeding Deployment of Generative AI Applications

To help companies deploy rapidly emerging generative AI models, Huang announced inference platforms for AI video, image generation, LLM deployment and recommender inference. They combine NVIDIA’s full stack of inference software with the latest NVIDIA Ada, Hopper and Grace Hopper processors — including the NVIDIA L4 Tensor Core GPU and the NVIDIA H100 NVL GPU, both launched today.

• NVIDIA L4 for AI Video can deliver 120x more AI-powered video performance than CPUs, combined with 99% better energy efficiency.

• NVIDIA L40 for Image Generation is optimized for graphics and AI-enabled 2D, video and 3D image generation.

• NVIDIA H100 NVL for Large Language Model Deployment is ideal for deploying massive LLMs like ChatGPT at scale.

• And NVIDIA Grace Hopper for Recommendation Models is ideal for graph recommendation models, vector databases and graph neural networks.

Google Cloud is the first cloud service provider to offer L4 to customers with the launch of its new G2 virtual machines, available in private preview today. Google is also integrating L4 into its Vertex AI model store.

Microsoft, NVIDIA to Bring Omniverse to ‘Hundreds of Millions’

Unveiling a second cloud service to speed unprecedented simulation and collaboration capabilities to enterprises, Huang announced NVIDIA is partnering with Microsoft to bring NVIDIA Omniverse Cloud, a fully managed cloud service, to the world’s industries.

“Microsoft and NVIDIA are bringing Omniverse to hundreds of millions of Microsoft 365 and Azure users,” Huang said, also unveiling new NVIDIA OVX servers and a new generation of workstations powered by NVIDIA RTX Ada Generation GPUs and Intel’s newest CPUs optimized for NVIDIA Omniverse.

To show the extraordinary capabilities of Omniverse, NVIDIA’s open platform built for 3D design collaboration and digital twin simulation, Huang shared a video showing how NVIDIA Isaac Sim, NVIDIA’s robotics simulation and synthetic data generation platform, built on Omniverse, is helping Amazon save time and money with full-fidelity digital twins.

It shows how Amazon is working to choreograph the movements of Proteus, Amazon’s first fully autonomous warehouse robot, as it moves bins of products from one place to another in Amazon’s cavernous warehouses alongside humans and other robots.

Digitizing the $3 Trillion Auto Industry

Illustrating the scale of Omniverse’s reach and capabilities, Huang dug into Omniverse’s role in digitalizing the $3 trillion auto industry. By 2030, auto manufacturers will build 300 factories to make 200 million electric vehicles, Huang said, and battery makers are building 100 more megafactories. “Digitalization will enhance the industry’s efficiency, productivity and speed,” Huang said.

Touching on Omniverse’s adoption across the industry, Huang said Lotus is using Omniverse to virtually assemble welding stations. Mercedes-Benz uses Omniverse to build, optimize and plan assembly lines for new models. Rimac and Lucid Motors use Omniverse to build digital stores from actual design data that faithfully represent their cars.

Working with Idealworks, BMW uses Isaac Sim in Omniverse to generate synthetic data and scenarios to train factory robots. And BMW is using Omniverse to plan operations across factories worldwide and is building a new electric-vehicle factory, completely in Omniverse, two years before the plant opens, Huang said.

Separately, NVIDIA today announced that BYD, the world’s leading manufacturer of new energy vehicles (NEVs), will extend its use of the NVIDIA DRIVE Orin centralized compute platform in a broader range of its NEVs.

Accelerating Semiconductor Breakthroughs

Enabling semiconductor leaders such as ASML, TSMC and Synopsys to accelerate the design and manufacture of a new generation of chips as current production processes near the limits of what physics makes possible, Huang announced NVIDIA cuLitho, a breakthrough that brings accelerated computing to the field of computational lithography.

The new NVIDIA cuLitho software library for computational lithography is being integrated by TSMC, the world’s leading foundry, as well as electronic design automation leader Synopsys into their software, manufacturing processes and systems for the latest-generation NVIDIA Hopper architecture GPUs.

Chip-making equipment provider ASML is working closely with NVIDIA on GPUs and cuLitho, and plans to integrate support for GPUs into all of its computational lithography software products. With lithography at the limits of physics, NVIDIA’s introduction of cuLitho enables the industry to go to 2nm and beyond, Huang said.

“The chip industry is the foundation of nearly every industry,” Huang said.

Accelerating the World’s Largest Companies

Companies around the world are on board with Huang’s vision.

Telecom giant AT&T uses NVIDIA AI to more efficiently process data and is testing Omniverse ACE and the Tokkio AI avatar workflow to build, customize and deploy virtual assistants for customer service and its employee help desk.

American Express, the U.S. Postal Service, Microsoft Office and Teams, and Amazon are among the 40,000 customers using the high-performance NVIDIA TensorRT inference optimizer and runtime, and NVIDIA Triton, a multi-framework data center inference serving software.

Uber uses Triton to serve hundreds of thousands of ETA predictions per second.

And with over 60 million daily users, Roblox uses Triton to serve models for game recommendations, avatar creation, and content and marketplace-ad moderation.
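High-throughput serving setups like Uber's and Roblox's typically reach Triton through its standard KServe v2 HTTP/gRPC inference protocol. A sketch of building a v2 request body in Python — the model and input names and the feature values are hypothetical, and the request is constructed but not sent here:

```python
import json

def build_infer_request(input_name, data, datatype="FP32"):
    """Build a KServe v2 inference request body, the JSON format that
    Triton's HTTP endpoint (/v2/models/<model>/infer) accepts."""
    return {
        "inputs": [{
            "name": input_name,
            "shape": [1, len(data)],   # one request, len(data) features
            "datatype": datatype,
            "data": data,
        }]
    }

# Hypothetical ETA-style feature vector; in production this body would
# be POSTed to http://<triton-host>:8000/v2/models/<model>/infer.
body = build_infer_request("features", [3.1, 0.5, 12.0])
print(json.dumps(body))
```

Throughput at the scale described above comes less from the request format than from Triton's server-side dynamic batching, which coalesces many such single-row requests into one GPU pass.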

Microsoft, Tencent and Baidu are all adopting NVIDIA CV-CUDA for AI computer vision. The technology, in open beta, optimizes pre- and post-processing, delivering 4x savings in cost and energy.

Helping Do the Impossible

Wrapping up his talk, Huang thanked NVIDIA’s systems, cloud and software partners, as well as researchers, scientists and employees.

NVIDIA has updated 100 acceleration libraries, including cuQuantum and the newly open-sourced CUDA Quantum for quantum computing, cuOpt for combinatorial optimization, and cuLitho for computational lithography, Huang announced.

The global NVIDIA ecosystem, Huang reported, now spans 4 million developers, 40,000 companies and 14,000 startups in NVIDIA Inception.

“Together,” Huang said, “we are helping the world do the impossible.”

Read More

Fresh-Faced AI: NVIDIA Avatar Solutions Enhance Customer Service and Virtual Assistants


Companies across industries are looking to use interactive avatars to enhance digital experiences. But creating them is a complex, time-consuming process requiring state-of-the-art AI models that can see, hear, understand and communicate with end users.

To ease this process, NVIDIA is providing creators and developers with real-time AI solutions through Omniverse Avatar Cloud Engine (ACE), a suite of cloud-native microservices for end-to-end development of interactive avatars. In collaboration with early-access partners, NVIDIA is delivering improvements that will provide users with the tools they need to easily design and deploy various kinds of avatars, from interactive chatbots to intelligent digital humans.

AT&T and Quantiphi are among the first to experience how Omniverse ACE can help increase employee productivity and enhance customer service experiences.

Omniverse ACE users can now seamlessly integrate NVIDIA AI into their applications, including Riva for speech AI, NeMo service for natural language understanding, and Omniverse Audio2Face or Live Portrait for AI-powered 2D and 3D character animation.

With the latest improvements to Omniverse ACE, teams can also deploy advanced avatars across web conferencing and customer service use cases by integrating domain-specific NVIDIA AI workflows like Tokkio and Maxine.

Early Partners and Customers Develop AI-Driven Digital Humans

AT&T is planning to use Omniverse ACE and the Tokkio AI avatar workflow to build, customize and deploy virtual assistants for customer service and its employee help desk. Working with Quantiphi, one of NVIDIA’s service delivery partners, AT&T is developing interactive avatars that can provide 24/7 support in local languages across regions. This is helping the company reduce costs while providing a better experience for its employees worldwide.

In addition to customer service, AT&T is planning to build and develop digital humans for various use cases across the company.

“Quantiphi and NVIDIA have been collaborating to make customer experience more immersive by combining the power of large language models, graphics and recommender systems,” said Siddharth Kotwal, global head of NVIDIA Practice at Quantiphi. “NVIDIA’s Tokkio framework has made it easier to build, deploy and personalize AI-powered digital assistants or avatars for our enterprise customers. The process of seamlessly integrating automatic speech recognition, conversational agents and information retrieval systems with real-time animation has been simplified.”

Leading professional-services company Deloitte is also working with NVIDIA to help enterprises deploy transformative applications. Deloitte’s latest hybrid-cloud offerings — which consist of NVIDIA AI and Omniverse services and platforms, including Omniverse ACE — will be added to the Deloitte Center for AI Computing.

An Advanced, Streamlined Solution for Deploying Avatars

Omniverse ACE provides all the necessary tools so users can streamline the development process for realistic, intelligent avatars. Teams can also customize pre-built AI avatar workflows to suit their needs with applications like NVIDIA Tokkio. Additionally, Omniverse ACE is bringing new improvements to existing microservices.

Learn more about NVIDIA Omniverse ACE and register to join the early-access program, available now for developers.

Dive into the art of AI avatars at GTC, a global conference for the era of AI and the metaverse. Join sessions with NVIDIA and industry experts, and watch the GTC keynote below:

Read More