NVIDIA Studio Creators Take Collaboration to Bone-Chilling New Heights

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

NVIDIA Omniverse, a pillar of the NVIDIA Studio suite of tools and apps built for creators, helps interconnect 3D artists’ workflows by replacing linear pipelines with live-sync creation.

This week’s In the NVIDIA Studio artists, 3D specialists Gianluca Squillace and Pasquale Scionti, benefited from just that — both in their individual work and in collaborating to construct the final scene for their project, Cold Inside Diorama.

 

The artists set out to build a Viking scene that conveyed a cold, glacial mood — uniting Squillace’s character with Scionti’s environment while maintaining their individual styles. Such a workflow, which used to require endless exports and imports, was made simple with Omniverse and Universal Scene Description (USD), the open and extensible ecosystem for describing, composing, simulating and collaborating within 3D worlds.
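In USD terms, that live-sync collaboration comes down to composition: each artist keeps working in his own layer, and a shared stage simply references both. A minimal sketch in USD’s human-readable .usda text format (the file names here are hypothetical) might look like this:

```usda
#usda 1.0
(
    defaultPrim = "ColdInsideDiorama"
)

def Xform "ColdInsideDiorama"
{
    # Scionti's environment lives in its own layer...
    def "Environment" (
        references = @./scionti_environment.usd@
    )
    {
    }

    # ...as does Squillace's character, synced from Maya and
    # Substance 3D Painter via Omniverse Connectors.
    def "VikingCharacter" (
        references = @./squillace_character.usd@
    )
    {
    }
}
```

Because the stage only references the two layers, a change either artist saves to his own file shows up in the composed scene without an export/import round trip.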

It Takes Character

Powered by a GeForce RTX 4080 GPU, Squillace started the character with extensive reference research, gathering realistic and stylized images, concept art and photos of characters and weapons to define the details.

After defining the character, Squillace built a quick blockout in ZBrush before sculpting all the details to reach the definitive high-poly model. The character’s hairstyle and other aspects went through several tests in this step before arriving at the final version.

Squillace then moved on to the retopology and UV phase in Autodesk Maya, optimizing the models for movement and deformation. He said Maya allows excellent control of the polygons with its Quad Draw tool and simple management of the UVs, including mirroring UV shells to save texture space and improve final quality.

High-poly models take time to develop.

The artists brought the completed high- and low-poly models into Adobe Substance 3D Painter for the bake and texturing phase. Squillace had already defined the main colors with Polypaint in ZBrush, so he used the materials available in Painter to stylize the different metals, leathers and woods and achieve the final textures.

 

After completing a quick rig and an idle animation in Maya (the animation that plays when a game character stands still), the artist took advantage of the USD file format to pull the animation instantly into NVIDIA Omniverse USD Composer (formerly known as Create) and see the results in real time.

“With the connectors plug-in installed in Substance 3D Painter and Maya, I could edit textures and animation in real time and immediately see the results in Omniverse USD Composer, without exporting extra files or having to close and reopen the software,” Squillace said. “It was really a great improvement in terms of workflow and speed.”

He next felt the benefit of working with USD when seamlessly importing files into Marmoset Toolbag for character-only renders.

 

Squillace took full advantage of his RTX GPU in nearly every step of production. “I was really impressed by the fast calculation of ray-traced lighting in the Marmoset Toolbag scene, which lets me make a lot of real-time renders and videos with an outstanding final result,” he said.

A Hostile Environment Built in a Friendly One

Working in parallel, Scionti built the cold, harsh environment using Omniverse’s collaboration capabilities and USD files. This combination enabled him to integrate his files with Squillace’s — and see edits in real time.

Scionti said he always starts his work by establishing its mood. With the cold, glacial tone set for this piece, he modeled some of the scene elements using Autodesk 3ds Max before importing them into Adobe Substance 3D Painter, where he created unique materials.

For some additional materials, he used Quixel software, and he completed the composition and design using Quixel Megascans. As with all his work, Scionti intentionally left room for interpretation in this piece, letting the audience imagine their own story.

He then finalized the scene’s composition, mood and lighting. Accelerated by his GeForce RTX 3090 GPU, Unreal Engine 5.1 and Lumen with hardware ray tracing helped Scionti achieve a higher level of realism with intricate details. Nanite’s improved virtualized geometry made it practical to use high-polygon models for close-up details. In the lighting phase, the artist used sun and sky with volumetric fog and high dynamic range.

 

“My GPU gives me so many realistic details in real time,” Scionti said.

The duo then brought Squillace’s work into Scionti’s Unreal Engine scene to integrate the character into the snowy environment. With their scene complete, the artists enjoyed the final render and reflected on its creation.

A stunning, emotion-evoking scene, built in Omniverse USD Composer.

“NVIDIA Omniverse was the center of experimentation for this project — it allowed me to work with different software at the same time, increasing the production speed of the final character,” Squillace said. “I think the system provides enormous potential, especially if combined with the standard production workflow of a 3D asset.”

 

Scionti added that Omniverse is “a great software to collaborate with other people around the globe and interact in real time on the same project.”

3D artists Gianluca Squillace and Pasquale Scionti.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. Get started with Omniverse and learn more on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Powering the Future: Next Step in Siemens, NVIDIA Collaboration Showcased With FREYR Virtual Factory Demos

At the Hannover Messe trade show this week, Siemens unveiled a digital model of next-generation FREYR Battery factories that was developed using NVIDIA technology.

The model was created in part to highlight a strategic partnership announced Monday by Siemens and FREYR, with Siemens becoming FREYR’s preferred supplier of automation technology, enabling the Norway-based group to scale up production and maximize plant efficiency.

Built by Siemens, the demo uses the NVIDIA Omniverse development platform to provide an immersive experience of the FREYR factories and follows the joint vision for an industrial metaverse unveiled last year by Siemens and NVIDIA.

Displayed as part of an industrial metaverse experience in the Siemens booth during Hannover Messe 2023, the world’s largest industrial technology trade show, the demos incorporate operational data from the FREYR factory in Norway.

Highlighting the integration between Siemens Xcelerator and NVIDIA Omniverse, the demo features 3D representations of the infrastructure, plant, machinery, equipment, human ergonomics, safety information, robots, automated guided vehicles, and detailed product and production simulations.

These technologies will help FREYR to meet surging demand for high-density, cost-effective battery cells for stationary energy storage, electric mobility and marine applications.

Amid growing worldwide sustainability initiatives and the rapid electrification of transportation, the battery industry is projected to grow to $400 billion by 2030. Battery cell manufacturing is a critical step in the battery value chain, with manufacturers investing billions of dollars in new battery-cell plants to meet this new demand.

In the demo, Siemens shows a vision for how teams can harness comprehensive digital twins in the industrial metaverse using models of existing and future plants.

Within moments, FREYR can set up a meeting with potential investors or customers inside the digital FREYR plant in Norway, exploring the facility’s exterior before entering to view current production processes at work.

The striking interior flythrough instantly conveys the facility’s size and scale. The real-time, physically accurate simulation shows how machines and robots inside the factory move, and can even simulate complex processes. Sensors capturing machine information allow real-time performance visualization and ergonomic assessments.

The demo also demonstrates how the model can be used for production planning, highlighting how a plant manager can rapidly evaluate plant performance using a custom Siemens application, which provides at a glance an overview of the facility’s operation.

From there, the manager initiates a Microsoft Teams meeting with colleagues at a manufacturing “cell” — which places key people, machines and supplies in one strategic location — inside the virtual factory.

The team can then examine a robotic arm experiencing low-cycle-time issues, access machine performance data, identify specific cycle-time problems and view a live video stream with accompanying sensor data on machine performance.

This showcase at Hannover Messe is only the beginning, as more industries embrace and implement the industrial metaverse.

Learn more about NVIDIA Omniverse and our partnership with Siemens.

New GeForce RTX 4070 GPU Dramatically Accelerates Creativity

The GeForce RTX 4070 GPU, the latest in the 40 Series lineup, is available today starting at $599.

It comes backed by NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 popular creative apps; and exclusive NVIDIA Studio apps like Omniverse, Broadcast, Canvas and RTX Remix.

VLC media player added RTX Video Super Resolution to automatically upscale video to 4K resolution and beyond. GeForce RTX 40 and 30 Series GPU owners will immediately benefit from improved visual fidelity.

The RTX Remix runtime for remastering classic games is now available as open source on GitHub. It empowers the mod development community to extend Remix’s game compatibility and feature set.

Creators can install the new April NVIDIA Studio Driver, supporting these latest updates and more, available for download today.

Plus, see how NVIDIA’s Hauler piece came to life using the Omniverse USD Composer app, this week In the NVIDIA Studio.

A Creator’s Dream

The GeForce RTX 4070 joins NVIDIA’s lineup of GPUs, featuring 12GB of ultra-fast GDDR6X VRAM and the advancements of the NVIDIA Ada Lovelace architecture, primed to supercharge all content-creation workflows.

Introducing the GeForce RTX 4070.

Like other GeForce RTX 40 Series GPUs, the GeForce RTX 4070 is much more efficient than previous-generation products, using 23% less power than the GeForce RTX 3070 Ti. The GPU draws negligible power when idle or when used for web browsing or watching videos, thanks to power-consumption enhancements in the GeForce RTX 40 Series.

Tested on NVIDIA GeForce RTX 4070 and 3070 Ti GPUs. Omniverse testing measured the average FPS of a 2K viewport across five different scenes. Livestreaming quality measured as BD-SNR using NVENC 7th Gen with H.264 and NVENC 8th Gen with AV1, both using the P7 (max quality) preset. Stable Diffusion testing measured the number of images generated within a two-minute window on each GPU.

3D modelers rendering 4K scenes with the AI-powered DLSS 3 technology in NVIDIA Omniverse, a platform for creating and operating metaverse applications, can expect 2.8x faster performance than with the GeForce RTX 3070 Ti.

Broadcasters deploying the eighth-generation NVIDIA video encoder, NVENC, with support for AV1 will enjoy 40% better efficiency. Livestreams will appear as if the bitrate were increased by 40% — a big boost in image quality for popular broadcast apps like OBS Studio.

Plus, AI enthusiasts using the Stable Diffusion deep learning model can generate detailed images conditioned on text descriptions 1.4x faster with the GeForce RTX 4070.

RTX Video Super Resolution Comes to VLC Media Player

VLC media player, a popular, free and open-source multimedia player for Windows PCs, has added RTX Video Super Resolution to intelligently upscale video to 4K resolutions and beyond.

Owners of RTX 40 and 30 Series GPUs can now play local video on VLC with crisper edges and noticeably reduced compression artifacts.

RTX 30 and 40 Series GPU owners will immediately benefit from improved visual fidelity in VLC.

Access is quick and easy. Begin by downloading the full version of VLC. Once installed, open the NVIDIA Control Panel, navigate to Adjust video image settings and enable Super Resolution under RTX video enhancement.

Ready for RTX Remix Runtime

RTX Remix, the newest app in the NVIDIA Studio suite, is a revolutionary modding tool used to enhance classic DirectX 8 and 9 games with full ray tracing, AI-enhanced textures and other graphics improvements.

Remix is composed of two core components: a creator toolkit and a custom, open-source runtime.

The Remix creator toolkit, built on Omniverse and used to develop the hit remastered game Portal With RTX, allows modders to assign new assets and lights within their scene and use AI tools to rebuild the look of any asset. The RTX Remix creator toolkit will be available in early access soon.

The Remix runtime captures every element of a game scene, replacing assets at playback and injecting RTX technologies, such as path tracing, DLSS 3 and Reflex, into the game. Mod developers are already using the RTX Remix runtime from Portal With RTX to create experimental remastered scenes in numerous classic games.

The Remix runtime is ready to download on GitHub. Learn more about the runtime and its potential to transform game development.

The Remarkable Rocket Man

NVIDIA’s project Hauler launched to test just how much instancing could occur in a single, unified project.

Instancing, aka referencing, refers to the mass duplication of a model with minor variations, essential for populating massive 3D worlds with many detailed objects while maintaining realism. It’s useful for creating scenes, say, in a forest with hundreds of trees, or in this instance, in outer space with thousands of asteroids.
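Conceptually, an instanced scene stores one prototype mesh plus a list of lightweight per-instance transforms, rather than thousands of copies of the geometry. A minimal Python sketch of the idea (the field names are illustrative, not any particular tool’s API):

```python
import random

def make_instances(count, seed=0):
    """Build per-instance transforms that all reference one shared mesh."""
    rng = random.Random(seed)
    return [
        {
            "prototype": "asteroid_base",  # the single mesh stored in memory
            "position": tuple(rng.uniform(-500.0, 500.0) for _ in range(3)),
            "scale": rng.uniform(0.5, 4.0),
            "rotation_deg": rng.uniform(0.0, 360.0),
        }
        for _ in range(count)
    ]

# 5,000 varied asteroids, but only one prototype mesh to keep resident.
field = make_instances(5000)
```

USD formalizes this same pattern with references and point instancers, which is what lets a scene stay interactive with thousands of asteroids in view.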

 

Lead NVIDIA artist Rogelio Olguin, who has over 25 years of experience in the gaming industry, teamed up with colleagues to build a massive scene, rich with asteroids. Olguin loves a good challenge.

“I feel that one of the trappings of artists at times is that if you lose exploration and learning, you will just get stuck,” said the artist. “It’s important to explore and work on areas you’re not good at as an artist to improve.”    

Key concept art for Project “Hauler.”

The team gathered reference materials from movies, shows and anime before putting it all together in a shared board for the team to add and edit. The collaborative board included thumbnail sketch storyboards made in Adobe Photoshop and preliminary concept art.

Concept art started in USD Composer and completed in Adobe Photoshop.

The vast number of asteroids was procedurally created in SideFX Houdini, taking advantage of the RTX-accelerated Karma XPU renderer, which enables fast rendering of complex 3D models and simulations, all made possible by Olguin’s GeForce RTX GPU.

It takes incredible skill and patience to model a single asteroid, much less several hundred.

Next, Maxon’s ZBrush was used to sculpt the main asteroids. The Drop 3D function was especially helpful in increasing the density of local meshes while maintaining high-resolution details.

“Hauler” concept art features realistic shadows, textures and lighting.

Then came the animation phase in SideFX Houdini, building animations of the large and tiny cloud-based asteroids alike to ensure constant movement and rotation.

 

Olguin stressed the importance of his GeForce RTX 40 Series GPU. “Virtually every part of the process relied on RTX GPU technology, including path-traced rendering,” he said. “Without RTX acceleration, this piece would have been impossible.”

“Hauler” was brought together in Omniverse USD Composer.

With all components in place, the team worked in Omniverse USD Composer (formerly known as Create) to assemble complex, physically accurate simulations. They collaboratively edited 3D scenes in real time with ease.

“Having a beast of a machine has sped up what I can do. I would dread going back to a slower machine.” — Rogelio Olguin. 

“The primary benefit of using Omniverse USD Composer was being able to quickly see what I was doing, and place and compose our shots quickly, which made this so simple to work on,” said Olguin.

USD Composer works with the Universal Scene Description (USD) format, enabling artists to work in the 3D app of their choice, with support for Autodesk Maya, SideFX Houdini, Trimble SketchUp and more. It also removes pipeline bottlenecks, so there’s no need for artists to constantly download, reformat, upload and download again.

After all, using USD Composer to collaborate in 3D is just comet sense.

Stay tuned for more updates on Hauler. For more artistic inspiration from Olguin, check out his ArtStation page.

Lead NVIDIA artist Rogelio Olguin.

AV1 encoding support is coming to YouTube via OBS. GeForce RTX GPU owners broadcasting on YouTube can expect increased streaming quality. This feature is currently in beta and will be available as a general release in the near future.

A Gripping New Adventure: GeForce NOW Brings Titles From Bandai Namco Europe to the Cloud, Including ‘Little Nightmares’ Series

A new adventure with publisher Bandai Namco Europe kicks off this GFN Thursday. Some of its popular titles lead seven new games joining the cloud this week.

Plus, gamers can play them on more devices than ever, with native 4K streaming for GeForce NOW available on select LG Smart TVs.

Better Together

Little Nightmares 2 on GeForce NOW
Look forward to more Bandai Namco titles on GeForce NOW.

Bandai Namco is no stranger to delivering hit games. And GeForce NOW delivers high-performance cloud game streaming, making it the ultimate way to play the publisher’s high-quality titles.

“Our collaboration with NVIDIA will allow more players to enjoy Bandai Namco titles like ‘Little Nightmares’ across their devices, thanks to the power of the GeForce NOW cloud,” said Anthony Macare, senior director of digital business and customer experience at Bandai Namco Europe.

It’s the perfect time to jump into the Little Nightmares series, the critically acclaimed puzzle-platformer games set in a dark, nightmarish world. In Little Nightmares, confront childhood fears and help Six, a brave young girl, escape The Maw — a vast, mysterious vessel inhabited by corrupted souls looking for their next meal.

Then play as Mono, a boy with a paper bag on his head looking for answers, in Little Nightmares II. With Six as a guide, face a host of new threats on the way to discovering the dark secrets of The Signal Tower. The Enhanced Edition of Little Nightmares II adds an extra layer of eerie realism with RTX ON, which Ultimate and Priority members can select after launching the game from Steam.

Little Nightmares, Little Nightmares II, Get Even and 11-11 Memories Retold are the first of many Bandai Namco titles arriving in the cloud this week.

Life’s Good in 4K

4K LG Streaming GeForce NOW
Screen time never looked so sharp.

Head on over to gaming on the big screen with LG Smart TVs. The Gaming Shelf on LG 2020-2022 TVs and the Game Quick Card on 2023 TVs already prominently feature GeForce NOW titles right on the home page, making it effortless for members to discover and stream over 1,000 games directly from their LG Smart TVs.

To further enhance the user experience for Ultimate members, native 4K streaming through GeForce NOW is freshly available on select LG 2023 Smart TVs such as the flagship OLED B3, C3 and G3 models. Ultimate members in over 80 supported countries can jump right into their favorite PC titles in pixel-perfect 4K resolution with exclusive access to GeForce NOW’s fastest servers and eight-hour gaming sessions. Members won’t ever need to leave the spot in front of their LG TVs, except maybe to stock up on more snacks.

LG Smart TVs feature lifelike picture quality and high refresh rates, so members can enjoy stunning games like Dying Light 2 and Cyberpunk 2077 in 4K — with no need for a console or worries about hardware requirements. It’s one of the easiest ways to try out the new Ray Tracing: Overdrive Mode update for Cyberpunk 2077, which adds full ray tracing, or path tracing, further enhancing the visual fidelity of Night City’s neon-soaked streets and beyond. The update is available now, so upgrade to Ultimate today to check out all the streaming goodness.

Gimme Gimme More

New ways to stream, plus seven new games this week. Here’s the full list:

  • MORDHAU (New release on Epic Games Store, free on April 13)
  • DE-EXIT – Eternal Matters (New release on Steam, April 14)
  • 11-11 Memories Retold (Steam)
  • canVERSE (Steam)
  • Get Even (Steam)
  • Little Nightmares (Steam)
  • Little Nightmares II (Steam)

Finally, let the fun and games begin, starting by answering this week’s GFN Thursday question in the comments below, or on Facebook and Twitter.

The New Standard in Gaming: GeForce RTX Gamers Embrace Ray Tracing, DLSS in Record Numbers 

Creating a map requires masterful geographical knowledge, artistic skill and evolving technologies that have taken people from using hand-drawn sketches to satellite imagery. Just as important, changes need to be navigated in the way people consume maps, from paper charts to GPS navigation and interactive online charts.

The way people think about video games is changing, too. Today, 83% of GeForce RTX 40 Series desktop gamers with RTX-capable games enable ray tracing, and 79% turn on DLSS, showcasing the widespread adoption of these revolutionary technologies.1

They’re also widely adopted among prior RTX 30 Series and 20 Series owners; 56% and 43% turn on ray tracing, while 71% and 68% turn on DLSS, respectively.

At the core of this story: For any media — from movies and music to maps and magazines — technologies don’t just define the way people create content, they redefine how it’s consumed, ultimately becoming integral parts of its transformation.

That’s why it’s essential to consider both ray tracing and DLSS when evaluating an RTX 40 Series upgrade — which is how today’s gamers achieve the best graphics and performance.

A New Standard in Gaming

NVIDIA introduced neural rendering with its Turing architecture in the RTX 20 Series five years ago. Neural rendering combines two complementary breakthroughs — real-time ray tracing and DLSS — with the shading techniques long a staple of real-time graphics.

Powered by dedicated Tensor Cores for AI and RT Cores for ray tracing, RTX and DLSS have transformed the gaming industry. The initial implementations were a first critical step. Back in 2018, 37% of RTX 20 Series gamers embraced ray tracing, and 26% turned on DLSS.2

Fast forward to the present, and the GeForce RTX 40 Series — the third generation of RTX — has established ray tracing and DLSS as the new standard in gaming.

Ray Tracing Delivers Unparalleled Immersion

Ray tracing performance has significantly improved thanks to advancements like Shader Execution Reordering and the cutting-edge Opacity Micromap and Displaced Micro-Mesh engines.

This week, Cyberpunk 2077’s Ray Tracing: Overdrive Mode technology preview showcases the evolution of ray tracing into full ray tracing, offering enhanced real-time lighting, shadows and reflections.

These innovations enable even the most demanding games to simultaneously implement multiple ray-tracing effects or even full ray tracing, aka path tracing, for unparalleled realism and immersion.

DLSS Delivers Significant Performance Boosts

Meanwhile, DLSS Super Resolution has evolved to deliver significant performance boosts without compromising image quality. Ongoing neural network updates and model training on an NVIDIA supercomputer — with over an exaflop of AI processing power — continue to improve image fidelity and motion quality.

With DLSS 3, AI-powered Frame Generation creates new high-quality frames for smoother gameplay while maintaining great responsiveness through NVIDIA Reflex. AI can now generate seven out of every eight pixels through a combination of Super Resolution and Frame Generation.
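The seven-of-eight figure follows from simple arithmetic, assuming Super Resolution’s 4x upscaling factor (rendering one in four output pixels, e.g., 1080p upscaled to 4K) combined with Frame Generation producing every other displayed frame entirely with AI:

```python
# Of every two displayed frames, one is fully AI-generated and the
# other is rendered at a quarter of the output resolution, then upscaled.
rendered_fraction_per_frame = 1 / 4  # Super Resolution: 1 in 4 pixels rendered
frames_rendered = 1                  # of every...
frames_total = 2                     # ...two displayed frames

rendered_share = rendered_fraction_per_frame * frames_rendered / frames_total
ai_share = 1 - rendered_share
print(ai_share)  # 0.875, i.e. seven of every eight pixels
```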

A Potent Combination

This potent combination of third-generation ray tracing and DLSS has resulted in a staggering 16x leap in ray-tracing operations per pixel over the past five years. And where performance goes, PC gamers and game developers follow.

Adoption rates have soared, with over 400 RTX games and applications available and more on the horizon. DLSS 3, in particular, has seen a 7x faster adoption rate than its predecessor.

The popularity of ray tracing and DLSS outpaces other configurations, such as 4K, which is used by 28% of RTX 40 Series desktop gamers, and 144Hz or higher monitors, used by 62%.

As a result, judging a great game like Cyberpunk 2077 with DLSS and ray tracing off doesn’t demonstrate how a large and growing number of gamers will experience this game.

As DLSS and ray tracing continue to redefine gaming experiences, raster-only numbers no longer paint the picture gamers are seeing. This is only natural because, as PC gamers know, the full picture is looking better than ever.

1 Data from millions of RTX gamers who played RTX-capable games in February 2023 shows 79% of 40 Series gamers, 71% of 30 Series gamers and 68% of 20 Series gamers turn DLSS on. 83% of 40 Series gamers, 56% of 30 Series gamers and 43% of 20 Series gamers turn ray tracing on.
2 Data from RTX 20 Series gamers who played RTX-capable games in 2018.

How GlüxKind Created Ella, the AI-Powered Smart Stroller 

Imagine a stroller that can drive itself, help users up hills, brake on slopes and provide alerts of potential hazards. That’s what GlüxKind has done with Ella, an award-winning smart stroller that uses the NVIDIA Jetson edge AI and robotics platform to power its AI features.

Kevin Huang and Anne Hunger are the co-founders of GlüxKind, a Vancouver-based startup that aims to make parenting easier with AI. They’re also married and have a child together who inspired them to create Ella.

In this episode of the NVIDIA AI Podcast, host Noah Kravitz talks to Huang and Hunger about their journey from being consumers looking for a better stroller to becoming entrepreneurs who built one.

They discuss how NVIDIA Jetson enables Ella’s self-driving capabilities, object detection, voice control and other features that make it stand out from other strollers.

The pair also share their vision for the future of smart baby gear and how they hope to improve the lives of parents and caregivers around the world.

Additional resources:

You Might Also Like

Sequoia Capital’s Pat Grady and Sonya Huang on Generative AI
Pat Grady and Sonya Huang, partners at Sequoia Capital, discuss their recent essay, “Generative AI: A Creative New World.” The authors delve into the potential of generative AI to enable new forms of creativity and expression, as well as the challenges and ethical considerations of this technology. They also offer insights into the future of generative AI.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art
Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe, Review and Follow NVIDIA AI on Twitter
If you enjoyed this episode, subscribe to the NVIDIA AI Podcast on your favorite podcast platform and leave a rating and review. Follow @NVIDIAAI on Twitter or email the AI Podcast team to get in touch.

Gaming on the Go: GeForce NOW Gives Members More Ways to Play

This GFN Thursday explores the many ways GeForce NOW members can play their favorite PC games across the devices they know and love.

Plus, seven new games join the GeForce NOW library this week.

More Ways to Play

Touch games on GeForce NOW
GeForce NOW makes gaming on the go good to go.

GeForce NOW is the ultimate platform for gamers who want to play across more devices than their PC. Thanks to the power of the cloud, game progress can be paused and picked up across any device, whether crashing on the couch with a cell phone or traveling with a tablet.

Stream GeForce NOW on mobile without a controller using enhanced touch controls enabled for games like Genshin Impact, the popular free-to-play, open-world, action role-playing game from HoYoverse. Members get access to updates as they release, including the upcoming version 3.6, “A Parade of Providence.” Available to stream next week, it brings a new event and characters sure to delight Genshin Impact fans.

Or stream on the go with GeForce NOW-recommended gamepads — including the Backbone One and the Razer Kishi — which work with Android and iOS devices to further enhance the cloud gaming mobile experience with added comfort. These devices are perfect for extended gaming sessions of up to six hours for Priority members and up to eight hours for Ultimate members.

And with the ability to stream from a high-powered RTX gaming rig in the cloud, GeForce NOW is the only way to play graphics-intensive games like Cyberpunk 2077 and Marvel’s Guardians of the Galaxy on mobile at up to 120 frames per second with ultra-low latency for Ultimate members.

So whether on a tablet, TV, Mac, Chromebook or phone, GeForce NOW members are covered with high-performance cloud streaming. Level up to an Ultimate or Priority membership today to experience all the benefits of PC gaming on the go.

So Fresh

Ravenswatch on GeForce NOW
Band together with fallen heroes of old folk tales and legends to take on the Nightmare.

As always, members can experience new games immediately from the cloud this week, without worrying about download times or system specs. Titles including Ravenswatch, Meet Your Maker, Road 96: Mile 0, TerraScape and Curse of the Sea Rats are all gamepad compatible for gaming on the go.

Plus, popular sci-fi MMORPG Tower of Fantasy brings a boatload of new content, including an all-new map and underwater request missions where players can explore everything from the upper levels of the Grand Sea Island to the deep waters of Dragon Breath Volcano.

That update comes on top of the seven games available this week:

  • Road 96: Mile 0 (New release on Steam)
  • Meet Your Maker (New release on Steam)
  • TerraScape (New release on Steam)
  • Curse of the Sea Rats (New release on Steam, April 6)
  • Ravenswatch (New release on Steam, April 6)
  • Supplice (New release on Steam, April 6)
  • Teardown (Steam)

Free members can now claim their Marvel’s Midnight Suns reward. Check the rewards portal to claim Captain Marvel’s Medieval Marvel suit by Saturday, May 6.

Finally, we’ve got our question of the week to wrap up this GFN Thursday. Let us know what device keeps you connected to the cloud in the comments below, on Twitter or Facebook.

NVIDIA Takes Inference to New Heights Across MLPerf Tests

MLPerf remains the definitive measurement for AI performance as an independent, third-party benchmark. NVIDIA’s AI platform has consistently shown leadership across both training and inference since the inception of MLPerf, including the MLPerf Inference 3.0 benchmarks released today.

“Three years ago when we introduced A100, the AI world was dominated by computer vision. Generative AI has arrived,” said NVIDIA founder and CEO Jensen Huang.

“This is exactly why we built Hopper, specifically optimized for GPT with the Transformer Engine. Today’s MLPerf 3.0 highlights Hopper delivering 4x more performance than A100.

“The next level of Generative AI requires new AI infrastructure to train large language models with great energy efficiency. Customers are ramping Hopper at scale, building AI infrastructure with tens of thousands of Hopper GPUs connected by NVIDIA NVLink and InfiniBand.

“The industry is working hard on new advances in safe and trustworthy Generative AI. Hopper is enabling this essential work,” he said.

The latest MLPerf results show NVIDIA taking AI inference to new levels of performance and efficiency from the cloud to the edge.

Specifically, NVIDIA H100 Tensor Core GPUs running in DGX H100 systems delivered the highest performance in every test of AI inference, the job of running neural networks in production. Thanks to software optimizations, the GPUs delivered up to 54% performance gains from their debut in September.

In healthcare, H100 GPUs delivered a 31% performance increase since September on 3D-UNet, the MLPerf benchmark for medical imaging.

H100 GPU AI inference performance on MLPerf workloads

Powered by its Transformer Engine, the H100 GPU, based on the Hopper architecture, excelled on BERT, a transformer-based large language model that paved the way for today’s broad use of generative AI.

Generative AI lets users quickly create text, images, 3D models and more. It’s a capability companies from startups to cloud service providers are rapidly adopting to enable new business models and accelerate existing ones.

Hundreds of millions of people are now using generative AI tools like ChatGPT — also a transformer model — expecting instant responses.

At this iPhone moment of AI, performance on inference is vital. Deep learning is now being deployed nearly everywhere, driving an insatiable need for inference performance from factory floors to online recommendation systems.

L4 GPUs Speed Out of the Gate

NVIDIA L4 Tensor Core GPUs made their debut in the MLPerf tests at over 3x the speed of prior-generation T4 GPUs. Packaged in a low-profile form factor, these accelerators are designed to deliver high throughput and low latency in almost any server.

L4 GPUs ran all MLPerf workloads. Thanks to their support for the key FP8 format, their results were particularly stunning on the performance-hungry BERT model.

NVIDIA L4 GPU AI inference performance on MLPerf workloads

In addition to stellar AI performance, L4 GPUs deliver up to 10x faster image decode, up to 3.2x faster video processing and over 4x faster graphics and real-time rendering performance.

Announced two weeks ago at GTC, these accelerators are already available from major systems makers and cloud service providers. L4 GPUs are the latest addition to NVIDIA’s portfolio of AI inference platforms launched at GTC.

Software, Networks Shine in System Test

NVIDIA’s full-stack AI platform showed its leadership in a new MLPerf test.

The so-called network-division benchmark streams data to a remote inference server. It reflects the popular scenario of enterprise users running AI jobs in the cloud with data stored behind corporate firewalls.

On BERT, remote NVIDIA DGX A100 systems delivered up to 96% of their maximum local performance, slowed in part because they needed to wait for CPUs to complete some tasks. On the ResNet-50 test for computer vision, handled solely by GPUs, they hit the full 100%.

Both results are thanks, in large part, to NVIDIA Quantum InfiniBand networking, NVIDIA ConnectX SmartNICs and software such as NVIDIA GPUDirect.

Orin Shows 3.2x Gains at the Edge

Separately, the NVIDIA Jetson AGX Orin system-on-module delivered gains of up to 63% in energy efficiency and 81% in performance compared with its results a year ago. Jetson AGX Orin supplies inference when AI is needed in confined spaces at low power levels, including on systems powered by batteries.

Jetson AGX Orin AI inference performance on MLPerf benchmarks

For applications needing even smaller modules drawing less power, the Jetson Orin NX 16G shined in its debut in the benchmarks. It delivered up to 3.2x the performance of the prior-generation Jetson Xavier NX processor.

A Broad NVIDIA AI Ecosystem

The MLPerf results show NVIDIA AI is backed by the industry’s broadest ecosystem in machine learning.

Ten companies submitted results on the NVIDIA platform in this round. They came from the Microsoft Azure cloud service and system makers including ASUS, Dell Technologies, GIGABYTE, H3C, Lenovo, Nettrix, Supermicro and xFusion.

Their work shows users can get great performance with NVIDIA AI both in the cloud and in servers running in their own data centers.

NVIDIA partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI platforms and vendors. Results in the latest round demonstrate that the performance they deliver today will grow with the NVIDIA platform.

Users Need Versatile Performance

NVIDIA AI is the only platform to run all MLPerf inference workloads and scenarios in data center and edge computing. Its versatile performance and efficiency make users the real winners.

Real-world applications typically employ many neural networks of different kinds that often need to deliver answers in real time.

For example, an AI application may need to understand a user’s spoken request, classify an image, make a recommendation and then deliver a response as a spoken message in a human-sounding voice. Each step requires a different type of AI model.
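Such a multi-model pipeline might be chained together as in the rough sketch below. All function names here are hypothetical placeholders standing in for separate neural networks (speech recognition, image classification, recommendation, text-to-speech), not any real NVIDIA API:

```python
# Hypothetical sketch of a multi-model inference pipeline: each stage
# would normally invoke a different neural network. Stubs stand in for
# the real models so the control flow is visible.

def transcribe_speech(audio: bytes) -> str:
    # Stand-in for an automatic speech recognition model.
    return "show me hiking boots"

def classify_image(image: bytes) -> str:
    # Stand-in for an image-classification model.
    return "trail_photo"

def recommend(query: str, context: str) -> list[str]:
    # Stand-in for a recommender model.
    return [f"{query} suited to {context}"]

def synthesize_speech(text: str) -> bytes:
    # Stand-in for a text-to-speech model.
    return text.encode()

def handle_request(audio: bytes, image: bytes) -> bytes:
    query = transcribe_speech(audio)            # step 1: speech -> text
    scene = classify_image(image)               # step 2: image -> label
    picks = recommend(query, scene)             # step 3: ranked suggestions
    return synthesize_speech("; ".join(picks))  # step 4: text -> audio
```

Because every stage is a different model, end-to-end latency is bounded by the sum of the stages, which is why per-model inference performance matters so much in real time.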

The MLPerf benchmarks cover these and other popular AI workloads. That’s why the tests ensure IT decision makers will get performance that’s dependable and flexible to deploy.

Users can rely on MLPerf results to make informed buying decisions, because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Arm, Baidu, Facebook AI, Google, Harvard, Intel, Microsoft, Stanford and the University of Toronto.

Software You Can Use

The software layer of the NVIDIA AI platform, NVIDIA AI Enterprise, ensures users get optimized performance from their infrastructure investments as well as the enterprise-grade support, security and reliability required to run AI in the corporate data center.

All the software used for these tests is available from the MLPerf repository, so anyone can get these world-class results.

Optimizations are continuously folded into containers available on NGC, NVIDIA’s catalog for GPU-accelerated software. The catalog hosts NVIDIA TensorRT, used by every submission in this round to optimize AI inference.

Read this technical blog for a deeper dive into the optimizations fueling NVIDIA’s MLPerf performance and efficiency.

NVIDIA Honors Partners Helping Industries Harness AI to Transform Business

NVIDIA today recognized a dozen partners in the Americas for their work enabling customers to build and deploy AI applications across a broad range of industries.

NVIDIA Partner Network (NPN) Americas Partner of the Year awards were given out to companies in 13 categories covering AI, consulting, distribution, education, healthcare, integration, networking, the public sector, rising star, service delivery, software and the Canadian market. A new award category created this year recognizes growing AI adoption in retail, as leaders begin to introduce new AI-powered services addressing customer service, loss prevention and restocking analytics.

“NVIDIA’s commitment to driving innovation in AI has created new opportunities for partners to help customers leverage cutting-edge technology to reduce costs, grow opportunities and solve business challenges,” said Rob Enderle, president and principal analyst at the Enderle Group. “The winners of the 2023 NPN awards reflect a diverse group of AI business experts that have showcased deep knowledge in delivering transformative solutions to customers across a range of industries.”

The 2023 NPN award winners for the Americas are:

  • Arrow Electronics: Distribution Partner of the Year. Recognized for providing end-to-end NVIDIA AI technologies across a variety of industries, such as manufacturing, retail, healthcare and robotics, to help organizations drive accelerated computing and robotics strategies via on-prem, hybrid cloud and intelligent edge solutions, and through Arrow’s Autonomous Machines Center of Excellence.
  • Cambridge Computer: Higher Education Partner of the Year. Recognized for the third consecutive year for its continued focus on providing NVIDIA AI solutions to the education, life sciences and research computing sectors.
  • CDW: Software Partner of the Year. Recognized for deploying NVIDIA AI and visualization solutions to customers from a broad range of industries and applying deep industry expertise for end-to-end customer support.
  • CDW Canada: Canadian Partner of the Year. Recognized for providing IT solutions that enable the nation’s leading vendors to offer customized solutions with NVIDIA technology, meeting the needs of each client.
  • Deloitte: Consulting Partner of the Year. Recognized for the third consecutive year for creating new AI markets for clients by expanding AI investments in solutions developed with NVIDIA across enterprise AI, as well as expanding into new offerings with generative AI and NVIDIA DGX Cloud.
  • FedData Technology Solutions: Rising Star Partner of the Year. Recognized for NVIDIA DGX-based design wins with key federal customers and emerging work with the NVIDIA Omniverse platform for building and operating metaverse applications.
  • Insight: Retail Partner of the Year. Recognized for its deep understanding of the industry, ecosystem partnerships and the ability to orchestrate best-in-class solutions to bring real-time speed and predictability to retailers, enabling intelligent stores, intelligent quick-service restaurants, intelligent supply chain and omni-channel management.
  • Lambda: Solution Integration Partner of the Year. Recognized for the third consecutive year for its commitment to providing end-to-end NVIDIA solutions, both on premises and in the cloud, across industries including higher education and research, the federal and public sectors, and healthcare and life sciences.
  • Mark III: Healthcare Partner of the Year. Recognized for its unique team and deep understanding of the NVIDIA portfolio, which provides academic medical centers, research institutions, healthcare systems and life sciences organizations with NVIDIA infrastructure, software and cloud technologies to build out AI, HPC and simulation Centers of Excellence.
  • Microway: Public Sector Partner of the Year. Recognized for its technical depth and engineering focus on servicing the public sector using technologies across the NVIDIA portfolio, including high performance computing and other specializations.
  • Quantiphi: Service Delivery Partner of the Year. Recognized for the second consecutive year for its commitment to driving adoption of NVIDIA products in areas like generative AI services with customized large language models, digital avatars, edge computing, medical imaging and data science, as well as its expertise in helping customers build and deploy AI solutions at scale.
  • World Wide Technology: AI Solution Provider of the Year. Recognized for its leadership in driving adoption of the NVIDIA portfolio of AI and accelerated computing solutions, as well as its continued investments in AI infrastructure for large language models, computer vision, Omniverse-based digital twins, and customer testing and labs in the WWT Advanced Technology Center.
  • World Wide Technology: Networking Partner of the Year. Recognized for its expertise driving NVIDIA high-performance networking solutions to support accelerated computing environments across multiple industries and AI solutions.

This year’s awards arrive as AI adoption is rapidly expanding across industries, unlocking new opportunities and accelerating discovery in healthcare, finance, business services and more. As AI models become more complex, the 2023 NPN Award winners are expert partners that can help enterprises develop and deploy AI in production using the infrastructure that best aligns with their operations.

Learn how to join the NPN, or find your local NPN partner.

Video Editor Patrick Stirling Invents Custom Effect for DaVinci Resolve Software

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

AI-powered technology in creative apps, once considered nice to have, is quickly becoming essential for aspiring and experienced content creators.

Video editor Patrick Stirling used the Magic Mask feature in Blackmagic Design’s DaVinci Resolve software to create a custom effect that creates textured animations of people, this week In the NVIDIA Studio.

“I wanted to use ‘Magic Mask’ to replace subjects with textured, simplified, cut-out versions of themselves,” said the artist. “This style is reminiscent of construction-paper creations that viewers might have played with in childhood, keeping the energy of a scene while also pulling attention away from any specific features of the subject.”

Stirling’s effect creates textured, animated characters.

Stirling’s original attempts to implement this effect were cut short due to the limitations of his six-year-old system. So Stirling built his first custom PC — equipped with a GeForce RTX 4080 GPU — to tackle the challenge. The difference was night and day, he said.

Stirling’s effect on full display in DaVinci Resolve.

“I was able to find and maintain a creative flow so much more easily when I didn’t feel like I was constantly running into a wall and waiting for my system to catch up,” said Stirling.

“While the raw power of RTX GPUs is incredible, the work NVIDIA does to improve working in DaVinci Resolve, specifically, is really impressive. It’s extremely reassuring to know that I have the power to build complex effects.” — Patrick Stirling

The AI-powered Magic Mask feature, which allows quick selection of objects and people in a scene, was accelerated by his RTX 4080 GPU, delivering up to a 2x increase in AI performance over the previous generation. “The GPU also provides the power the DaVinci Neural Engine needs for some of these really cool effects,” said Stirling.

Stirling opened a short clip within the RTX GPU-accelerated Fusion page in DaVinci Resolve, a node-based workflow with hundreds of 2D and 3D tools. Nodes are popular as they make video editing a completely procedural process — allowing for non-linear, non-destructive workflows.

He viewed edits in real time using two windows opened side by side, with original footage on the left and node modifications on the right.

Original footage and node-based modifications, side by side.

Stirling then drew blue lines to apply Magic Mask to each surface on the subject that he wanted to layer. As its name suggests, Magic Mask works like magic, but it’s not perfect. When the effect masked more than the extended jacket layer, Stirling drew a secondary red line to designate what not to capture in that area.

The suit-jacket layer is masked as intended.

He applied similar techniques to the dress shirt, hands, beard, hair and facial skin. The artist then added generic colored backgrounds with Background nodes on each layer to complete his 2D character.

Textures provide contrast to the scene.

Stirling used Merge nodes to combine background and foreground images. He deployed the Fast Noise node to create two types of textures for the 2D man and the real-life footage, providing more contrast for the visual.
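The Merge-node pattern can be illustrated in miniature. This toy graph is not DaVinci Resolve’s actual scripting API (all class names here are hypothetical), but it shows why node-based compositing is procedural and non-destructive: every render recomputes the result from the untouched source nodes, so changing one parameter never damages the inputs.

```python
# Toy node graph illustrating procedural compositing: each node computes
# its output on demand from its inputs, so edits never destroy sources.

class Background:
    """Solid-color layer, like Resolve's Background node."""
    def __init__(self, color):
        self.color = color
    def render(self):
        return [self.color] * 4  # a 4-"pixel" solid strip

class FastNoise:
    """Deterministic stand-in for a procedural noise texture."""
    def __init__(self, seed):
        self.seed = seed
    def render(self):
        return [(self.seed * (i + 1)) % 256 for i in range(4)]

class Merge:
    """Blends a foreground node over a background node."""
    def __init__(self, background, foreground, mix=0.5):
        self.background, self.foreground, self.mix = background, foreground, mix
    def render(self):
        # Pull fresh pixels from both upstream nodes every evaluation.
        return [int(b * (1 - self.mix) + f * self.mix)
                for b, f in zip(self.background.render(),
                                self.foreground.render())]

graph = Merge(Background(100), FastNoise(seed=37), mix=0.5)
result = graph.render()  # recomputed from scratch on every call
```

Re-rendering the graph always yields the same result from the same inputs, and tweaking `mix` or the noise `seed` simply produces a new evaluation, which is the non-destructive behavior the Fusion page is built around.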

Organizing nodes is important to this creative workflow.

Stirling then added a color corrector to tweak saturation, his RTX GPU accelerating the process. He completed his video editing by combining the Magic Mask effect and all remaining nodes — Background, Merge and Fast Noise.

“DaVinci Resolve and the GeForce RTX 4080 feel like a perfect fit,” said Stirling.

When it’s time to wrap up the project, Stirling can deploy the RTX 4080 GPU’s dual AV1 video encoders, cutting export times in half.

Stirling encourages aspiring content creators to “stay curious” and “not ignore the value of connecting with other creative people.”

“Regularly being around people doing the same kind of work as you will constantly expose new methods and approaches for your own creative projects,” he said.

Video editor Patrick Stirling.

Check out Stirling’s YouTube channel for DaVinci Resolve tutorials.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.
