Concept Artist Pablo Muñoz Gómez Enlivens Fantasy Creatures ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Concept artist Pablo Muñoz Gómez dives In the NVIDIA Studio this week, showcasing artwork that depicts a fantastical myth.

Gómez, a creator based in Australia, is as passionate about helping digital artists, by teaching 3D classes and running the ZBrush Guides website, as he is about his creative specialties: concept and character artistry.

“For me, everything starts with a story,” Muñoz Gómez said.

His 3D Forest Creature is built around a fascinating myth. “The story of the forest creature is rather simple … a very small fantasy character that lives in the forest and spends his life balancing rocks. The larger the stones he manages to balance and stack on top of each other, the larger he’ll grow and the more invisible he’ll become. Eventually, he’ll reach a colossal size and disappear.”

3D Forest Creature sketch by Pablo Muñoz Gómez.

Gómez begins his journey with a preliminary sketch in Krita, a 2D app. The idea is to figure out how many 3D assets will be needed while adding a little color as a reference for the palette later on.

Next, Gómez moves to ZBrush, where he uses custom brushes to sculpt basic models for the creature, rocks and plants. It’s the first of multiple leaps in his 2D-to-3D workflow, detailed in this two-part 3D Forest Creature tutorial.

 

Gómez then turns to Adobe Substance 3D Painter to apply various colors and materials directly to his 3D models. Here, the benefits of NVIDIA RTX acceleration shine. NVIDIA Iray technology in the viewport enables Gómez to edit in real time and use ray-traced baking for faster rendering speeds — all accelerated by his GeForce RTX 3090 GPU.

Building and applying custom photorealistic textures in Adobe Substance 3D Sampler.


Seeking further customization for his background, Gómez downloads and imports a grass asset from the Substance 3D asset library into Substance 3D Sampler, adjusting a few sliders to create a photorealistic material. RTX-exclusive interactive ray tracing lets Gómez apply realistic wear-and-tear effects in real time, powered by his GPU.

3D workflows can be incredibly demanding. As Gómez notes, the right GPU allows him to focus on content creation. “Since I switched to the GeForce RTX 3090, I’m simply able to spend more time in the ‘creative stages’ and testing things to refine my concept when I don’t have to wait for a render or worry about optimizing a scene so I can see it in real time,” he said.

Getting close to exporting final renders.

Gómez sets up his scene in Marmoset Toolbag 4, where a critical step is switching the denoiser from CPU to GPU. Doing so unlocks real-time ray tracing and smooth visuals in the viewport while he works; the setting is found under Lighting, then Ray Tracing, in the main menu.

With the scene in a good place after some edits, Gómez generates his renders.

 

He performs final compositing, lighting and color correction in Adobe Photoshop. With the addition of a new background, the scene is complete.

Thankfully, the 3D Forest Creature hasn’t disappeared … yet!

More 3D to Explore

Gómez has created several tutorials demonstrating 3D content creation techniques to aspiring artists. Check out this one on how to build a 3D scene from scratch.

Part one of the Studio Session, Creating Stunning 3D Crystals, offers an inside look at sketching and concepting in Krita and modeling in ZBrush, while part two focuses on baking in Adobe Substance 3D Painter and texturing in Marmoset Toolbag 4.

Low-polygon models are great for 3D workflows on hardware that can’t handle high poly counts. Gómez’s Studio Session, Creating a 3D Low-Poly Floating Island, demonstrates how to build low-poly models like his Floating Island in ZBrush and touch them up in Adobe Photoshop.

However, with the graphics horsepower and AI benefits of NVIDIA RTX and GeForce RTX GPUs, 3D artists can work with high-polygon models quickly and easily.

Learning how to create in 3D takes ingenuity, notes Gómez: “You become more resourceful making your tools work for you in the way you want, even if that means finding a better tool to solve a particular process.” But with enough practice, as seen from the variety of Gómez’s portfolio, the results can be stunning.

Concept artist Pablo Muñoz Gómez.

Gómez is the founder of ZBrushGuides and the 3DConceptArtist academy. View his courses, tutorials, projects and more on his website.

Follow NVIDIA Studio on Facebook, Twitter and Instagram. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

The post Concept Artist Pablo Muñoz Gómez Enlivens Fantasy Creatures ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.


Broom, Broom: WeRide Revs Up Self-Driving Street Sweepers Powered by NVIDIA

When it comes to safety, efficiency and sustainability, autonomous vehicles are delivering a clean sweep.

Autonomous vehicle company and NVIDIA Inception member WeRide this month began a public road pilot of its Robo Street Sweepers. The vehicles, designed to perform round-the-clock cleaning services, are built on high-performance, energy-efficient NVIDIA compute.

The fleet of 50 vehicles is sweeping, sprinkling and spraying disinfectant in Guangzhou, China, all without a human driver at the wheel. The robo-sweepers run on a cloud-based fleet management platform that automatically schedules and dispatches vehicles using real-time information on daily traffic and routes.

Street sweeping is a critical municipal service. In addition to keeping dirt and debris off the road, it helps ensure trash and hazardous materials don’t flow into storm drains and pollute local waterways.

As cities grow, applying autonomous driving technology to street cleaning vehicles enables these fleets to run more efficiently and maintain cleaner and healthier public spaces.

Sweep Smarts

While street sweepers typically operate at lower speeds and in more constrained environments than robotaxis, trucks or other autonomous vehicles, they still require robust AI compute to safely operate.

Street cleaning vehicles must be able to drive in dense urban traffic, as well as in low-visibility conditions, such as nighttime and early morning. In addition, they have to detect and classify objects in the road as they clean.

To do so without a human at the wheel, these vehicles must process massive amounts of data from onboard sensors in real time. Redundant and diverse deep neural networks (DNNs) must work together to accurately perceive relevant information from this sensor data.

NVIDIA’s high-performance, software-defined AI compute platform is designed to handle the large number of applications and DNNs that run simultaneously in autonomous vehicles, while meeting systemic safety standards.

A Model Lineup

The WeRide Robo Street Sweepers are the latest in the company’s stable of autonomous vehicles and its second purpose-built and mass-produced self-driving vehicle model.

WeRide has been developing autonomous technology on NVIDIA since 2017, building robotaxis, mini robobuses and robovans with the goal of accelerating intelligent urban transportation.

Its robotaxis have already provided more than 350,000 rides for 180,000 passengers since 2019, while its mini robobuses began pilot operations to the public in January.

The company is currently building its next-generation self-driving solutions on NVIDIA DRIVE Orin, using the high-performance AI compute platform to commercialize its autonomous lineup.

And with the addition of these latest vehicles, WeRide’s fleets are set to make a clean sweep.


Urban Jungle: AI-Generated Endangered Species Mix With Times Square’s Nightlife

Bengal tigers, red pandas and mountain gorillas are among the world’s most familiar endangered species, but tens of thousands of others — like the Karpathos frog, the Perote deer mouse or the Mekong giant catfish — are largely unknown.

Typically perceived as lacking star quality, these species are now roaming massive billboards in one of the world’s busiest destinations. An AI-powered initiative is spotlighting lesser-known endangered creatures on Times Square billboards this month, nightly in the few minutes before midnight across nearly 100 screens.

The project, dubbed Critically Extant, uses AI to illustrate the limited public data available on critically endangered flora and fauna. It’s the first deep learning art display in the Times Square Arts program’s decade-long history.

“A neural network can only create images based on what it’s seen in training data, and there’s very little information online about some of these critically endangered species,” said artist Sofia Crespo, who created the work with support from Meta Open Arts, using NVIDIA GPUs for AI training and inference. “This project is ultimately about representation — for us to recognize that we are biased towards some species versus others.”

Artwork courtesy of Sofia Crespo

These biases in representation have implications on the effort and funding given to save different species. Research has shown that a small subset of endangered species that are considered charismatic, cute or marketable receive more funding than they need, while most others receive little to no support.

When endangered species of any size — such as insects, fungi or plants — are left without conservation resources, they’re more vulnerable to extinction, contributing to a severe loss of biodiversity that makes ecosystems and food webs less resilient.

Intentionally Imperfect Portraits

The AI model, created by Crespo and collaborator Feileacan McCormick, was trained on a paired dataset of nearly 3 million nature images and text describing around 10,000 species. But this still wasn’t enough data to create true-to-life portraits of the less popular endangered species.

So the deep learning model, a generative adversarial network, does the best it can, guessing the features of a given endangered species based on related species. Due to the limited source data, many of the AI-generated creatures have a different color or body shape than their real-life counterparts — and that’s the point.
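The fallback Crespo describes can be pictured with a toy sketch. This is not her actual model; the species embeddings below are entirely hypothetical values, and a nearest-neighbor lookup stands in for the way a generator trained on scarce data leans on whichever well-documented relatives sit closest in its learned representation.

```python
# Toy illustration (hypothetical data, not Crespo's pipeline): with almost no
# images of a rare species, generation is steered by the nearest
# well-documented species in embedding space.
import math

# Made-up 2D "embeddings" for well-documented species.
KNOWN_SPECIES = {
    "bengal tiger": (0.9, 0.1),
    "red panda": (0.2, 0.8),
    "mountain gorilla": (0.5, 0.5),
}

def nearest_known(query_embedding):
    """Return the well-documented species closest to the query embedding."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(KNOWN_SPECIES, key=lambda name: dist(KNOWN_SPECIES[name], query_embedding))

# A rare species with few training images lands near the tiger cluster, so the
# generated portrait inherits tiger-like features even if the real animal's
# color and body shape differ -- which is the artwork's point.
rare_species_embedding = (0.85, 0.15)  # hypothetical embedding for a rare species
print(nearest_known(rare_species_embedding))  # prints "bengal tiger"
```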

“Part of the project was relying on the open-source data that’s available right now,” said Crespo. “If that’s all the data we have, and species go extinct, what kind of knowledge and imagination do we have about the world that was lost?”

Critically Extant features more than 30 species, including amphibians, birds, fish, flowering plants, fungi and insects. After feeding species names to the generative AI model, Crespo animated and processed the synthetic images further to create the final moving portraits.

 

The AI model behind this project was trained using a cluster of NVIDIA Tensor Core GPUs. Crespo used a desktop NVIDIA RTX A6000 GPU for what she called “lightning-quick” inference.

AI in the Public Square

Critically Extant’s Times Square display premiered on May 1 and will be shown nightly through the end of the month.

Image by Michael Hull/Times Square Arts

The three-minute display features all 30+ specimens in a randomized arrangement that shifts every 30 seconds or so. Crespo said that using the NVIDIA RTX A6000 GPU was essential to generate the high-resolution images needed to span dozens of digital billboards.

Crespo and McCormick, who run an ecology and AI-focused studio, also enhanced the art display with an AI-generated soundtrack trained on a diverse range of animal sounds.

“The idea is to show diversity with many creatures, and overwhelm the audience with creatures that look very different from one another,” Crespo said.

The project began as an exhibition on Instagram, with the goal of adding representation of critically endangered species to social media conversations. At Times Square, the work will reach an audience of hundreds of thousands more.

“Crespo’s work brings the natural world directly into the center of the very urban environment at odds with these at-risk species, and nods to the human changes that will be required to save them,” reads the Times Square Arts post.

Crespo and McCormick have showcased their work at NVIDIA GTC, most recently an AI-generated fragment of coral reef titled Beneath the Neural Waves.

Learn more about AI artwork by Crespo and McCormick on the NVIDIA AI Art Gallery, and catch Critically Extant in Times Square through May 31.

Times Square images courtesy of Times Square Arts, photographed by Michael Hull. Artwork by Sofia Crespo. 


GFN Thursday Gets Groovy As ‘Evil Dead: The Game’ Marks 1,300 Games on GeForce NOW

Good. Bad. You’re the Guy With the Gun this GFN Thursday.

Get ready for some horrifyingly good fun with Evil Dead: The Game streaming on GeForce NOW tomorrow at release. It’s the 1,300th game in the GeForce NOW library, fittingly arriving on Friday the 13th.

And it’s part of eight total games joining the GeForce NOW library this week.

Hail to the King, Baby

Step into the shoes of Ash Williams and friends from the iconic Evil Dead franchise in Evil Dead: The Game (Epic Games Store), streaming on GeForce NOW at release tomorrow.

Work together in a game loaded with over-the-top co-op and PvP action across nearly all your devices. Grab your boomsticks, chainsaws and cleavers to fight against the armies of darkness, even on a Mac. Or take control of the Kandarian Demon to hunt the heroes by possessing Deadites, the environment and even the survivors themselves with a mobile phone.

For RTX 3080 members, the horror comes to life with realistic visuals and a physics-based gore system, enhanced by NVIDIA DLSS – the groundbreaking AI rendering technology that increases graphics performance by boosting frame rates and generating beautiful, sharp images.

Plus, everything is better in 4K. Whether you’re tearing a Deadite in two with Ash’s chainsaw hand or flying through the map as the Kandarian Demon, RTX 3080 members playing from the PC and Mac apps can bring the bloodshed in all its glory, streaming at up to 4K resolution and 60 frames per second.

There’s No Time Like Playtime

Become a ruler, command knights and monsters, and outplay your enemies in Brigandine The Legend of Runersia.

Not a spooky fan? That’s okay. There’s fun for everyone with eight new games streaming this week:

  • Achilles: Legends Untold (New release on Steam)
  • Brigandine The Legend of Runersia (New release on Steam)
  • Neptunia x SENRAN KAGURA: Ninja Wars (New release on Steam)
  • Songs of Conquest (New release on Steam and Epic Games Store)
  • Cepheus Protocol Anthology (New release on Steam, May 13)
  • Evil Dead: The Game (New release on Epic Games Store, May 13)
  • Pogostuck: Rage With Your Friends (Steam)
  • Yet Another Zombie Defense HD (Steam)

With the armies of darkness upon us this weekend, we’ve got a question for you. Let us know how your chances are looking on Twitter or in the comments below.


Creator Karen X. Cheng Brings Keen AI for Design ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.

The future of content creation is in AI. This week In the NVIDIA Studio, discover how AI-assisted painting is bringing a new level of inspiration to the next generation of artists.

San Francisco-based creator Karen X. Cheng is on the forefront of using AI to design amazing visuals. Her innovative work brings eye-catching effects to social media videos for brands like Adobe, Beats by Dre and Instagram.

Cheng’s work bridges the gap between emerging technologies and creative imagery, and her inspiration can come from anywhere. “I usually get ideas when I’m observing things — whether that’s taking a walk or scrolling in my feed and seeing something cool,” she said. “Then, I’ll start jotting down ideas and sketching them out. I’ve got a messy notebook full of ideas.”

When inspiration hits, it’s important to have the right tools. Cheng’s ASUS Zenbook Pro Duo — an NVIDIA Studio laptop that comes equipped with up to a GeForce RTX 3080 GPU — gives her the power she needs to create anywhere.

Paired with the NVIDIA Canvas app, a free download available to anyone with an NVIDIA RTX or GeForce RTX GPU, Cheng can easily create and share photorealistic imagery. Canvas is powered by the GauGAN2 AI model and accelerated by Tensor Cores found exclusively on RTX GPUs.


The app uses AI to interpret basic lines and shapes, translating them into realistic landscape images and textures. Artists of all skill levels can use this advanced AI to quickly turn simple brushstrokes into realistic images, speeding up concept exploration and allowing for increased iteration, while freeing up valuable time to visualize ideas.

 

“I’m excited to use NVIDIA Canvas to be able to sketch out the exact landscapes I’m looking for,” said Cheng. “This is the perfect sketch to communicate your vision to an art director or location scout. I never had much drawing skill before, so I feel like I have art superpowers with this thing.”

Powered by GauGAN2, Canvas turns Cheng’s scribbles into gorgeous landscapes.

Cheng plans to put these superpowers to the test in an Instagram live stream on Thursday, May 12, where she and her AI Sketchpad collaborator Don Allen Stevenson III will race to paint viewer challenges using Canvas.

The free Canvas app is updated regularly, adding new materials, styles and more.

Tune in to contribute, and download NVIDIA Canvas to see how easy it is to paint by AI.

With AI, Anything Is Possible

Empowering scribble-to-van-Gogh painting abilities is just one of the ways that NVIDIA Studio is transforming creative technology through AI.

NVIDIA Broadcast uses AI running on RTX GPUs to improve audio and video for broadcasters and live streamers. The newest version can run multiple neural networks to apply background removal, blur and auto-frame for webcams, and remove noise from incoming and outgoing sound.

3D artists can take advantage of AI denoising in Autodesk Maya and Blender software, refine color detail across high-resolution RAW images with Lightroom’s Enhance Details tool, enable smooth slow motion with retained b-frames using DaVinci Resolve’s SpeedWarp and more.

NVIDIA AI researchers are working on new models and methods to fuel the next generation of creativity. At GTC this year, NVIDIA debuted Instant NeRF technology, which uses AI models to transform 2D images into high-resolution 3D scenes, nearly instantly.

Instant NeRF is an emerging AI technology that Cheng already plans to implement. She and her collaborators have started experimenting with bringing 2D scenes to 3D life.

More AI Tools In the NVIDIA Studio

AI is being used to tackle complex and incredibly challenging problems. Creators can benefit from the same AI technology that’s applied to healthcare, automotive, robotics and countless other fields.

The NVIDIA Studio YouTube channel offers a wide range of tips and tricks, tutorials and sessions for beginning to advanced users.

CGMatter hosts Studio speedhack tutorials for beginners, showing how to use AI viewport denoising and AI render denoising in Blender.

Many of the most popular creative applications from Adobe have AI-powered features to speed up and improve the creative process.

Neural Filters in Photoshop, Auto Reframe and Scene Edit Detection in Premiere Pro, and Image to Material in Substance 3D all make creating quicker and easier through the power of AI.

Follow NVIDIA Studio on Instagram, Twitter and Facebook; access tutorials on the Studio YouTube channel; and get updates directly in your inbox by signing up for the NVIDIA Studio newsletter.


More Freedom on the Freeway: AI Lifts Malaysia’s Toll Barriers

Working as an aerospace engineer in Malaysia, Chee How Lim dreamed of building a startup that could really take off. Today his company, Tapway, is riding a wave of computer vision and AI adoption in Southeast Asia.

A call for help in 2019 with video analytics led to the Kuala Lumpur-based company’s biggest project to date.

Malaysia’s largest operator of toll highways, PLUS, wanted to reduce congestion for its more than 1.5 million daily travelers. A national plan called for enabling car, taxi, bus and truck traffic to flow freely across multiple lanes — but that posed several big challenges.

Unsnarling Traffic Jams

The highways charge five classes of tolls depending on vehicle type. Drivers pay using four different systems, and often enter the highway using one payment system, then exit using another, making it hard to track vehicles.

Dedicated lanes for different vehicle classes forced drivers to stop so booth operators could identify each vehicle, slowing traffic. Even then, some drivers scammed the system, exchanging cards on the highway to get lower tolls.

“We showed them how with computer vision — just a camera and AI — you could solve all that,” said Lim.

AI Smooths the Flow

Using NVIDIA GPUs and software, Tapway trained and ran AI models that could read a vehicle’s license plate and detect its class, make and color in just 50 milliseconds, about one tenth of one eye blink — even if it’s traveling at up to 40 kilometers/hour while approaching a toll plaza.

Tapway’s VehicleTrack software works in all light and weather conditions with a consistent 97 percent accuracy. And thanks in part to NVIDIA Triton Inference Server, a single GPU can manage up to 50 simultaneous video streams.

PLUS has installed 577 cameras so far, and plans to expand to nearly 900 in 92 toll plazas to meet its goal of freely flowing traffic.

Inside a Computer Vision System

Under the hood, the system depends on smart AI models trained in the cloud on a network of NVIDIA A100 and V100 Tensor Core GPUs.

They use a dataset of up to 100,000 images to prepare a new model for a Tapway customer in a few hours, a huge improvement over a CPU-based system that used to take several days, Lim said.

But the real magic comes with inference, running those models in production to process up to 28,800 images a minute on edge servers using NVIDIA A10, A30 and T4 GPUs.
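Those figures can be sanity-checked with some back-of-envelope arithmetic (assuming images are spread evenly across streams, which the article doesn’t specify):

```python
# Back-of-envelope check of the throughput figures quoted above.
IMAGES_PER_MINUTE = 28_800
MAX_STREAMS = 50
LATENCY_S = 0.050  # 50 ms per inference

images_per_second = IMAGES_PER_MINUTE / 60          # 480.0
fps_per_stream = images_per_second / MAX_STREAMS    # 9.6 frames/sec per stream

# At 50 ms per image, a single serial worker handles only 20 images/sec, so
# sustaining 480/sec implies roughly this many inferences in flight at once --
# exactly the kind of concurrency Triton manages on one GPU.
concurrent_inferences = images_per_second * LATENCY_S  # 24.0

print(images_per_second, fps_per_stream, concurrent_inferences)
```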

Software Makes it Sing

Tapway uses the NVIDIA DeepStream software development kit to build its computer vision apps, NVIDIA TensorRT to keep its AI models lean and fast, and Triton to play traffic cop, directing AI inference jobs.

“Triton is a real lifesaver for us,” said Lim. “We had some scaling problems doing inference and multithreading on our own and couldn’t scale beyond 12 video streams in a server, but with Triton we easily handle 20 and we’ve tested it on up to 50 simultaneous streams,” he said.

In February, Tapway officially became an NVIDIA Metropolis partner. The program gives companies in intelligent video analytics early access to technology and expertise.

“We had to pass stress tests in areas like multistreaming and security — that helped us strengthen our product offering — and from a business perspective it’s a way to be recognized and further establish ourselves as an AI expert in the region,” Lim said.

AI Covers the Waterfront

Since its start in 2014, Tapway has deployed 3,000 sensors in 500 locations throughout Malaysia and Singapore. Off the road, they help malls and retailers understand customer shopping habits, and now the company is gearing up to help manufacturers like the region’s car makers and palm oil producers inspect products for quality control.

“The demand has never been better, there are a lot of vision challenges in the world, and quite a few exciting projects we hope to land soon,” he said.

To learn more, watch Lim’s talk at GTC (free with registration). And download this free e-book to learn how NVIDIA Metropolis is helping build smarter and safer spaces around the world.

 


Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world.

On the latest episode of the AI Podcast, Waabi CEO and founder Raquel Urtasun joins NVIDIA’s Katie Burke Washabaugh to talk about the role simulation technology plays in developing production-level autonomous vehicles.

Waabi is an autonomous-vehicle system startup that uses powerful, high-fidelity simulation to run multiple scenarios simultaneously and tailor training to rare and dangerous situations that are difficult to encounter in the real world.

Urtasun is also a professor of Computer Science at the University of Toronto. Before starting Waabi, she led the Uber Advanced Technologies Group as chief scientist and head of research and development.

You Might Also Like

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

How Audio Analytic Is Teaching Machines to Listen

From active noise cancellation to digital assistants that are always listening for your commands, audio is perhaps one of the most important but often overlooked aspects of modern technology in our daily lives. Dr. Chris Mitchell, CEO and founder of Audio Analytic, discusses the challenges, and the fun, involved in teaching machines to listen.

Subscribe to the AI Podcast: Now available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Have a few minutes to spare? Fill out our listener survey.


GFN Thursday Caught in 4K: 27 Games Arriving on GeForce NOW in May, Alongside 4K Streaming to PC and Mac Apps

Enjoy the finer things in life. May is looking pixel perfect for GeForce NOW gamers.

RTX 3080 members can now take their games to the next level, streaming at 4K resolution on the GeForce NOW PC and Mac native apps — joining 4K support in the living room with SHIELD TV.

There’s also a list of 10 titles ready to play today, led by Star Wars Battlefront II, Star Wars Jedi: Fallen Order and Star Wars: Squadrons — all part of the 27 total games joining the GeForce NOW library in May.

Play in 4K Today

GeForce NOW is always upgrading. As of today, RTX 3080 members playing from the PC and Mac native apps can stream at 4K resolution at 60 frames per second.

Stream in 4K on GeForce NOW with an RTX 3080 membership.

4K streaming gets a boost from NVIDIA DLSS, groundbreaking AI rendering technology that increases graphics performance using dedicated Tensor Core AI processors on RTX GPUs. DLSS taps into the power of a deep learning neural network to boost frame rates and generate beautiful, sharp images for games.
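A rough way to see where the frame-rate gains come from: rendering internally at 1080p and reconstructing to 4K means the GPU shades only a quarter of the displayed pixels. The arithmetic below is illustrative only; actual DLSS speedups depend on the quality mode in use and the cost of running the upscaling network itself.

```python
# Illustrative pixel arithmetic only -- not DLSS internals.
render_w, render_h = 1920, 1080   # internal render resolution (a 1080p-to-4K upscale)
output_w, output_h = 3840, 2160   # 4K output resolution

shaded = render_w * render_h      # 2,073,600 pixels actually shaded per frame
displayed = output_w * output_h   # 8,294,400 pixels displayed per frame

print(displayed // shaded)  # prints 4: only 1/4 of the output pixels are shaded
```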

RTX 3080 members enjoy the benefits of ultra-low latency that rivals native console gaming. They can also enable customized in-game graphics settings like RTX ON, taking games to a cinematic level, and play with maximized session lengths of up to eight hours.

On top of this, GeForce NOW is leveling up mobile gamers with support for more 120Hz devices capable of streaming at 120 FPS with RTX 3080 memberships. Newly supported devices include the Samsung Galaxy S22 and S22 Ultra, Galaxy Z Fold3 and Flip3, and OnePlus 9 Pro.

Visit our RTX 3080 setup guides for more information on 4K resolution on PC and macOS, and check out all of the new 120Hz Android devices.

Fan-Favorite Star Wars Games, Now in 4K

This week delivers three new games from Electronic Arts, and the far reaches of the galaxy, to the cloud — all streaming at up to 4K quality with a GeForce NOW RTX 3080 membership.

Be the hero in Star Wars Battlefront II (Steam and Origin, Rated T for Teen by the ESRB). Experience rich multiplayer battlegrounds of all three eras — prequel, classic and new trilogy — or rise as a new hero and discover a gripping single-player story spanning 30 years.

A galaxy-spanning adventure awaits in Star Wars Jedi: Fallen Order (Steam and Origin, Rated T for Teen by the ESRB). Rebuild the Jedi Order by developing powerful Force abilities and mastering the art of the lightsaber — all while staying one step ahead of the Empire in this third-person action-adventure title.

Speed through the stars and master the art of starfighter combat in Star Wars: Squadrons (Steam and Origin, Rated T for Teen by the ESRB). Buckle up and feel the adrenaline of first-person, multiplayer space dogfights alongside your squadron in this authentic piloting experience.

Play all of these fantastic titles, now streaming to PC, Mac, Chromebook, mobile devices and more with GeForce NOW.

Mayday, Mayday – May Games Inbound

We’re kicking the month off with May’s new gaming arrivals.

Our new “Instant Play Free Demos” row just got a little bit bigger. Members can now try out Backbone: Prologue streaming from the cloud before jumping into the full title.

Members can also dive into 27 total games coming in May, with 10 titles leading the pack this week.

The following are ready to stream today:

Plus, it’s evil. It’s dead. Evil Dead: The Game (Epic Games Store) is coming to GeForce NOW this month.

Get ready to play ‘Evil Dead: The Game’ this month – groovy!

Step into the shoes of Ash Williams or his friends from the iconic Evil Dead franchise in a game loaded with co-op and PvP multiplayer action. Survive as a team of four and brandish your short barrel shotgun, chainsaw, cleavers and more against the armies of darkness. Or take control of the Kandarian Demon and seek to swallow their souls — all at up to 1440p and 120 FPS or 4K at 60 FPS on PC and Mac, and up to 4K on SHIELD TV with an RTX 3080 membership.

Coming in May:

  • Brigandine The Legend of Runersia (New release on Steam, May 11)
  • Neptunia x SENRAN KAGURA: Ninja Wars (New release on Steam, May 11)
  • Cepheus Protocol Anthology (New release on Steam, May 13)
  • Evil Dead: The Game (New release on Epic Games Store, May 13)
  • Old World (New release on Steam, May 19)
  • Vampire: The Masquerade Swansong (New release on Epic Games Store, May 19)
  • Crossfire: Legion (New release on Steam, May 24)
  • Out There: Oceans of Time (New release on Steam, May 26)
  • My Time at Sandrock (New release on Steam, May 26)
  • Turbo Sloths (New release on Steam, May 27)
  • Pogostuck: Rage With Your Friends (Steam)
  • Raji: An Ancient Epic (Steam and Epic Games Store)
  • Star Conflict (Steam)
  • THE KING OF FIGHTERS XV (Steam and Epic Games Store)
  • The Planet Crafter (Steam)
  • The Political Machine 2020 (Steam)
  • Yet Another Zombie Defense HD (Steam)

A Lotta Extra From April

On top of the titles announced in April, an extra 22 games ended up coming to the cloud. Check out the additional games that were added last month:

The game Cities in Motion 2 (Steam) was also announced in April, but didn’t quite make it.

Finally, we’ve got a question for you. With all of these new games, we’re pretty sure we know the answer to this one. Let us know on Twitter or in the comments below.

 

The post GFN Thursday Caught in 4K: 27 Games Arriving on GeForce NOW in May, Alongside 4K Streaming to PC and Mac Apps appeared first on NVIDIA Blog.

Read More

Setting AIs on SIGGRAPH: Top Academic Researchers Collaborate With NVIDIA to Tackle Graphics’ Greatest Challenges

NVIDIA’s latest academic collaborations in graphics research have produced a reinforcement learning model that smoothly simulates athletic moves, ultra-thin holographic glasses for virtual reality, and a real-time rendering technique for objects illuminated by hidden light sources.

These projects — and over a dozen more — will be on display at SIGGRAPH 2022, taking place Aug. 8-11 in Vancouver and online. NVIDIA researchers have 16 technical papers accepted at the conference, representing work with 14 universities including Dartmouth College, Stanford University, the Swiss Federal Institute of Technology Lausanne and Tel Aviv University.

The papers span the breadth of graphics research, with advancements in neural content creation tools, display and human perception, the mathematical foundations of computer graphics and neural rendering.

Neural Tool for Multi-Skilled Simulated Characters

When a reinforcement learning model is used to develop a physics-based animated character, the AI typically learns just one skill at a time: walking, running or perhaps cartwheeling. But researchers from UC Berkeley, the University of Toronto and NVIDIA have created a framework that enables AI to learn a whole repertoire of skills — demonstrated above with a warrior character who can wield a sword, use a shield and get back up after a fall.

Achieving these smooth, lifelike motions for animated characters is usually tedious and labor-intensive, with developers starting from scratch to train the AI for each new task. As outlined in this paper, the research team allowed the reinforcement learning AI to reuse previously learned skills to respond to new scenarios, improving efficiency and reducing the need for additional motion data.

Tools like this one can be used by creators in animation, robotics, gaming and therapeutics. At SIGGRAPH, NVIDIA researchers will also present papers about 3D neural tools for surface reconstruction from point clouds and interactive shape editing, plus 2D tools for AI to better understand gaps in vector sketches and improve the visual quality of time-lapse videos.

Bringing Virtual Reality to Lightweight Glasses 

Most virtual reality users access 3D digital worlds by putting on bulky head-mounted displays, but researchers are working on lightweight alternatives that resemble standard eyeglasses.

A collaboration between NVIDIA and Stanford researchers has packed the technology needed for 3D holographic images into a wearable display just a couple millimeters thick. The 2.5-millimeter display is less than half the size of other thin VR displays, known as pancake lenses, which use a technique called folded optics that can only support 2D images.

The researchers accomplished this feat by approaching display quality and display size as a computational problem, and co-designing the optics with an AI-powered algorithm.

While prior VR displays require distance between a magnifying eyepiece and a display panel to create a hologram, this new design uses a spatial light modulator, a tool that can create holograms right in front of the user’s eyes, without needing this gap. Additional components — a pupil-replicating waveguide and geometric phase lens — further reduce the device’s bulkiness.

It’s one of two VR collaborations between Stanford and NVIDIA at the conference, with another paper proposing a new computer-generated holography framework that improves image quality while optimizing bandwidth usage. A third paper in this field of display and perception research, co-authored with New York University and Princeton University scientists, measures how rendering quality affects the speed at which users react to on-screen information.

Lightbulb Moment: New Levels of Real-Time Lighting Complexity

Accurately simulating the pathways of light in a scene in real time has always been considered the “holy grail” of graphics. Work detailed in a paper by the University of Utah’s School of Computing and NVIDIA is raising the bar, introducing a path resampling algorithm that enables real-time rendering of scenes with complex lighting, including hidden light sources.

Think of walking into a dim room, with a glass vase on a table illuminated indirectly by a street lamp located outside. The glossy surface creates a long light path, with rays bouncing many times between the light source and the viewer’s eye. Computing these light paths is usually too complex for real-time applications like games, so it’s mostly done for films or other offline rendering applications.

This paper highlights the use of statistical resampling techniques — where the algorithm reuses computations thousands of times while tracing these complex light paths — during rendering to approximate the light paths efficiently in real time. The researchers applied the algorithm to a classic challenging scene in computer graphics, pictured below: an indirectly lit set of teapots made of metal, ceramic and glass.
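The resampling idea is easiest to see in miniature. Below is a minimal Python sketch of weighted reservoir sampling, the statistical primitive this family of resampling algorithms builds on: it keeps one sample chosen proportionally to its weight while streaming over candidates in constant memory. The path names and contribution weights are purely illustrative, not taken from the paper.

```python
import random

class Reservoir:
    """Keeps one sample, selected with probability proportional to its
    weight, while streaming over any number of candidates in O(1) memory."""
    def __init__(self):
        self.sample = None       # the currently kept candidate
        self.total_weight = 0.0  # sum of all weights seen so far

    def update(self, candidate, weight):
        self.total_weight += weight
        # Replace the kept sample with probability weight / total_weight;
        # over the whole stream, each candidate ends up kept with
        # probability proportional to its weight.
        if random.random() < weight / self.total_weight:
            self.sample = candidate

random.seed(7)
r = Reservoir()
# Hypothetical light paths with their estimated contributions.
for path, contribution in [("lamp->vase->eye", 5.0),
                           ("lamp->wall->vase->eye", 1.0),
                           ("lamp->floor->eye", 0.5)]:
    r.update(path, contribution)

print(r.sample, r.total_weight)  # one path kept, favoring high contribution
```

Real-time variants extend this idea by reusing reservoirs across neighboring pixels and frames, which is how computations get amortized thousands of times.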

Related NVIDIA-authored papers at SIGGRAPH include a new sampling strategy for inverse volume rendering, a novel mathematical representation for 2D shape manipulation, software to create samplers with improved uniformity for rendering and other applications, and a way to turn biased rendering algorithms into more efficient unbiased ones.

Neural Rendering: NeRFs, GANs Power Synthetic Scenes

Neural rendering algorithms learn from real-world data to create synthetic images — and NVIDIA research projects are developing state-of-the-art tools to do so in 2D and 3D.

In 2D, the StyleGAN-NADA model, developed in collaboration with Tel Aviv University, generates images with specific styles based on a user’s text prompts, without requiring example images for reference. For instance, a user could generate vintage car images, turn their dog into a painting or transform houses into huts:

And in 3D, researchers at NVIDIA and the University of Toronto are developing tools that can support the creation of large-scale virtual worlds. Instant neural graphics primitives, the NVIDIA paper behind the popular Instant NeRF tool, will be presented at SIGGRAPH.

NeRFs, 3D scenes based on a collection of 2D images, are just one capability of the neural graphics primitives technique. It can be used to represent any complex spatial information, with applications including image compression, highly accurate representations of 3D shapes and ultra-high resolution images.
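The rendering step behind a NeRF follows the standard volume rendering quadrature: each sample along a camera ray contributes its color, weighted by its own opacity and by the transmittance of everything in front of it. A minimal sketch of that compositing loop follows; the densities, colors and step sizes are invented for illustration.

```python
import math

def composite_ray(sigmas, colors, deltas):
    """Classic volume-rendering compositing: alpha_i = 1 - exp(-sigma_i * delta_i),
    each sample weighted by the transmittance of everything nearer the camera."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed along the ray
    for sigma, c, delta in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)
        weight = transmittance * alpha
        color = [acc + weight * ch for acc, ch in zip(color, c)]
        transmittance *= 1.0 - alpha
    return color

# Two samples along one ray: a faint red patch in front of a dense green one.
result = composite_ray(
    sigmas=[0.5, 50.0],
    colors=[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    deltas=[0.1, 0.1],
)
print(result)  # mostly green: the dense far sample dominates
```

A neural network supplies the density and color at each sample point; techniques like Instant NeRF make that lookup fast enough to train and render in seconds.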

This work pairs with a University of Toronto collaboration that compresses 3D neural graphics primitives just as JPEG is used to compress 2D images. This can help users store and share 3D maps and entertainment experiences between small devices like phones and robots.

There are more than 300 NVIDIA researchers around the globe, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. Learn more about NVIDIA Research.

The post Setting AIs on SIGGRAPH: Top Academic Researchers Collaborate With NVIDIA to Tackle Graphics’ Greatest Challenges appeared first on NVIDIA Blog.

Read More

‘In the NVIDIA Studio’ Welcomes Concept Designer Yangtian Li

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

This week In the NVIDIA Studio, we welcome Yangtian Li, a senior concept artist at Singularity6.

Li is a concept designer and illustrator who has worked on some of the biggest video game franchises, including Call of Duty, Magic: the Gathering and Vainglory. Her artwork also appears in book illustrations and magazines.

Dreams of the Past (left) and Elf Archer by Yangtian Li.

Li’s impressive portfolio features character portraits of strong, graceful, empowered women. Their backstories and elegance are inspired by her own life experiences and world travels.

Snake Witch by Yangtian Li.

Li, now based in Seattle, began her artistic journey in Chengdu, China. Her hometown serves as the inspiration behind her extraordinary portrait, Snake Witch. This unique and thought-provoking work is based on tribal black magic from Chinese folklore. Li drew and painted the piece, powered by a GeForce RTX GPU and the NVIDIA Studio platform.

Up close, viewers can feel the allure of the Snake Witch.

Snake Witch is a product of Li’s fascination with black magic, or “Gu,” where tribal witch doctors would gather toxic creatures, use their venom to make poison and practice the dark arts. “I always thought the stories were fascinating, so I wanted to do a take,” Li said. “Snakes are more appealing to me, and it helps to create some interesting compositions.”

Li chooses to keep it simple before diving in on the details.

After primarily working in 2D, Li moves to 3D to support her concept process. “Having a high-end GPU really helps speed up the process when it comes to using Blender, Zbrush and so on,” she adds. These speed-ups are particularly noticeable with GPU-accelerated rendering in Blender Cycles 3.0, which she says delivers results over 5x faster on a GeForce RTX laptop GPU than on a MacBook Pro M1 Max or a CPU alone.

 

With the Snake Witch’s character foundation in a good place, Li used the Liquify filter to subtly distort her subject’s facial features.

Liquify is one of over 30 GPU-accelerated features in Adobe Photoshop, like AI-powered Neural Filters, that help artists explore creative ideas and make complex adjustments in seconds.

 

“The more life experiences you have, your understanding of the world evolves, and that will be reflected in your art.”

Li uses adjustment layers in the coloring phase of her process, allowing for non-destructive edits while trying to achieve the correct color tone.

If unsatisfied with an adjustment, Li can simply delete it while the original image remains intact.
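The non-destructive idea generalizes beyond any one app: adjustments live in a stack that is applied to a copy of the image, so deleting one never touches the source pixels. The sketch below is a simplified illustration of that pattern, not Photoshop's actual internals, and the two example layers are hypothetical.

```python
def apply_stack(original, adjustments):
    """Apply each adjustment in order to a copy; the original is never mutated."""
    image = list(original)
    for adjust in adjustments:
        image = [adjust(px) for px in image]
    return image

brighten = lambda px: min(px + 40, 255)  # hypothetical brightness layer
cool_tone = lambda px: int(px * 0.9)     # hypothetical color-tone layer

original = [10, 128, 250]                # source pixel values stay untouched
stack = [brighten, cool_tone]
print(apply_stack(original, stack))      # both layers applied

stack.remove(cool_tone)                  # delete one adjustment...
print(apply_stack(original, stack))      # ...and the rest still apply cleanly
print(original)                          # source unchanged: [10, 128, 250]
```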

Finally, Li adjusts lighting, using the Opacity feature to block light from the right of the image, adding a touch of realistic flair.

The devil is in the details, or in this case, the Snake Witch.

Remaining in a productive flow state is critical for Li, as it is for many artists, and her GeForce RTX GPU allows her to spend more time in that magical creative zone where ideas come to life faster and more naturally.

Li goes into greater detail on how she created Snake Witch in her Studio Session. This three-part series includes her processes for initial sketching, color and detail optimization, and finishing touches.

Previously, Li worked as a senior concept artist and designer at Amazon Game Studios, Niantic and Treyarch, among others.

Check out Li’s portfolio and favorite projects on Instagram.

Accelerating Adobe Creators In the NVIDIA Studio

More resources are available to creators seeking additional NVIDIA Studio features and optimizations that accelerate Adobe creative apps.

Follow a step-by-step tutorial in Photoshop that details how to apply a texture from a photo to a 3D visualization render.

Learn how to work significantly faster in Adobe Lightroom by utilizing the AI-powered masking tools Select Subject and Select Sky.

Follow NVIDIA Studio on Facebook, Twitter and Instagram, access tutorials on the Studio YouTube channel, and get updates directly in your inbox by joining the NVIDIA Studio newsletter.

The post ‘In the NVIDIA Studio’ Welcomes Concept Designer Yangtian Li appeared first on NVIDIA Blog.

Read More