Get in Gear: ‘Forza Motorsport’ Races Onto GeForce NOW

Put the pedal to the metal this GFN Thursday as Forza Motorsport leads 23 new games in the cloud.

Plus, Acer’s Predator Connect W6 is the newest addition to the GeForce NOW Recommended program, with easy cloud gaming quality-of-service (QoS) settings built in to give Ultimate members the best streaming experience.

No Brakes, No Limits, No Downloads

Take the pole position thanks to the cloud. Turn 10 Studios’ Forza Motorsport joins the GeForce NOW library this week.

The racing sim features over 500 realistically rendered cars across 20 world-famous tracks, each with dynamic time-of-day, weather and driving conditions, so no two laps are ever the same. Unlock more than 800 performance upgrades and outbuild the competition, either online or against new, highly competitive AI racers in the single-player Builders Cup Career Mode.

Stream every turn at GeForce quality on nearly any device and max out image quality thanks to the cloud. Ultimate members can get in gear at up to 4K resolution and up to 120 frames per second for the most realistic driving experience.

Need for Speed

Acer Predator Connect W6 router for GeForce NOW
Better together.

Say hello to the newest addition to the GeForce NOW Recommended program.

GeForce NOW members have access to the best cloud streaming experience, and Acer’s newly released Predator Connect W6 wireless router is built to support it, providing the ultrafast, stable gaming environment needed for 4K cloud streaming.

NVIDIA and Acer have collaborated to create a best-in-class streaming experience, adding a special QoS option to the Predator Connect W6 that prioritizes cloud gaming network traffic for maximized speed. The software underwent six months of rigorous testing, ensuring it can consistently deliver the high-performance offerings of a GeForce NOW Ultimate membership, including 4K 120 fps gaming with ultra-low latency.

The Predator Connect W6 also includes tri-band network support with the latest wireless technologies, like WiFi 6E. Pair it with a GeForce NOW Ultimate membership for an unrivaled cloud gaming experience.

Play On

Star Trek Infinite on GeForce NOW
Live long and prosper in the cloud.

Get the weekend started with the new weekly games list:

  • Forza Motorsport (New release on Steam, Xbox and available on PC Game Pass, Oct. 12)
  • From Space (New release on Xbox, available on PC Game Pass, Oct. 12)
  • Hotel: A Resort Simulator (New release on Steam, Oct. 12)
  • Saltsea Chronicles (New release on Steam, Oct. 12)
  • Star Trek: Infinite (New release on Steam, Oct. 12)
  • Tribe: Primitive Builder (New release on Steam, Oct. 12)
  • Lords of the Fallen (New release on Steam and Epic Games Store, Oct. 13)
  • Bad North (Xbox, available on Microsoft Store)
  • Call of the Sea (Xbox, available on Microsoft Store)
  • For The King (Xbox, available on Microsoft Store)
  • Golf With Your Friends (Xbox, available on PC Game Pass)
  • Metro Simulator 2 (Steam)
  • Moonbreaker (Steam)
  • Narita Boy (Xbox, available on Microsoft Store)
  • Rubber Bandits (Xbox, available on PC Game Pass)
  • Sifu (Xbox, available on Microsoft Store)
  • Star Renegades (Xbox, available on Microsoft Store)
  • Streets of Rogue (Xbox, available on Microsoft Store)
  • Supraland (Xbox, available on Microsoft Store)
  • Supraland Six Inches Under (Epic Games Store)
  • The Surge (Xbox, available on Microsoft Store)
  • Tiny Football (Steam)
  • Yes, Your Grace (Xbox, available on Microsoft Store)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

 

Take the Wheel: NVIDIA NeMo SteerLM Lets Companies Customize a Model’s Responses During Inference

Developers have a new AI-powered steering wheel to help them hug the road while they drive powerful large language models (LLMs) to their desired locations.

NVIDIA NeMo SteerLM lets companies define knobs to dial in a model’s responses as it’s running in production, a process called inference. Unlike current methods for customizing an LLM, it lets a single training run create one model that can serve dozens or even hundreds of use cases, saving time and money.

NVIDIA researchers created SteerLM to teach AI models what users care about, like road signs to follow in their particular use cases or markets. These user-defined attributes can gauge nearly anything — for example, the degree of helpfulness or humor in the model’s responses.

One Model, Many Uses

The result is a new level of flexibility.

With SteerLM, users define all the attributes they want and embed them in a single model. Then they can choose the combination they need for a given use case while the model is running.

For example, a custom model can now be tuned during inference to the unique needs of, say, an accounting, sales or engineering department or a vertical market.
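
As a rough illustration of that idea, here is a minimal Python sketch of attribute-conditioned prompting in the spirit of SteerLM. The attribute names, the 0-9 scale and the prompt format are assumptions for illustration only, not NeMo's actual API.

```python
# Hypothetical sketch of attribute-conditioned prompting in the spirit of SteerLM.
# The attribute names, the 0-9 scale and generate-time format are illustrative
# assumptions, not NVIDIA NeMo's actual API.

def build_steered_prompt(user_prompt: str, attributes: dict[str, int]) -> str:
    """Prepend the desired attribute values so a SteerLM-style model can
    condition its response on them at inference time."""
    attr_str = ",".join(f"{name}:{value}" for name, value in sorted(attributes.items()))
    return f"<attributes>{attr_str}</attributes>\n{user_prompt}"

# Different departments reuse the same model with different "knob" settings.
sales_prompt = build_steered_prompt(
    "Summarize our Q3 results for a customer newsletter.",
    {"helpfulness": 9, "formality": 4, "humor": 6},
)
legal_prompt = build_steered_prompt(
    "Summarize our Q3 results for a customer newsletter.",
    {"helpfulness": 9, "formality": 9, "humor": 0},
)

print(sales_prompt)
print(legal_prompt)
```

The same model serves both prompts; only the knob settings differ.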

The method also enables a continuous improvement cycle. Responses from a custom model can serve as data for a future training run that dials the model into new levels of usefulness.

Saving Time and Money

To date, fitting a generative AI model to the needs of a specific application has been the equivalent of rebuilding an engine’s transmission. Developers had to painstakingly label datasets, write lots of new code, adjust the hyperparameters under the hood of the neural network and retrain the model several times.

SteerLM replaces those complex, time-consuming processes with three simple steps:

  • Customizing an AI model to predict how well responses reflect the desired attributes, using a basic set of prompts, responses and attributes.
  • Automatically generating a dataset using this model.
  • Training the model on that dataset with standard supervised fine-tuning techniques.

Many Enterprise Use Cases

Developers can adapt SteerLM to nearly any enterprise use case that requires generating text.

With SteerLM, a company might produce a single chatbot it can tailor in real time to customers’ changing attitudes, demographics or circumstances in the many vertical markets or geographies it serves.

SteerLM also enables a single LLM to act as a flexible writing co-pilot for an entire corporation.

For example, lawyers can modify their model during inference to adopt a formal style for their legal communications. Or marketing staff can dial in a more conversational style for their audience.

Game On With SteerLM

To show the potential of SteerLM, NVIDIA demonstrated it on one of its classic applications — gaming (see the video below).

Today, some games pack dozens of non-playable characters — characters that the player can’t control — which mechanically repeat prerecorded text, regardless of the user or situation.

SteerLM makes these characters come alive, responding with more personality and emotion to players’ prompts. It’s a tool game developers can use to unlock unique new experiences for every player.

The Genesis of SteerLM

The concept behind the new method arrived unexpectedly.

“I woke up early one morning with this idea, so I jumped up and wrote it down,” recalled Yi Dong, an applied research scientist at NVIDIA who initiated the work on SteerLM.

While building a prototype, he realized a popular model-conditioning technique could also be part of the method. Once all the pieces came together and his experiment worked, the team helped articulate the method in a few simple steps.

It’s the latest advance in model customization, a hot area in AI research.

“It’s a challenging field, a kind of holy grail for making AI more closely reflect a human perspective — and I love a new challenge,” said the researcher, who earned a Ph.D. in computational neuroscience at Johns Hopkins University, then worked on machine learning algorithms in finance before joining NVIDIA.

Get Hands on the Wheel

SteerLM is available as open-source software for developers to try out today. They can also get details on how to experiment with a Llama-2-13b model customized using the SteerLM method.

For users who want full enterprise security and support, SteerLM will be integrated into NVIDIA NeMo, a rich framework for building, customizing and deploying large generative AI models.

The SteerLM method works on all models supported on NeMo, including popular community-built pretrained LLMs such as Llama-2 and BLOOM.

Read a technical blog to learn more about SteerLM.

See notice regarding software product information.

MAXimum AI Performance: Latest Adobe Updates Accelerated by NVIDIA GPUs Improve Workflows for Millions of Creatives

Generative AI is helping creatives across many industries bring ideas to life at unprecedented speed.

This technology will be on display at Adobe MAX, running through Thursday, Oct. 12, in person and virtually.

Adobe is putting the power of generative AI into the hands of creators with the release of Adobe Firefly. Using NVIDIA GPUs, Adobe is opening new opportunities for artists and others looking to accelerate their work with generative AI, unleashing generative AI enhancements for millions of users. Firefly is now available as a standalone app and integrated with other Adobe apps.

Recent updates to Adobe’s most popular apps — including for Adobe Premiere Pro, Lightroom, After Effects and Substance 3D Stager, Modeler and Sampler — bring new AI features to creators. And GeForce RTX and NVIDIA RTX GPUs help accelerate these apps and AI effects, providing massive time savings.

Video editors can use AI to improve dialogue quality with the Enhance Speech (beta) function, and work faster with GPU-accelerated decoding of ARRIRAW camera-original digital film clips, up to 60% faster on RTX GPUs in Premiere Pro than on an Apple MacBook Pro 16 M2 Max. Plus, take advantage of improved rotoscoping quality with the Next-Gen Roto Brush (version 3.0) feature, now available in After Effects.

Photographers and 2D artists now have new Lens Blur effects in Lightroom, complementing ongoing optimizations that improve performance in its Select Object, Select People and Select Sky features.

These advanced features are further enhanced by NVIDIA Studio Drivers, free for RTX GPU owners, which add performance and reliability. The October Studio Driver is available for download now.

Finally, 3D artist SouthernShotty returns to In the NVIDIA Studio to share his 3D montage of beautifully handcrafted worlds — built with Adobe apps and Blender and featuring AI-powered workflows accelerated by his GeForce RTX 4090 Laptop GPU.

MAXimizing Creativity

Adobe Creative Cloud and Substance 3D apps run fastest on NVIDIA RTX GPUs — and recent updates show continued time-saving performance gains.

Tested on NVIDIA Studio laptops with GeForce RTX 4050 and 4090 Laptop GPUs with Intel Core i9 13th Gen; MacBook Pro 14″ with M2 Pro; and MacBook Pro 16″ with M2 Max. Performance measures total time to apply the Enhance Speech effect to a video clip within Adobe Premiere Pro.

Premiere Pro’s Enhance Speech feature, currently in beta, uses AI to remove noise and improve the quality of dialogue clips so they sound professionally recorded. Tasks are completed 8x faster with a GeForce RTX 4090 Laptop GPU compared with a MacBook Pro 16 with M2 Max.

Tested on NVIDIA Studio laptops with GeForce RTX 4050 and 4090 Laptop GPUs with Intel Core i9 13th Gen; MacBook Pro 14″ with M2 Pro; and MacBook Pro 16″ with M2 Max. Performance measures total time to export ARRIRAW footage within Adobe Premiere Pro.

Premiere Pro professionals use ARRIRAW footage — the only format that fully retains a camera’s natural color response and great exposure latitude. ARRIRAW video exports can be done 1.6x faster on GeForce RTX 4090 Laptop GPUs than on the MacBook Pro 16 with M2 Max.

Additionally, After Effects users can access the Next-Gen Roto Brush feature in beta, powered by a brand-new AI model. It makes isolating tricky subjects such as overlapping limbs, hair and other transparencies easier, saving time.

RTX GPUs shine in 3D workloads. Substance 3D Stager’s new AI-powered, GPU-accelerated denoiser allows almost instantaneous photorealistic rendering.

Substance 3D Modeler’s recent Hardware Ray Tracing in Capture Mode capability uses NVIDIA technology to export high-quality screenshots 2.4x faster than before.

Meanwhile, Substance 3D Sampler’s AI UpScale feature increases detail for low-quality textures and its Image to Material feature makes it easier to create high-quality materials from a single photograph.

Lens Blur in Adobe Lightroom.

Photographers have long used the popular Super Resolution feature in Adobe Camera Raw, which is supported by Photoshop, and gives 3x faster performance on a GeForce RTX 4090 Laptop GPU compared to a MacBook Pro 16 M2 Max. Now, Lightroom users have AI-driven capabilities with the Lens Blur feature for applying realistic lens blur effects, Point Color for precise color adjustments to speed up color correction, and High Dynamic Range Output for edits and renders in an HDR color space.

Adobe Firefly Glows #76B900

Adobe Firefly provides users with generative AI features, utilizing NVIDIA GPUs in the cloud.

Firefly features such as Generative Fill, which adds, removes and expands content, and Generative Expand, which extends scenes with generated content, help complete tasks instantly in Adobe Photoshop.

Adobe Firefly-powered feature Generative Fill in Adobe Photoshop.

Adobe Illustrator offers the Generative Recolor feature, which enables graphic designers to explore a wide variety of colors, palettes and themes in their work without having to do tedious manual recoloring. Discovering the perfect combination of colors now takes just a few seconds.

Adobe Firefly-powered feature Generative Recolor in Adobe Illustrator.

Adobe Express offers the Text to Image feature to create incredible imagery from standard prompts, and the Text Effects feature helps stylize standard text for use in creating flyers, resumes, social media reels and more.

These powerful AI capabilities were developed with the creative community in mind — guided by AI ethics principles of content and data transparency — to ensure ethically and morally responsible output.

NVIDIA technology will continue to support new Adobe Firefly-powered features from the cloud as they become available to photographers, illustrators, designers, video editors, 3D artists and more.

MAXed Out AI Fun

Independent filmmaker and artist SouthernShotty knows the challenges of producing content alone and how daunting the process can be.

SouthernShotty’s artwork invokes childlike emotions with impressive visuals.

“I’m a big fan of the NVIDIA Studio Driver support, because it adds stability and reliability.” – SouthernShotty

As such, SouthernShotty is always looking for tools and techniques to ease the creative process. He combined new Adobe AI capabilities, accelerated by his GeForce RTX 4090 Laptop GPU, to achieve incredible efficiency in his workflow.

The artist kept his 3D models fairly simple, focusing on textures to ensure that the world would match his vision. He deployed one of his favorite features, the AI-powered Image to Material in Adobe Substance 3D Sampler, to convert images to physically based rendering textures.

 

Applying textures in Blender.

“It’s so fast that I can pretty much preview my entire scene in real time and see the final result before I ever hit the render button.” – SouthernShotty

RTX-accelerated light and ambient occlusion baking allowed SouthernShotty to realize the desired visual effect in seconds.

His RTX GPU continued to play an essential role as he used Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport for interactive, photorealistic rendering.

As the 3D montage progresses, the main character appears and reappears in several new environments. Each new location is featured for only a second or two, but SouthernShotty still needed to create a fully fleshed out environment for each.

Normally this would take a substantial amount of time, but an AI assist from Adobe Firefly helped speed the process.

Adobe is committed to developing generative AI responsibly, with creators at the center.

SouthernShotty opened the app, entered “fantasy mushroom forest” as the text prompt, then made minor adjustments, selecting the digital art style, golden hour for lighting and wide angle for composition. When satisfied with the result, he downloaded the image for further editing in Photoshop.

An entirely new image is generated in minutes with Adobe Firefly, powered by GeForce RTX GPUs.

SouthernShotty then used the AI-powered Generative Fill feature to remove unwanted background elements. He used Neural Filters to color match a castle element added in the background, then used Generative Fill again to effortlessly blend the castle in with the trees.

Finally, SouthernShotty used Neural Filters with the new Lens Blur feature to add depth to the scene — first exporting depth as a separate layer and then editing in Blender to complete the scene.

Editing the depth map in Blender.

“My entire process was sprinkled with GPU-acceleration and AI-enabled features,” said SouthernShotty. “In Blender, the GeForce RTX 4090 GPU accelerated everything — but especially the live render view in my viewport, which was crucial to visualizing my scenes.”

Check out SouthernShotty’s YouTube channel for Blender tutorials on characters, animation, rigging and more.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Keeping an AI on Quakes: Researchers Unveil Deep Learning Model to Improve Forecasts

A research team is aiming to shake up the status quo for earthquake models.

Researchers from the Universities of California at Berkeley and Santa Cruz, and the Technical University of Munich recently released a paper describing a new model that delivers deep learning to earthquake forecasting.

Dubbed RECAST, the model can use larger datasets and offer greater flexibility than the current model standard, ETAS, which has improved only incrementally since its development in 1988, the paper argues.

The paper’s authors — Kelian Dascher-Cousineau, Oleksandr Shchur, Emily Brodsky and Stephan Günnemann — trained the model on NVIDIA GPU workstations.

“There’s a whole field of research that explores how to improve ETAS,” said Dascher-Cousineau, a postdoctoral researcher at UC Berkeley. “It’s an immensely useful model that has been used a lot, but it’s been frustratingly hard to improve on it.”

AI Drives Seismology Ahead 

The promise of RECAST is that its model flexibility, self-learning capability and ability to scale will enable it to interpret larger datasets and make better predictions during earthquake sequences, he said.

Model advances with improved forecasts could help agencies such as the U.S. Geological Survey and its counterparts elsewhere offer better information to those who need to know. Firefighters and other first responders entering damaged buildings, for example, could benefit from more reliable forecasts on aftershocks.

“There’s a ton of room for improvement within the forecasting side of things. And for a variety of reasons, our community hasn’t really dove into the machine learning side of things, partly because of being conservative and partly because these are really impactful decisions,” said Dascher-Cousineau.

RECAST Model Moves the Needle

While past work on aftershock predictions has relied on statistical models, that approach doesn’t scale to handle the larger datasets becoming available from an explosion of newly enhanced data capabilities, according to the researchers.

The RECAST model architecture builds on developments in neural temporal point processes, which are probabilistic generative models for continuous-time event sequences. In a nutshell, the model uses an encoder-decoder neural network architecture to predict the timing of the next event based on a history of past events.
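
For intuition, here is a minimal PyTorch sketch of that encoder-decoder idea: a recurrent encoder summarizes the history of past events and a small decoder predicts the waiting time until the next one. It is a simplified illustration of a neural temporal point process, not the RECAST architecture or the authors' code, and it outputs a point estimate rather than a full probability distribution.

```python
# Minimal sketch of a neural temporal point process in PyTorch: an encoder
# summarizes the history of past events, a decoder predicts the time to the
# next one. Illustrative only; not the RECAST architecture.
import torch
import torch.nn as nn

class NextEventTimeModel(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Encoder: a GRU over (inter-event time, magnitude) pairs.
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden_size, batch_first=True)
        # Decoder: maps the history embedding to the expected waiting time
        # until the next event (softplus keeps it positive).
        self.decoder = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, 1), nn.Softplus(),
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, num_past_events, 2)
        _, h_last = self.encoder(history)
        return self.decoder(h_last.squeeze(0))  # (batch, 1) predicted waiting time

model = NextEventTimeModel()
fake_catalog = torch.rand(8, 50, 2)   # 8 synthetic sequences of 50 past events each
predicted_wait = model(fake_catalog)
print(predicted_wait.shape)           # torch.Size([8, 1])
```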

Dascher-Cousineau said that releasing and benchmarking the model in the paper demonstrates that it can quickly learn to do what ETAS can do, while holding vast potential to do more.

“Our model is a generative model that, just like a natural language processing model, you can generate paragraphs and paragraphs of words, and you can sample it and make synthetic catalogs,” said Dascher-Cousineau. “Part of the paper is there to convince old-school seismologists that this is a model that’s doing the right thing — we’re not overfitting.”

Boosting Earthquake Data With Enhanced Catalogs 

Earthquake catalogs, or records of earthquake data, for particular geographies can be small. That’s because to this day many come from seismic analysts who interpret scribbles of raw data that comes from seismometers. But this, too, is an area where AI researchers are building models to autonomously interpret these P waves and other signals in the data in real time.

Enhanced data is meanwhile helping to fill the void. With the labeled data in earthquake catalogs, machine learning engineers are revisiting these sources of raw data and building enhanced catalogs to get 10x to 100x the number of earthquakes for training data and categories.

“So it’s not necessarily that we put out more instruments to gather data but rather that we enhance the datasets,” said Dascher-Cousineau.

Applying Larger Datasets to Other Settings

With the larger datasets, the researchers are starting to see improvements from RECAST over the standard ETAS model.

To advance the state of the art in earthquake forecasting, Dascher-Cousineau is working with a team of undergraduates at UC Berkeley to train the model on earthquake catalogs from multiple regions for better predictions.

“I have the natural language processing analogies in mind, where it seems very plausible that earthquake sequences in Japan are useful to inform earthquakes in California,” he said. “And you can see that going in the right direction.”

Learn about synthetic data generation with NVIDIA Omniverse Replicator.

Brains of the Operation: Atlas Meditech Maps Future of Surgery With AI, Digital Twins

Just as athletes train for a game or actors rehearse for a performance, surgeons prepare ahead of an operation.

Now, Atlas Meditech is letting brain surgeons experience a new level of realism in their pre-surgery preparation with AI and physically accurate simulations.

Atlas Meditech, a brain-surgery intelligence platform, is adopting tools — including the MONAI medical imaging framework and NVIDIA Omniverse 3D development platform — to build AI-powered decision support and high-fidelity surgery rehearsal platforms. Its mission: improving surgical outcomes and patient safety.

“The Atlas provides a collection of multimedia tools for brain surgeons, allowing them to mentally rehearse an operation the night before a real surgery,” said Dr. Aaron Cohen-Gadol, founder of Atlas Meditech and its nonprofit counterpart, Neurosurgical Atlas. “With accelerated computing and digital twins, we want to transform this mental rehearsal into a highly realistic rehearsal in simulation.”

Neurosurgical Atlas offers case studies, surgical videos and 3D models of the brain to more than a million online users. Dr. Cohen-Gadol, also a professor of neurological surgery at Indiana University School of Medicine, estimates that more than 90% of brain surgery training programs in the U.S. — as well as tens of thousands of neurosurgeons in other countries — use the Atlas as a key resource during residency and early in their surgery careers.

Atlas Meditech’s Pathfinder software is integrating AI algorithms that can suggest safe surgical pathways for experts to navigate through the brain to reach a lesion.

And with NVIDIA Omniverse, a platform for connecting and building custom 3D pipelines and metaverse applications, the team aims to create custom virtual representations of individual patients’ brains for surgery rehearsal.

Custom 3D Models of Human Brains

A key benefit of Atlas Meditech’s advanced simulations — either onscreen or in immersive virtual reality — is the ability to customize the simulations, so that surgeons can practice on a virtual brain that matches the patient’s brain in size, shape and lesion position.

“Every patient’s anatomy is a little different,” said Dr. Cohen-Gadol. “What we can do now with physics and advanced graphics is create a patient-specific model of the brain and work with it to see and virtually operate on a tumor. The accuracy of the physical properties helps to recreate the experience we have in the real world during an operation.”

To create digital twins of patients’ brains, the Atlas Pathfinder tool has adopted MONAI Label, which can support radiologists by automatically annotating MRI and CT scans to segment normal structures and tumors.

“MONAI Label is the gateway to any healthcare project because it provides us with the opportunity to segment critical structures and protect them,” said Dr. Cohen-Gadol. “For the Atlas, we’re training MONAI Label to act as the eyes of the surgeon, highlighting what is a normal vessel and what’s a tumor in an individual patient’s scan.”
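
For readers curious what such a segmentation step can look like in code, below is a minimal MONAI sketch that loads an MRI volume and runs a 3D U-Net over it with sliding-window inference. The file paths, weights and label classes are placeholders; Atlas Pathfinder's actual MONAI Label configuration isn't described in this post.

```python
# Minimal sketch of MRI segmentation with MONAI, the framework Atlas Pathfinder
# builds on. The scan path, weights file and label classes are placeholders.
import torch
from monai.networks.nets import UNet
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity
from monai.inferers import sliding_window_inference

# Load and normalize a single MRI volume (path is hypothetical).
preprocess = Compose([LoadImage(image_only=True), EnsureChannelFirst(), ScaleIntensity()])
volume = preprocess("patient_001_t1.nii.gz").unsqueeze(0)  # (1, 1, D, H, W)

# A 3D U-Net with background / normal-structure / tumor output channels.
net = UNet(spatial_dims=3, in_channels=1, out_channels=3,
           channels=(16, 32, 64, 128), strides=(2, 2, 2))
net.load_state_dict(torch.load("brain_seg_weights.pt"))  # pretrained weights assumed
net.eval()

with torch.no_grad():
    logits = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                      sw_batch_size=1, predictor=net)
labels = logits.argmax(dim=1)  # voxel-wise class map used to adapt the 3D brain model
```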

With a segmented view of a patient’s brain, Atlas Pathfinder can adjust its 3D brain model to morph to the patient’s specific anatomy, capturing how the tumor deforms the normal structure of their brain tissue.

Based on the visualization — which radiologists and surgeons can modify to improve the precision — Atlas Pathfinder suggests the safest surgical approaches to access and remove a tumor without harming other parts of the brain. Each approach links out to the Atlas website, which includes a written tutorial of the operative plan.

“AI-powered decision support can make a big difference in navigating a highly complex 3D structure where every millimeter is critical,” Dr. Cohen-Gadol said.

Realistic Rehearsal Environments for Practicing Surgeons 

Atlas Meditech is using NVIDIA Omniverse to develop a virtual operating room that can immerse surgeons into a realistic environment to rehearse upcoming procedures. In the simulation, surgeons can modify how the patient and equipment are positioned.

Using a VR headset, surgeons will be able to work within this virtual environment, going step by step through the procedure and receiving feedback on how closely they are adhering to the target pathway to reach the tumor. AI algorithms can be used to predict how brain tissue would shift as a surgeon uses medical instruments during the operation, and apply that estimated shift to the simulated brain.

“The power to enable surgeons to enter a virtual, 3D space, cut a piece of the skull and rehearse the operation with a simulated brain that has very similar physical properties to the patient would be tremendous,” said Dr. Cohen-Gadol.

To better simulate the brain’s physical properties, the team adopted NVIDIA PhysX, an advanced real-time physics simulation engine that’s part of NVIDIA Omniverse. Using haptic devices, they were able to experiment with adding haptic feedback to the virtual environment, mimicking the feeling of working with brain tissue.

Envisioning AI, Robotics in the Future of Surgery Training

Dr. Cohen-Gadol believes that in the coming years AI models will be able to further enhance surgery by providing additional insights during a procedure. Examples include warning surgeons about critical brain structures that are adjacent to the area they’re working in, tracking medical instruments during surgery, and providing a guide to next steps in the surgery.

Atlas Meditech plans to explore the NVIDIA Holoscan platform for streaming AI applications to power these real-time, intraoperative insights. Applying AI analysis to a surgeon’s actions during a procedure can provide the surgeon with useful feedback to improve their technique.

In addition to being used for surgeons to rehearse operations, Dr. Cohen-Gadol says that digital twins of the brain and of the operating room could help train intelligent medical instruments such as microscope robots using Isaac Sim, a robotics simulation application developed on Omniverse.

View Dr. Cohen-Gadol’s presentation at NVIDIA GTC.

Subscribe to NVIDIA healthcare news.

Fall in Line for October With Nearly 60 New Games, Including Latest Game Pass Titles to Join the Cloud

October brings more than falling leaves and pumpkin spice lattes for GeForce NOW members. Get ready for nearly 60 new games to stream, including Forza Motorsport and 16 more PC Game Pass titles.

Assassin’s Creed Mirage leads 29 new games to hit the GeForce NOW library this week. In addition, catch a challenge to earn in-game rewards for World of Warships players.

Leap Into the Cloud

Assassin's Creed Mirage on GeForce NOW
Nothing is true. Everything is permitted … in the cloud.

It’s not an illusion — Ubisoft’s Assassin’s Creed Mirage launches in the cloud this week. Mirage was created as an homage to the first Assassin’s Creed games and pays tribute to the series’ well-loved roots.

Join the powerful proto-Assassin order — the Hidden Ones — as Basim Ibn Is’haq, a 17-year-old street thief learning to become a master assassin. Stalk the streets of a bustling and historically accurate ninth-century Baghdad — the perfect urban setting to seamlessly parkour across rooftops, scale tall towers and flee guards while uncovering a conspiracy that threatens the city and Basim’s future destiny.

Take a Leap of Faith into a GeForce NOW Ultimate membership and explore this new open world at up to 4K resolution and 120 frames per second. Ultimate members get exclusive access to GeForce RTX 4080 servers in the cloud, making it the easiest upgrade around.

No Tricks, Only Treats

Don’t be spooked — GeForce NOW has plenty of treats for members this month. More PC Game Pass games are coming soon to the cloud, including Forza Motorsport from Turn 10 Studios and Xbox Game Studios and the Dishonored series from Arkane and Bethesda.

Catch some action (with a little stealth, magic and combat mixed in) with the Dishonored franchise. Dive into a struggle of power and revenge that revolves around the assassination of the Empress of the Isles. Members can follow the whole story starting with the original Dishonored game, up through the latest entry, Dishonored: Death of the Outsider, when the series launches in the cloud this month.

Jump into all the action with an Ultimate or Priority account today, for higher performance and faster access to stream over 1,700 games.

Check out the spooktacular list for October:

  • Star Trek: Infinite (New release on Steam, Oct. 12)
  • Lords of the Fallen (New release on Steam and Epic Games Store, Oct. 13)
  • Wizard with a Gun (New release on Steam, Oct. 17)
  • Alaskan Road Truckers (New release on Steam and Epic Games Store, Oct. 18)
  • Hellboy: Web of Wyrd (New release on Steam, Oct. 18)
  • HOT WHEELS UNLEASHED 2 – Turbocharged (New release on Steam, Oct. 19)
  • Laika Aged Through Blood (New release on Steam, Oct. 19)
  • Cities: Skylines II (New release on Steam, Xbox and available on PC Game Pass, Oct. 24)
  • Ripout (New release on Steam, Oct. 24)
  • War Hospital (New release on Steam, Oct. 26)
  • Alan Wake 2 (New release on Epic Games Store, Oct. 26)
  • Headbangers: Rhythm Royale (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • Jusant (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • Bad North (Xbox, available on Microsoft Store)
  • Daymare 1994: Sandcastle (Steam)
  • For The King (Xbox, available on Microsoft Store)
  • Forza Motorsport (Steam, Xbox and available on PC Game Pass)
  • Heretic’s Fork (Steam)
  • Moonbreaker (Steam)
  • Metro Simulator 2 (Steam)
  • Narita Boy (Xbox, available on Microsoft Store)
  • Sifu (Xbox, available on Microsoft Store)
  • StalCraft (Steam)
  • Star Renegades (Xbox, available on Microsoft Store)
  • Streets of Rogue (Xbox, available on Microsoft Store)
  • Supraland (Xbox, available on Microsoft Store)
  • The Surge (Xbox, available on Microsoft Store)
  • Tiny Football (Steam)
  • Vampire Survivors (Steam and Xbox, available on PC Game Pass)
  • VEILED EXPERTS (Steam)
  • Yes, Your Grace (Xbox, available on Microsoft Store)

Come Sail Away

A new challenge awaits on the open sea.

World of Warships is launching a new in-game event this week exclusive to GeForce NOW members. From Oct. 5-9, those streaming the game on GeForce NOW will be prompted to complete a special in-game challenge chain, only available from the cloud, to earn economic reward containers and one-day GeForce NOW Priority trials. Aspiring admirals can learn more about these challenges on the World of Warships blog and social channels.

Those new to World of Warships can activate the invite code “GEFORCENOW” in the game starting today to claim exclusive rewards, including a seven-day Premium World of Warships account, 300 doubloons, credits and economic boosters. Once 15 battles are completed, players can choose one of the following tech tree ships to speed up game progress: Japanese destroyer Isokaze, American cruiser Phoenix, German battleship Moltke or British aircraft carrier Hermes.

Age Of Empires II on GeForce NOW
Conquer the cloud.

The leaves may be falling, but new games are always coming to the cloud. Dive into the action now with 29 new games this week:

  • Battle Shapers (New release on Steam, Oct. 3)
  • Disgaea 7: Vows of the Virtueless (New release on Steam, Oct. 3)
  • Station to Station (New release on Steam, Oct. 3)
  • The Lamplighter’s League (New release on Steam, Xbox and available on PC Game Pass, Oct. 3)
  • Thief Simulator 2 (New release on Steam, Oct. 4)
  • Heads Will Roll: Reforged (New release on Steam, Oct. 4)
  • Assassin’s Creed Mirage (New release on Ubisoft, Oct. 5)
  • Age of Empires II: Definitive Edition (Xbox, available on PC Game Pass)
  • Arcade Paradise (Xbox, available on PC Game Pass)
  • The Ascent (Xbox, available on Microsoft Store)
  • Citizen Sleeper (Xbox, available on PC Game Pass)
  • Dicey Dungeons (Xbox, available on PC Game Pass)
  • Godlike Burger (Epic Games Store)
  • Greedfall (Xbox, available on Microsoft Store)
  • Hypnospace Outlaw (Xbox, available on PC Game Pass)
  • Killer Frequency (Xbox, available on Microsoft Store)
  • Lonely Mountains: Downhill (Xbox, available on PC Game Pass)
  • Metro 2033 Redux (Xbox, available on Microsoft Store)
  • Metro: Last Light Redux (Xbox, available on Microsoft Store)
  • MudRunner (Xbox, available on Microsoft Store)
  • Potion Craft: Alchemist Simulator (Xbox, available on PC Game Pass)
  • Shadow Gambit: The Cursed Crew (Epic Games Store)
  • Slayers X: Terminal Aftermath: Vengance of the Slayer (Xbox, available on PC Game Pass)
  • Soccer Story (Xbox, available on PC Game Pass)
  • SOMA (Xbox, available on PC Game Pass)
  • Space Hulk: Tactics (Xbox, available on Microsoft Store)
  • SpiderHeck (Xbox, available on PC Game Pass)
  • SUPERHOT: MIND CONTROL DELETE (Xbox, available on Microsoft Store)
  • Surviving Mars (Xbox, available on Microsoft Store)

Surprises in September

On top of the 24 games announced in September, an additional 45 joined the cloud last month:

  • Void Crew (New release on Steam, Sept. 7)
  • Tavernacle! (New release on Steam, Sept. 11)
  • Gunbrella (New release on Steam, Sept. 13)
  • HumanitZ (New release on Steam, Sept. 18)
  • These Doomed Isles (New release on Steam, Sept. 25)
  • Overpass 2 (New release on Steam, Sept. 28)
  • 911 Operator (Epic Games Store)
  • A Plague Tale: Requiem (Xbox)
  • Amnesia: The Bunker (Xbox, available on PC Game Pass)
  • Airborne Kingdom (Epic Games Store)
  • Atomic Heart (Xbox)
  • BlazBlue: Cross Tag Battle (Xbox, available on PC Game Pass)
  • Bramble: The Mountain King (Xbox, available on PC Game Pass)
  • Call of the Wild: The Angler (Xbox)
  • Chained Echoes (Xbox, available on PC Game Pass)
  • Danganronpa V3: Killing Harmony (Xbox)
  • Descenders (Xbox, available on PC Game Pass)
  • Doom Eternal (Xbox, available on PC Game Pass)
  • Dordogne (Xbox, available on PC Game Pass)
  • Eastern Exorcist (Xbox, available on PC Game Pass)
  • Figment 2: Creed Valley (Xbox, available on PC Game Pass)
  • Hardspace: Shipbreaker (Xbox)
  • Insurgency: Sandstorm (Xbox)
  • I Am Fish (Xbox)
  • Last Call BBS (Xbox)
  • The Legend of Tianding (Xbox, available on PC Game Pass)
  • The Matchless Kungfu (Steam)
  • Mechwarrior 5: Mercenaries (Xbox, available on PC Game Pass)
  • Monster Sanctuary (Xbox)
  • Opus Magnum (Xbox)
  • Pizza Possum (New release on Steam, Sept. 28)
  • A Plague Tale: Innocence (Xbox)
  • Quake II (Steam, Epic Games Store and Xbox, available on PC Game Pass)
  • Remnant II (Epic Games Store)
  • Road 96 (Xbox)
  • Shadowrun: Hong Kong – Extended Edition (Xbox)
  • SnowRunner (Xbox)
  • Soulstice (New release on Epic Games Store, free on Sept. 28)
  • Space Hulk: Deathwing – Enhanced Edition (Xbox)
  • Spacelines from the Far Out (Xbox)
  • Superhot (Xbox)
  • Totally Reliable Delivery Service (Xbox, available on PC Game Pass)
  • Vampyr (Xbox)
  • Warhammer 40,000: Battlesector (Xbox, available on PC Game Pass)
  • Yooka-Laylee and the Impossible Lair (Xbox)

Halo Infinite and Kingdoms Reborn didn’t make it in September. Stay tuned to GFN Thursday for more updates.

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

A Mine-Blowing Breakthrough: Open-Ended AI Agent Voyager Autonomously Plays ‘Minecraft’

For NVIDIA Senior AI Scientist Jim Fan, the video game Minecraft served as the “perfect primordial soup” for his research on open-ended AI agents.

In the latest AI Podcast episode, host Noah Kravitz spoke with Fan about using large language models to create AI agents — specifically Voyager, an AI bot built with GPT-4 that can autonomously play Minecraft.

AI agents are models that “can proactively take actions and then perceive the world, see the consequences of its actions, and then improve itself,” Fan said. Many current AI agents are programmed to achieve specific objectives, such as beating a game as quickly as possible or answering a question. They can work autonomously toward a particular output but lack a broader decision-making agency.

Fan wondered if it was possible to have a “truly open-ended agent that can be prompted by arbitrary natural language to do open-ended, even creative things.”

But he needed a flexible playground in which to test that possibility.

“And that’s why we found Minecraft to be almost a perfect primordial soup for open-ended agents to emerge, because it sets up the environment so well,” he said. Minecraft at its core, after all, doesn’t set a specific key objective for players other than to survive and freely explore the open world.

That became the springboard for Fan’s project, MineDojo, which eventually led to the creation of the AI bot Voyager.

“Voyager leverages the power of GPT-4 to write code in JavaScript to execute in the game,” Fan explained. “GPT-4 then looks at the output, and if there’s an error from JavaScript or some feedback from the environment, GPT-4 does a self-reflection and tries to debug the code.”

The bot learns from its mistakes and stores the correctly implemented programs in a skill library for future use, allowing for “lifelong learning.”
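
A toy sketch of that loop in Python might look like the following. The llm and run_in_minecraft functions are stand-in stubs, and the retry logic is illustrative only; it is not Voyager's actual implementation.

```python
# Toy sketch of a generate -> execute -> reflect loop with a skill library.
# llm() and run_in_minecraft() are placeholder stubs, not Voyager's real code.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionResult:
    success: bool
    error: str = ""

def llm(prompt: str) -> str:
    """Placeholder for a GPT-4 call that returns a JavaScript program."""
    return "// bot.mineBlock('oak_log')"

def run_in_minecraft(program: str) -> ExecutionResult:
    """Placeholder for executing the program in the game and collecting feedback."""
    return ExecutionResult(success=random.random() > 0.5, error="TypeError: ...")

skill_library: dict[str, str] = {}  # task name -> verified program

def attempt_task(task: str, max_retries: int = 4) -> Optional[str]:
    feedback = ""
    for _ in range(max_retries):
        # Ask the model for a program, given prior skills and any error feedback.
        program = llm(f"Task: {task}\nKnown skills: {list(skill_library)}\n"
                      f"Previous error: {feedback}\nWrite JavaScript for the bot.")
        result = run_in_minecraft(program)   # execute in the environment
        if result.success:
            skill_library[task] = program    # store the skill for later reuse
            return program
        feedback = result.error              # self-reflection input for the retry
    return None

attempt_task("craft a wooden pickaxe")
```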

In-game, Voyager can autonomously explore for hours, adapting its decisions based on its environment and developing skills to combat monsters and find food when needed.

“We see all these behaviors come from the Voyager setup, the skill library and also the coding mechanism,” Fan explained. “We did not preprogram any of these behaviors.”

He then spoke more generally about the rise and trajectory of LLMs. He foresees strong applications in software, gaming and robotics, as well as increasingly pressing conversations around AI safety.

Fan encourages those looking to get involved and work with LLMs to “just do something,” whether that means using online resources or experimenting with beginner-friendly, CPU-based AI models.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

 

How AI Helps Fight Wildfires in California

California has a new weapon against the wildfires that have devastated the state: AI.

A freshly launched system powered by AI trained on NVIDIA GPUs promises to provide timely alerts to first responders across the Golden State every time a blaze ignites.

The ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego, uses advanced AI developed by DigitalPath.

Harnessing the raw power of NVIDIA GPUs and aided by a network of thousands of cameras dotting the Californian landscape, DigitalPath has refined a convolutional neural network to spot signs of fire in real time.

A Mission That’s Close to Home

DigitalPath CEO Jim Higgins said it’s a mission that means a lot to the 100-strong technology partner, which is nestled in the Sierra Nevada foothills in Chico, Calif., a short drive from the town of Paradise, where the state’s deadliest wildfire killed 85 people in 2018.

“It’s one of the main reasons we’re doing this,” Higgins said of the wildfire, the deadliest and most destructive in the history of the most populous U.S. state. “We don’t want people to lose their lives.”

The ALERTCalifornia initiative is based at UC San Diego’s Jacobs School of Engineering, the Qualcomm Institute and the Scripps Institution of Oceanography.

The program manages a network of thousands of monitoring cameras and sensor arrays and collects data that provides actionable, real-time information to inform public safety.

The AI program started in June and was initially deployed in six of CAL FIRE’s command centers. This month, it expanded to all of CAL FIRE’s 21 command centers.

ALERTCalifornia, powered by DigitalPath, can detect fires from cameras positioned across the Golden State.

DigitalPath began by building out a management platform for a network of cameras used to confirm California wildfires after a 911 call.

The company quickly realized there would be no way for people to examine images from the thousands of cameras relaying pictures to the system every 10 to 15 seconds.

So Ethan Higgins, the company’s system architect, turned to AI.

The team began by training a convolutional neural network on a cloud-based system running an NVIDIA A100 Tensor Core GPU and later transitioned to a system running on eight A100 GPUs.
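
To make the approach concrete, here is a deliberately small PyTorch sketch of a convolutional smoke/fire classifier for camera frames. DigitalPath's real network, input resolution and training data aren't public, so everything below is illustrative.

```python
# A deliberately small convolutional "smoke / no smoke" classifier for camera
# frames, to illustrate the kind of model described here. Architecture and
# input size are illustrative, not DigitalPath's actual network.
import torch
import torch.nn as nn

class FireDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # classes: no-smoke, smoke

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

device = "cuda" if torch.cuda.is_available() else "cpu"  # A100 GPUs in the real system
model = FireDetector().to(device)
frame_batch = torch.rand(4, 3, 224, 224, device=device)  # placeholder camera frames
scores = model(frame_batch).softmax(dim=1)               # per-frame smoke probability
print(scores[:, 1])
```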

The AI model is crucial for a system that examines almost 8 million images a day streaming in from over 1,000 first-party cameras, primarily in California, and thousands more from third-party sources nationwide, he said.

Impact of Wildfires

All anomalies being tracked throughout California as of Sept. 20, 2023. Image Credit: DigitalPath

It’s arriving just in time.

Wildfires have ravaged California over the past decade, burning millions of acres of land, destroying thousands of homes and businesses and claiming hundreds of lives.

According to CAL FIRE, in 2020 alone, the state experienced five of its six largest and seven of its 20 most destructive wildfires.

And the total dollar damage of wildfires in California from 2019 to 2021 was estimated at over $25 billion.

The new system promises to give first responders a crucial tool to prevent such conflagrations.

In fact, during a recent interview with DigitalPath, the system detected two separate fires in Northern California as they ignited.

Every day, the system detects between 50 and 300 events, offering invaluable real-time information to local first responders.

 

 

Beyond Detection: Enhancing Capabilities

Example of multiple cameras detecting a single anomaly. Image Credit: DigitalPath.

But AI is just part of the story.

The system is also a case study in how innovative companies can use AI to amplify their unique capabilities.

One of DigitalPath’s breakthroughs is its system’s ability to identify the same fire captured from diverse camera angles, efficiently filtering imagery down to a human-digestible level. The 8 million daily images are filtered down to just around 100 alerts, or about 1.25 thousandths of one percent of the total images captured.

“The system was designed from the start with human processing in mind,” Higgins said, ensuring that authorities receive a single, consolidated notification for every incident.

“We’ve got to catch every fire we can,” he added.

Expanding Horizons

DigitalPath eventually hopes to expand its detection technology to help California detect more kinds of natural disasters.

And having proven its worth in California, DigitalPath is now in talks with state and county officials and university research teams across the fire-prone Western United States under its ALERTWest subsidiary.

Their goal: to help partners replicate the success of UC San Diego and ALERTCalifornia, potentially shielding countless lives and homes from the wrath of wildfires.

Featured image credit: SLworking2, via Flickr, Creative Commons license, some rights reserved.

Meet the Maker: Robotics Student Rolls Out Autonomous Wheelchair With NVIDIA Jetson

With the help of AI, robots, tractors and baby strollers — even skate parks — are becoming autonomous. One developer, Kabilan KB, is bringing autonomous-navigation capabilities to wheelchairs, which could help improve mobility for people with disabilities.

The undergraduate from the Karunya Institute of Technology and Sciences in Coimbatore, India, is powering his autonomous wheelchair project using the NVIDIA Jetson platform for edge AI and robotics.

The autonomous motorized wheelchair is connected to depth and lidar sensors — along with USB cameras — which allow it to perceive the environment and plan an obstacle-free path toward a user’s desired destination.

“A person using the motorized wheelchair could provide the location they need to move to, which would already be programmed in the autonomous navigation system or path-planned with assigned numerical values,” KB said. “For example, they could press ‘one’ for the kitchen or ‘two’ for the bedroom, and the autonomous wheelchair will take them there.”

An NVIDIA Jetson Nano Developer Kit processes data from the cameras and sensors in real time. It then uses deep learning-based computer vision models to detect obstacles in the environment.

The developer kit acts as the brain of the autonomous system — generating a 2D map of its surroundings to plan a collision-free path to the destination — and sends updated signals to the motorized wheelchair to help ensure safe navigation along the way.
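
As a simplified picture of that planning step, the sketch below runs a breadth-first search over a small 2D occupancy grid, with preset destination numbers mapped to goal cells. The grid, destinations and start cell are made up for illustration; this isn't KB's code.

```python
# Toy version of 2D-map path planning: breadth-first search over an occupancy
# grid, where 1 marks an obstacle. Grid, destinations and start cell are made up.
from collections import deque

grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
destinations = {1: (0, 4), 2: (4, 0)}  # "1" = kitchen, "2" = bedroom (example mapping)

def plan_path(start, goal):
    """Return a list of grid cells from start to goal avoiding obstacles, or None."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

print(plan_path((4, 2), destinations[1]))  # waypoints a motor controller would follow
```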

About the Maker

KB, who has a background in mechanical engineering, became fascinated with AI and robotics during the pandemic, when he spent his free time looking up educational YouTube videos on the topics.

He’s now working toward a bachelor’s degree in robotics and automation at the Karunya Institute of Technology and Sciences and aspires to one day launch a robotics startup.

KB, a self-described supporter of self-education, has also received several certifications from the NVIDIA Deep Learning Institute, including “Building Video AI Applications at the Edge on Jetson Nano” and “Develop, Customize and Publish in Omniverse With Extensions.”

Once he learned the basics of robotics, he began experimenting with simulation in NVIDIA Omniverse, a platform for building and operating 3D tools and applications based on the OpenUSD framework.

“Using Omniverse for simulation, I don’t need to invest heavily in prototyping models for my robots, because I can use synthetic data generation instead,” he said. “It’s the software of the future.”

His Inspiration

With this latest NVIDIA Jetson project, KB aimed to create a device that could be helpful for his cousin, who has a mobility disorder, and other people with disabilities who might not be able to control a manual or motorized wheelchair.

“Sometimes, people don’t have the money to buy an electric wheelchair,” KB said. “In India, only upper- and middle-class people can afford them, so I decided to use the most basic type of motorized wheelchair available and connect it to the Jetson to make it autonomous.”

The personal project was funded by the Program in Global Surgery and Social Change, a joint program of Boston Children’s Hospital and Harvard Medical School.

His Jetson Project

After purchasing the basic motorized wheelchair, KB connected its motor hub with the NVIDIA Jetson Nano and lidar and depth cameras.

He trained the AI algorithms for the autonomous wheelchair using YOLO object detection on the Jetson Nano, as well as the Robot Operating System, or ROS, a popular software for building robotics applications.

The wheelchair can tap these algorithms to perceive and map its environment and plan a collision-free path.
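
The article doesn't specify which YOLO variant or framework KB used, but a detection step of this kind could look like the sketch below, using the Ultralytics package with stock pretrained weights; treat the model file and camera index as placeholders.

```python
# One possible way to run a YOLO detection step on camera frames, using the
# Ultralytics package with a stock pretrained model. Illustrative only; the
# project's actual YOLO variant and weights aren't specified in the article.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # small pretrained model; placeholder for custom weights
capture = cv2.VideoCapture(0)   # USB camera on the Jetson (index is a placeholder)

ok, frame = capture.read()
if ok:
    results = model(frame)[0]   # run detection on a single frame
    for box in results.boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # Obstacles detected here would be projected onto the 2D map for path planning.
        print(f"{cls_name}: ({x1:.0f}, {y1:.0f}) - ({x2:.0f}, {y2:.0f})")
capture.release()
```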

“The NVIDIA Jetson Nano’s real-time processing speed prevents delays or lags for the user,” said KB, who’s been working on the project’s prototype since June. The developer dives into the technical components of the autonomous wheelchair on his blog. A demo of the autonomous wheelchair has also been featured on the Karunya Innovation and Design Studio YouTube channel.

Looking forward, he envisions his project could be expanded to allow users to control a wheelchair using brain signals from electroencephalograms, or EEGs, that are connected to machine learning algorithms.

“I want to make a product that would let a person with a full mobility disorder control their wheelchair by simply thinking, ‘I want to go there,’” KB said.

Learn more about the NVIDIA Jetson platform.

CG Geek Makes VFX Look Easy This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep-diving on new GeForce RTX 40 Series GPU features, technologies and resources and how they dramatically accelerate content creation.

Releasing a 3D tutorial dubbed The Easiest VFX Tutorial Ever takes supreme confidence and the skills to back it up.

Steve Lund, a.k.a. CG Geek — the featured artist of this week’s In the NVIDIA Studio installment — has both in spades. It’s no surprise that over 1 million people have subscribed to his YouTube channel, which features tutorials on animation and visual effects (VFX), as well as select tech reviews.

CG Geek has been a content creator for over 13 years, starting with videos on stop-motion animation before moving on to 3D software. Films and movies are his primary sources of inspiration. He grew up creating short films with his family — experimenting with and implementing video effects and 3D characters — which became a critical foundation for his current work.

Artists can strengthen their creative arsenal with the new Microsoft Surface Laptop Studio 2, available for pickup today. It’s powered by GeForce RTX 4060, GeForce RTX 4050 or NVIDIA RTX 2000 Ada Generation Laptop GPUs with 13th Gen Intel Core processors, up to 64GB of RAM and a 2TB SSD. It features a bright, vibrant 14.4-inch PixelSense Flow touchscreen, a 120Hz refresh rate, and Dolby Vision IQ and HDR to deliver sharper colors.

The versatile Microsoft Surface Laptop Studio 2.

The Easiest VFX Tutorial Ever

CG Geek also happens to be a geek for Blender, the free 3D software whose enthusiasts regularly create impressive, individualistic art.

“I love the amazing Blender 3D community,” he said. “Whenever you need inspiration or creative feedback, they’re the most helpful, kind and talented collective of ever-growing artists.”

CG Geek wanted to make a tutorial that could prove that virtually anyone could get started in VFX with relative ease, from anywhere, at any time.

Work on VFX from anywhere — even the outdoors.

The first step, he instructs, is to capture video footage. To keep things simple, CG Geek recommends mounting a camera or mobile device to a tripod. Note the camera’s focal length and sensor size — critical details to input in Blender later in the process.

Keep track of the camera’s focal length and sensor size.
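
For those scripting the setup, those camera details can also be set through Blender's Python API from the Scripting workspace. The focal length and sensor width below are placeholder values; substitute the numbers from your own camera and lens.

```python
# Setting Blender's camera to match the real footage via the Python API (bpy).
# The 24 mm focal length and 23.5 mm sensor width are placeholder values.
import bpy

camera = bpy.context.scene.camera       # active scene camera
camera.data.lens = 24.0                 # focal length in millimeters
camera.data.sensor_fit = 'HORIZONTAL'
camera.data.sensor_width = 23.5         # sensor width in millimeters (APS-C example)
```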

Keep a close eye on the footage’s lighting, shadows and light intensity — it helps to snap a straight-down photo of the environment the 3D element will populate, which can be used for light bounces to create more realistic shadows.

Seasoned visual effects artists can capture and scan the entire 3D area.

Next, secure a 3D model. Create one with guidance from an NVIDIA Studio blog or watch detailed tutorials on the Studio YouTube channel. Alternatively, look online for a 3D model equipped with basic physically based rendering materials, as well as a roughness and normal map.

Sketchfab is an excellent resource for acquiring 3D models.

Next, combine the video footage and 3D materials. Open Blender, import the video footage and line up the 3D grid floor to the surface where the model will be presented. The 3D grid doubles as a shadow catcher that will grab the shadows being cast from the 3D elements. With an added texture, lighting will bounce back against the object, resulting in heightened realism.

The 3D grid floor will determine where the 3D model will be placed.

Then, light the 3D model to match the video footage. Most commonly, this is achieved by acquiring a high-dynamic range image (HDRI), a panorama with lighting data. CG Geek recommends Poly Haven for free, high-quality HDRIs. The key is picking one that resembles the lighting, color, shadow and intensity of the video footage.

Poly Haven has HDRIs for use in VFX work.

Use the HDRI lighting to align the sun’s rotation with the shadows of the footage, adding further realism.

Lighting adjustments in Blender.
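
If you prefer to script the lighting setup, the snippet below loads an HDRI as the world environment through Blender's Python API and rotates it to line the sun up with the footage. The file path and rotation value are placeholders.

```python
# Loading an HDRI as the world environment and rotating it to match the sun,
# via Blender's Python API. Path and rotation are placeholders for your values.
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/forest_hdri_4k.exr")  # placeholder HDRI

mapping = nodes.new("ShaderNodeMapping")
mapping.inputs["Rotation"].default_value[2] = 1.2   # spin around Z to align the sun
coords = nodes.new("ShaderNodeTexCoord")

links.new(coords.outputs["Generated"], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], env.inputs["Vector"])
links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```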

From there, import camera information into Blender and render out passes for the 3D model over a transparent background in Cycles. Create as many render layers as possible for added post-render editing flexibility, especially in compositing. Shadowcatcher, glossy passes, Z depth and ambient occlusion layers are recommended for advanced users.

Speedy renders in Blender on NVIDIA Studio hardware.
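
One way to configure that render from Blender's Python API is sketched below: Cycles on the GPU, a transparent film so the model renders over the footage, and the extra passes mentioned above. The settings are illustrative defaults, not CG Geek's exact configuration.

```python
# Example Cycles setup for rendering the 3D model over a transparent background
# with extra passes for compositing. Settings are illustrative defaults.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'                       # uses OptiX/CUDA when available
scene.render.film_transparent = True              # render the model over transparency

view_layer = bpy.context.view_layer
view_layer.use_pass_z = True                      # Z depth
view_layer.use_pass_ambient_occlusion = True      # ambient occlusion
view_layer.use_pass_glossy_direct = True          # glossy pass
view_layer.cycles.use_pass_shadow_catcher = True  # shadow catcher pass

bpy.ops.render.render(write_still=True)           # render to the configured output path
```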

These layers can then be combined in popular creative apps like Adobe Premiere Pro, After Effects, Blackmagic Design’s DaVinci Resolve or any of the over 100 NVIDIA RTX GPU-accelerated apps. This workflow, in particular, will be completed in Blender’s custom compositor.

Speedy renders in Blender.

Add shadows to the live footage with a multiply overlay. Then, carry over the 3D elements render layer to adjust the intensity of the shadows, helping them mesh better with the video capture. Individual layers can be edited to match the desired tone.

CG Geek made use of Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport. “Rendering in Cycles with multiple render layers and passes, along with the NVIDIA OptiX Denoiser, made animations and early tests a breeze,” he said.

“All my rendering changes can be visualized in real time thanks to the power of NVIDIA Studio before ever even hitting that button.” – CG Geek 

Finally, perform simple masking on areas where the 3D model passes in front of or behind objects. CG Geek’s one-minute YouTube tutorial can help guide this process. DaVinci Resolve or Premiere Pro’s AI-powered magic mask features can further speed the process by automatically masking background elements, saving the effort of painstakingly editing videos frame by frame.

These AI features are all accelerated by the GeForce RTX 4070 GPU equipped in CG Geek’s ASUS Zenbook 14 NVIDIA Studio laptop.

An entire workflow in a single shot.

“NVIDIA Studio laptops powered by RTX GPUs are great for portability and speed in a compact form factor.” – CG Geek

For CG Geek, getting reps in, making mistakes and strengthening weaknesses are the keys to evolving as an artist. “Don’t get hung up on the details!” he stressed. “Give yourself a deadline and then get started on another project.”

For more on the basics of 3D VFX and CGI with Blender, accelerated by the NVIDIA Studio platform and RTX GPUs, watch his featured five-minute tutorial.

Content creator CG Geek.

Check out CG Geek on YouTube.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.
