Cooler weather, the changing colors of the leaves, the needless addition of pumpkin spice to just about everything, and discount Halloween candy are just some things to look forward to in the fall.
GeForce NOW members can add one more thing to the list — 25 games joining the cloud gaming library in October, including day-and-date releases like A Plague Tale: Requiem, Victoria 3 and others.
Let’s start off the cooler months with the six games streaming on GeForce NOW today.
Arriving in October
There’s a heap of gaming goodness in store for GeForce NOW members this month.
A tale continues when A Plague Tale: Requiem releases Tuesday, Oct. 18, enhanced with ray-traced effects for RTX 3080 and Priority members.
After escaping their devastated homeland in the critically acclaimed A Plague Tale: Innocence, siblings Amicia and Hugo venture south of 14th-century France to new regions and vibrant cities. But when Hugo’s powers reawaken, death and destruction return in a flood of devouring rats. Forced to flee once more, the siblings place their hopes in a prophesied island that may hold the key to saving Hugo.
The new adventure begins soon — streaming even to Macs and mobile devices with the power of the cloud — so make sure to add the game to your wishlist to start playing when it’s released.
On top of that, check out the rest of the games coming this month:
Asterigos: Curse of the Stars (New release on Steam, Oct. 11)
Kamiwaza: Way of the Thief (New release on Steam, Oct. 11)
Ozymandias: Bronze Age Empire Sim (New release on Steam, Oct. 11)
The great thing about GFN Thursday is that there are new games every week, so there’s no need to wait until Halloween to treat yourself to great gaming. Six games arrive today, including the new release of Dakar Desert Rally with support for NVIDIA DLSS technology.
Dakar Desert Rally captures the speed and excitement of Amaury Sport Organisation’s largest rally race, with a wide variety of licensed vehicles from the world’s top makers. An in-game dynamic weather system means racers will need to overcome the elements as well as the competition to win. Unique challenges and fierce, online multiplayer races are available for all members, whether an off-road simulation diehard or a casual racing fan.
This week also brings the latest season of Ubisoft’s Roller Champions. “Dragon’s Way” includes new maps, effects, cosmetics, emotes, gear and other seasonal goodies to bring out gamers’ inner beasts.
Here’s the full list of new games coming to the cloud this week:
Thanks to earbuds, people can take calls anywhere, while doing anything. The problem: those on the other end of the call can hear all the background noise, too, whether it’s the roommate’s vacuum cleaner or neighboring conversations at a café.
Now, work by a trio of graduate students at the University of Washington, who spent the pandemic cooped up together in a noisy apartment, lets those on the other end of the call hear just the speaker — rather than all the surrounding sounds.
Users found that the system, dubbed “ClearBuds” — presented last month at the ACM International Conference on Mobile Systems, Applications and Services — suppressed background noise much better than a commercially available alternative.
AI Podcast host Noah Kravitz caught up with the team behind ClearBuds to discuss the unlikely pandemic-time origin story behind a technology that promises to make calls clearer and easier, wherever we go.
Audio Analytic has been using machine learning that enables a vast array of devices to make sense of the world of sound. Dr. Chris Mitchell, CEO and founder of Audio Analytic, discusses the challenges and the fun involved in teaching machines to listen.
Overjet, a member of the NVIDIA Inception program for startups, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of Overjet, talks about how her company improves patient care with AI-powered technology that analyzes and annotates X-rays for dentists and insurance providers.
Maya Ackerman is the CEO of WaveAI, a Silicon Valley startup using AI and machine learning to, as the company motto puts it, “unlock new heights of human creative expression.” She discusses WaveAI’s LyricStudio software, an AI-based lyric and poetry writing assistant.
Subscribe to the AI Podcast: Now Available on Amazon Music
Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.
When not engrossed in his studies toward a Ph.D. in statistics, conducting data-driven research on AI and robotics, or enjoying his favorite hobby of sailing, Yizhou Zhao is winning contests for developers who use NVIDIA Omniverse — a platform for connecting and building custom 3D pipelines and metaverse applications.
The fifth-year doctoral candidate at the University of California, Los Angeles recently received first place in the inaugural #ExtendOmniverse contest, where developers were invited to create their own Omniverse extension for a chance to win an NVIDIA RTX GPU.
Omniverse extensions are core building blocks that let anyone create and extend functions of Omniverse apps using the popular Python programming language.
Zhao’s winning entry, called “IndoorKit,” allows users to easily load and record robotics simulation tasks in indoor scenes. It sets up robotics manipulation tasks by automatically populating scenes with the indoor environment, the bot and other objects with just a few clicks.
“Typically, it’s hard to deploy a robotics task in simulation without a lot of skills in scene building, layout sampling and robot control,” Zhao said. “By bringing assets into Omniverse’s powerful user interface using the Universal Scene Description framework, my extension achieves instant scene setup and accurate control of the robot.”
Within “IndoorKit,” users can simply click “add object,” “add house,” “load scene,” “record scene” and other buttons to manipulate aspects of the environment and dive right into robotics simulation.
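For readers curious what a button-driven extension like this looks like in code, here is a minimal sketch of an Omniverse Kit extension built with the omni.ext and omni.ui Python APIs. The window, button labels and the _load_scene/_record_scene helpers are hypothetical stand-ins for illustration, not Zhao’s actual “IndoorKit” implementation.

```python
# Minimal sketch of an Omniverse Kit extension with a simple UI, loosely
# in the spirit of "IndoorKit". Button names and the helper methods are
# hypothetical placeholders, not the actual IndoorKit code.
import omni.ext
import omni.ui as ui


class IndoorKitLikeExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Build a small window with buttons that trigger scene actions.
        self._window = ui.Window("IndoorKit-like Demo", width=300, height=200)
        with self._window.frame:
            with ui.VStack(spacing=4):
                ui.Button("Load Scene", clicked_fn=self._load_scene)
                ui.Button("Record Scene", clicked_fn=self._record_scene)

    def on_shutdown(self):
        self._window = None

    def _load_scene(self):
        # Placeholder: populate the stage with a house, a robot and objects.
        print("Loading indoor scene...")

    def _record_scene(self):
        # Placeholder: start recording the robotics simulation task.
        print("Recording simulation task...")
```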
The “IndoorKit” extension also relies on assets from the NVIDIA Isaac Sim robotics simulation platform and Omniverse’s built-in PhysX capabilities for accurate, articulated manipulation of the bots.
In addition, “IndoorKit” can randomize a scene’s lighting, room materials and more. One scene Zhao built with the extension is highlighted in the feature video above.
Omniverse for Robotics
The “IndoorKit” extension bridges Omniverse and robotics research in simulation.
“I don’t see how accurate robot control was performed prior to Omniverse,” Zhao said. He provides four main reasons why Omniverse was the ideal platform on which to build this extension:
First, Python’s popularity means many developers can build extensions with it to unlock machine learning and deep learning research for a broader audience, he said.
Second, using NVIDIA RTX GPUs with Omniverse greatly accelerates robot control and training.
Third, Omniverse’s ray-tracing technology enables real-time, photorealistic rendering of his scenes. This saves 90% of the time Zhao used to spend on experiment setup and simulation, he said.
And fourth, Omniverse’s real-time advanced physics simulation engine, PhysX, supports an extensive range of features — including liquid, particle and soft-body simulation — which “land on the frontier of robotics studies,” according to Zhao.
“The future of art, engineering and research is in the spirit of connecting everything: modeling, animation and simulation,” he said. “And Omniverse brings it all together.”
Julien Salinas wears many hats. He’s an entrepreneur, software developer and, until lately, a volunteer fireman in his mountain village an hour’s drive from Grenoble, a tech hub in southeast France.
He’s nurturing a two-year-old startup, NLP Cloud, that’s already profitable, employs about a dozen people and serves customers around the globe. It’s one of many companies worldwide using NVIDIA software to deploy some of today’s most complex and powerful AI models.
NLP Cloud is an AI-powered software service for text data. A major European airline uses it to summarize internet news for its employees. A small healthcare company employs it to parse patient requests for prescription refills. An online app uses it to let kids talk to their favorite cartoon characters.
Large Language Models Speak Volumes
It’s all part of the magic of natural language processing (NLP), a popular form of AI that’s spawning some of the planet’s biggest neural networks called large language models. Trained with huge datasets on powerful systems, LLMs can handle all sorts of jobs such as recognizing and generating text with amazing accuracy.
NLP Cloud uses about 25 LLMs today; the largest has 20 billion parameters, a key measure of a model’s sophistication. And now it’s implementing BLOOM, an LLM with a whopping 176 billion parameters.
Running these massive models in production efficiently across multiple cloud services is hard work. That’s why Salinas turns to NVIDIA Triton Inference Server.
High Throughput, Low Latency
“Very quickly the main challenge we faced was server costs,” Salinas said, proud his self-funded startup has not taken any outside backing to date.
“Triton turned out to be a great way to make full use of the GPUs at our disposal,” he said.
For example, NVIDIA A100 Tensor Core GPUs can process as many as 10 requests at a time — twice the throughput of alternative software — thanks to FasterTransformer, a part of Triton that automates complex jobs like splitting up models across many GPUs.
FasterTransformer also helps NLP Cloud spread jobs that require more memory across multiple NVIDIA T4 GPUs while shaving the response time for the task.
Customers who demand the fastest response times can process 50 tokens — text elements like words or punctuation marks — in as little as half a second with Triton on an A100 GPU, about a third of the response time without Triton.
“That’s very cool,” said Salinas, who’s reviewed dozens of software tools on his personal blog.
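To make the serving setup more concrete, here is a minimal, illustrative sketch of how a client might send a text prompt to a model hosted on Triton Inference Server using the tritonclient Python package. The model name and tensor names are assumptions chosen for the example, not NLP Cloud’s actual deployment or configuration.

```python
# Illustrative sketch of querying a text-generation model served by
# NVIDIA Triton Inference Server over HTTP. The model name and tensor
# names ("gpt_model", "INPUT_TEXT", "OUTPUT_TEXT") are assumptions made
# for this example, not NLP Cloud's real setup.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Prepare a single string prompt as a BYTES tensor of shape [1, 1].
prompt = np.array([["Summarize: GPUs accelerate inference."]], dtype=object)
inp = httpclient.InferInput("INPUT_TEXT", [1, 1], "BYTES")
inp.set_data_from_numpy(prompt)

out = httpclient.InferRequestedOutput("OUTPUT_TEXT")

# Send the request and read the generated text back.
result = client.infer(model_name="gpt_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT_TEXT"))
```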
Touring Triton’s Users
Around the globe, other startups and established giants are using Triton to get the most out of LLMs.
Microsoft’s Translate service helped disaster workers understand Haitian Creole while responding to a 7.0 earthquake. It was one of many use cases for the service that got a 27x speedup using Triton to run inference on models with up to 5 billion parameters.
NLP provider Cohere was founded by one of the AI researchers who wrote the seminal paper that defined transformer models. It’s getting up to 4x speedups on inference using Triton on its custom LLMs, so users of customer support chatbots, for example, get swift responses to their queries.
NLP Cloud and Cohere are among many members of the NVIDIA Inception program, which nurtures cutting-edge startups. Several other Inception startups also use Triton for AI inference on LLMs.
Tokyo-based rinna created chatbots used by millions in Japan, as well as tools to let developers build custom chatbots and AI-powered characters. Triton helped the company achieve inference latency of less than two seconds on GPUs.
In Tel Aviv, Tabnine runs a service that’s automated up to 30% of the code written by a million developers globally (see a demo below). Its service runs multiple LLMs on A100 GPUs with Triton to handle more than 20 programming languages and 15 code editors.
Twitter uses the LLM service of Writer, based in San Francisco. It ensures the social network’s employees write in a voice that adheres to the company’s style guide. Writer’s service achieves a 3x lower latency and up to 4x greater throughput using Triton compared to prior software.
If you want to put a face to those words, Inception member Ex-human, just down the street from Writer, helps users create realistic avatars for games, chatbots and virtual reality applications. With Triton, it delivers response times of less than a second on an LLM with 6 billion parameters while reducing GPU memory consumption by a third.
A Full-Stack Platform
Back in France, NLP Cloud is now using other elements of the NVIDIA AI platform.
For inference on models running on a single GPU, it’s adopting NVIDIA TensorRT software to minimize latency. “We’re getting blazing-fast performance with it, and latency is really going down,” Salinas said.
The company also started training custom versions of LLMs to support more languages and enhance efficiency. For that work, it’s adopting NVIDIA NeMo Megatron, an end-to-end framework for training and deploying LLMs with trillions of parameters.
The 35-year-old Salinas has the energy of a 20-something for coding and growing his business. He describes plans to build private infrastructure to complement the four public cloud services the startup uses, as well as to expand into LLMs that handle speech and text-to-image to address applications like semantic search.
“I always loved coding, but being a good developer is not enough: You have to understand your customers’ needs,” said Salinas, who posted code on GitHub nearly 200 times last year.
If you’re passionate about software, learn the latest on Triton in this technical blog.
Planes taxiing for long periods due to ground traffic — or circling the airport while awaiting clearance to land — don’t just make travelers impatient. They burn fuel unnecessarily, harming the environment and adding to airlines’ costs.
Searidge Technologies, based in Ottawa, Canada, has created AI-powered software to help the aviation industry avoid such issues, increasing efficiency and enhancing safety for airports.
Its Digital Tower and Apron solutions, powered by NVIDIA GPUs, use vision AI to manage traffic control for airports and alert users of safety concerns in real time. Searidge enables airports to handle 15-30% more aircraft per hour and reduce the number of tarmac incidents.
The company’s tech is used across the world, including at London’s Heathrow Airport, Fort Lauderdale-Hollywood International Airport in Florida and Dubai International Airport, to name a few.
In June, Searidge’s Digital Apron and Tower Management System (DATMS) went operational at Hong Kong International Airport as part of an initial phase of the Airport Authority Hong Kong’s large-scale expansion plan, which will bring machine learning to a new, integrated airport operations center.
In addition, Searidge provides the Civil Aviation Department of Hong Kong’s air-traffic control systems with next-generation safety enhancements using its vision AI software.
The deployment in Hong Kong is the industry’s largest digital platform for tower and apron management — and the first collaboration between an airport and an air-navigation service provider for a single digital platform.
Searidge is a member of NVIDIA Metropolis, a partner program focused on bringing to market a new generation of vision AI applications that make the world’s most important spaces and operations safer and more efficient.
Digital Tools for Airports
The early 2000s saw massive growth and restructuring of airports — and with this came increased use of digital tools in the aviation industry.
Founded in 2006, Searidge was one of the first companies to bring machine learning to video processing in the aviation space, according to Pat Urbanek, the company’s vice president of business development for Asia Pacific and the Middle East.
“Video processing software for air-traffic control didn’t exist before,” Urbanek said. “It’s taken a decade to become mainstream — but now, intelligent video and machine learning have been brought into airport operations, enabling new levels of automation in air-traffic control and airside operations to enhance safety and efficiency.”
DATMS’s underlying machine learning platform, called Aimee, enables traffic-lighting automation based on data from radars and 4K-resolution video cameras. Aimee is trained to detect aircraft and vehicles. And DATMS is programmed based on the complex roadway rules that determine how buses and other vehicles should operate on service roads across taxiways.
After analyzing video data, the AI-enabled system activates or deactivates airports’ traffic lights in real time, based on when it’s appropriate for passenger buses and other vehicles to move. The status of each traffic light and additional details can also be visualized on end-user screens in airport traffic control rooms.
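As a purely hypothetical sketch of the decision loop just described (detections in, roadway rules applied, traffic lights out), consider the Python outline below. None of the class or function names are Searidge’s; they only illustrate the flow from vision-AI detections to light states.

```python
# Hypothetical sketch of an airport service-road light controller.
# All names here are invented for illustration; this is not Searidge's API.
from dataclasses import dataclass


@dataclass
class Detection:
    kind: str          # e.g. "aircraft" or "vehicle"
    zone: str          # e.g. "taxiway_crossing_3"
    speed_mps: float   # current speed of the detected object


def crossing_is_clear(detections, zone):
    # A crossing is clear when no aircraft occupies the zone that the
    # service road traverses.
    return not any(d.kind == "aircraft" and d.zone == zone for d in detections)


def update_lights(detections, crossings, set_light):
    # set_light(zone, state) would drive the physical traffic light and
    # mirror its status to controllers' screens.
    for zone in crossings:
        state = "green" if crossing_is_clear(detections, zone) else "red"
        set_light(zone, state)
```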
“What size is an aircraft? Does it have enough space to turn on the runway? Is it going too fast? All of this information and more is sent out over the Searidge Platform and displayed on screen based on user preference,” said Marco Rueckert, vice president of technology at Searidge.
The same underlying technology is applied to provide enhanced safety alerts for aircraft departure and arrival. In real time, DATMS alerts air traffic controllers of safety-standard breaches — taking into consideration clearances for aircraft to enter a runway, takeoff or land.
Speedups With NVIDIA GPUs
Searidge uses NVIDIA GPUs to optimize inference throughput across its deployments at airports around the globe. To train its AI models, Searidge uses an NVIDIA DGX A100 system.
“The NVIDIA platform allowed us to really bring down the hardware footprint and costs from the customer’s perspective,” Rueckert said. “It provides the scalability factor, so we can easily add more cameras with increasing resolution, which ultimately helps us solve more problems and address more customer needs.”
The company is also exploring the integration of voice data — based on communication between pilots and air-traffic controllers — within its machine learning platform to further enhance airport operations.
Searidge’s Digital Tower and Apron solutions can be customized for the unique challenges that come with varying airport layouts and traffic patterns.
“Of course, having aircraft land on time and letting passengers make their connections increases business and efficiency, but our technology has an environmental impact as well,” Urbanek said. “It can prevent burning of huge amounts of fuel — in the air or at the gate — by providing enhanced efficiency and safety for taxiing, takeoff and landing.”
Watch the latest GTC keynote by NVIDIA founder and CEO Jensen Huang to discover how vision AI and other groundbreaking technologies are shaping the world:
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll be deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
TwitchCon — the world’s top gathering of live streamers — kicks off Friday with the new line of GeForce RTX 40 Series GPUs bringing incredible new technology — from AV1 to AI — to elevate live streams for aspiring and professional Twitch creators alike.
In addition, creator and educator EposVox is in the NVIDIA Studio to discuss his influences, inspiration and advice for getting the most out of live streams.
Plus, join the #From2Dto3D challenge this month by sharing a 2D piece of art next to a 3D rendition of it for a chance to be featured on the NVIDIA Studio social media channels. Be sure to tag #From2Dto3D to enter.
AV1 and Done
Releasing on Oct. 12, the new GeForce RTX 40 Series GPUs feature the eighth-generation NVIDIA video encoder, NVENC for short, now with support for AV1 encoding. For creators like EposVox, the new AV1 encoder will deliver 40% increased efficiency, unlocking higher resolutions and crisper image quality.
NVIDIA has collaborated with OBS Studio to add AV1 support to its next software release, expected later this month. In addition, Discord is enabling AV1 end to end for the first time later this year. GeForce RTX 40 Series owners will be able to stream with crisp, clear image quality at 1440p and even 4K resolution at 60 frames per second.
GeForce RTX 40 Series GPUs also feature dual encoders, allowing creators to capture up to 8K60. And when it’s time to cut a VOD of live streams, the dual encoders work in tandem, dividing the work automatically and slashing export times nearly in half. Blackmagic Design’s DaVinci Resolve, the popular Voukoder plugin for Adobe Premiere Pro, and Jianying — the top video editing app in China — are all enabling dual-encoder support through encode presets, expected in October.
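For creators who script their exports, hardware AV1 encoding is also exposed through tools like FFmpeg. The snippet below is a hedged example of transcoding a recording with the av1_nvenc encoder from Python; it assumes an FFmpeg build with av1_nvenc support, a GPU whose NVENC supports AV1 (such as a GeForce RTX 40 Series card), and file names chosen purely for illustration.

```python
# Illustrative example: transcode a stream recording to AV1 using
# FFmpeg's NVENC AV1 encoder. File names and bitrate are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "stream_recording.mp4",   # illustrative input file
        "-c:v", "av1_nvenc",            # hardware AV1 encode on NVENC
        "-b:v", "8M",                   # example target bitrate; adjust as needed
        "-c:a", "copy",                 # keep the original audio track
        "stream_recording_av1.mp4",
    ],
    check=True,
)
```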
The GeForce RTX 40 Series GPUs also give game streamers an unprecedented gen-to-gen frame-rate boost in PC games alongside the new NVIDIA DLSS 3 technology, which accelerates performance by up to 4x. This will unlock richer, more immersive ray-traced experiences to share via live streams, such as in Cyberpunk 2077 and Portal with RTX.
Virtual Live Streams Come to Life
VTube Studio is a leading app for virtual streamers (VTubers) that makes it easy and fun to bring digital avatars to life on a live stream.
VTube Studio is adding support this month for the NVIDIA Broadcast AR SDK, allowing users to seamlessly control their avatars with AI by using a regular webcam and a GeForce RTX GPU.
Objectively Blissful Streaming
OBS doesn’t stand for objectively blissful streaming, but it should.
OBS Studio is free, open-source software for video recording and live streaming. It’s one of EposVox’s essential apps, as he said it “allows me to produce my content at a rapid pace that’s constantly evolving.”
The software now features native integration of the AI-powered NVIDIA Broadcast effects, including Virtual Background, Noise Removal and Room Echo Removal.
In addition to adding AV1 support for GeForce RTX 40 Series GPUs later this month, the recent OBS 28.0 release added support for high-efficiency video coding (HEVC or H.265), improving video compression rates by 15% across a wide range of NVIDIA GPUs. It also now includes support for high-dynamic range (HDR), offering a greater range of bright and dark colors, which brings stunning vibrance and dramatic improvements in visual quality.
Broadcast for All
The SDKs that power NVIDIA Broadcast are available to developers, enabling native AI feature support in devices from Logitech, Corsair and Elgato, as well as advanced workflows in OBS and Notch software.
Features released last month at NVIDIA GTC include new and updated AI-powered effects.
Virtual Background now includes temporal information, so random objects in the background will no longer create distractions by flashing in and out. This will be available in the next major version of OBS Studio.
Face Expression Estimation allows apps to accurately track facial expressions for face meshes, even with the simplest of webcams. It’s hugely beneficial to VTubers and can be found in the next version of VTube Studio.
Eye Contact allows podcasters to appear as if they’re looking directly at the camera — highly useful when the user is reading a script or looking away to engage with viewers in the chat window.
It’s EposVox’s World, We’re All Just Living in It
Adam Taylor, who goes by the stage name EposVox or “The Stream Professor,” runs a YouTube channel focused on tech education for content creators and streamers.
He’s been making videos since before YouTube even existed.
“DailyMotion, Google Video, does anyone remember MetaCafe? X-Fire?” said EposVox.
He maintains a strong passion for educational content, which stemmed from his desire to learn video editing workflows as a young man, when he lacked the wealth of knowledge and resources available today.
“I immediately ran into constant walls of information that were kept behind closed doors when it came to deeper video topics, audio setups and more,” the artist said. “It was really frustrating — there was nothing and no one, aside from a decade or two of DOOM9 forums and outdated broadcast books, that had ever heard of a USB port to help guide me.”
While content creation and live streaming, especially with software like OBS Studio and XSplit, are EposVox’s primary focuses, he also aspires to make technology more fun and easy to use.
“The GPU acceleration in 3D and video apps, and now all the AI innovations that are coming to new generations, are incredible — I’m not sure I’d be able to create on the level that I do, nor at the speed I do, without NVIDIA GPUs.”
When searching for content inspiration, EposVox deploys a proactive approach — he’s all about asking questions. “Whether it’s trying to figure out how to do some overkill new setup for myself, breaking down neat effects I see elsewhere, or just asking which point in the process might cause friction for a viewer — I ask questions, figure out the best way to answer those questions, and deliver them to viewers,” he said.
EposVox stressed the importance of experimenting with multiple creative applications, noting that “every tool I can add to my tool chest enhances my creativity by giving me more options or ways to create, and more experiences with new processes for creating things.” This is especially true for the use of AI in his creative workflows, he added.
“What I love about AI art generation right now is the fact that I can just type any idea that comes to mind, in plain text language, and see it come to life,” he said. “I may not get exactly what I was expecting, I may have to continue refining my language and ideas to approach the representation I’m after — but knocking down the barrier between idea conception and seeing some form of that idea in front of me, I cannot overstate the impact that is created here.”
For an optimal live-streaming setup, EposVox recommends a PC equipped with a GeForce RTX GPU. His GeForce RTX 3090 desktop GPU, he said, can handle the rigors of the entire creative process and remain fast even when he’s constantly switching between computationally complex creative applications.
The artist said, “These days, I use GPU-accelerated NVENC encoding for capturing, exporting videos and live streaming.”
EposVox can’t wait for his GeForce RTX 4090 GPU upgrade, primarily to take advantage of the new dual encoders, noting: “I’ll probably end up saving a few hours a day since less time waiting on renders and uploads means I can move from project to project much quicker, rather than having to walk away and work on other things. I’ll be able to focus so much more.”
When asked for parting advice, EposVox didn’t hesitate: “If you commit to a creative vision for a project, but the entity you’re making it for — the company, agency, person or whomever — takes the project in a completely different direction, find some way to still bring your vision to life,” he said. “You’ll be so much better off — in terms of how you feel and the experience gained — if you can still bring that to life.”
For more tips on live streaming and video exports, check out EposVox’s YouTube channel.
And for step-by-step tutorials for all creative fields — created by industry-leading artists and community showcases — check out the NVIDIA Studio YouTube channel.
“Our goal was to create something that had never been done before,” said Gabriele Leone, creative director at NVIDIA, who led a team of over 30 artists working around the globe with nearly a dozen design tools to complete the project in just three months.
That something is a fully simulated, real-time playable environment — inspired by the team’s shared favorite childhood game, Re-Volt. In Racer RTX, radio-controlled cars zoom through Los Angeles streets, a desert and a chic loft bedroom.
The demo consists entirely of simulation, rather than animation. This means that its 1,800+ hand-modeled and textured 3D models — whether the radio-controlled cars or the dominos they knock over while racing — didn’t require traditional 3D design tasks like baking or precomputation, the presetting of lighting and other properties for environments and assets.
Instead, the assets react to the changing virtual environment in real time while obeying the laws of physics. This is enabled by the real-time, advanced physics simulation engine, PhysX, which is built into NVIDIA Omniverse, a platform for connecting and building custom 3D pipelines and metaverse applications.
The cars leave dust trails shaped by the turbulence of passing vehicles, and sand deforms under the racers’ wheels as the tires drift.
And with the Omniverse RTX Renderer, lighting can be physically simulated with a click, changing throughout the environment and across surfaces based on whether it’s dawn, day or dusk in the scenes, which are set in Los Angeles’ buzzing beach town of Venice.
Connecting Apps and Workflows
Racer RTX was created to test the limits of the new NVIDIA Ada Lovelace architecture — and steer creators and developers toward a new future of their work.
“We wanted to demonstrate the next generation of content creation, where worlds will no longer be prebaked, but physically accurate, full simulations,” Leone said.
The result showcases high-fidelity, hyper-realistic physics and real-time ray tracing enabled by Omniverse — in 4K resolution at 60 frames per second, running with Ada and the new DLSS 3 technology.
“Our globally spread team used nearly a dozen different design and content-creation tools — bringing everything together in Omniverse using the ground-truth, extensible Universal Scene Description framework,” Leone added.
The NVIDIA artists began the project by sketching initial concept art and taking a slew of reference photos on the west side of Los Angeles. Next, they turned to software like Autodesk 3ds Max, Autodesk Maya, Blender, Cinema4D and many more to create the 3D assets, the vast majority of which were modeled by hand.
To add texture to the props, the artists used Adobe Substance 3D Designer and Adobe Substance 3D Painter. They then exported the files from these apps using the USD open 3D framework — and brought them into Omniverse Create for real-time collaboration in the virtual world.
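As a rough sketch of that hand-off, the snippet below uses the open-source pxr USD Python API to author a simple prop and save it as a .usda file that Omniverse Create could then reference. The file name, prim paths and material are illustrative assumptions, not the team’s actual assets.

```python
# Minimal sketch of authoring a prop as USD with the open-source pxr API
# so it can be referenced in Omniverse Create. File name, prim paths and
# material are placeholders for illustration only.
from pxr import Usd, UsdGeom, UsdShade

stage = Usd.Stage.CreateNew("rc_car_prop.usda")
world = UsdGeom.Xform.Define(stage, "/World")
stage.SetDefaultPrim(world.GetPrim())

# A mesh prim standing in for a hand-modeled asset; in a real pipeline the
# geometry would come from Maya, Blender, 3ds Max or another DCC tool.
mesh = UsdGeom.Mesh.Define(stage, "/World/RCCarBody")

# A material prim; Substance textures would be wired into its shading network.
material = UsdShade.Material.Define(stage, "/World/Looks/CarPaint")
UsdShade.MaterialBindingAPI(mesh.GetPrim()).Bind(material)

stage.GetRootLayer().Save()
```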
Hyper-Realistic Physics
The RC cars in Racer RTX are each modeled with up to 70 individual pieces, including joints and suspensions, all with physics properties.
“Each car, each domino, every object in the demo has a different center of mass and weight depending on real-world parameters, so they act differently according to the laws of physics,” Leone said. “We can change the material of the floors, too, from sand to wood to ice — and use Omniverse’s native PhysX feature to make the vehicles drift along the surface with physically accurate friction.”
And to make the dust kick up behind the cars as they would in the real world, the artists used the NVIDIA Flow application for smoke, fluid and fire simulation.
In addition, the team created their own tools for the project-specific workflow, including Omniverse extensions — core building blocks that enable anyone to create and extend functionalities of Omniverse apps with just a few lines of Python code — to randomize and align objects in the scene.
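As a hypothetical example of what a “randomize objects” helper inside such an extension might look like, the sketch below jitters the translation of every prim under an assumed /World/Props scope using Omniverse’s Python USD bindings. The scope path and spread value are assumptions for illustration, not the team’s actual tooling.

```python
# Sketch of a "randomize objects" helper an Omniverse extension might
# expose: jitter the translation of every prim under a given scope.
# The /World/Props path and spread are illustrative assumptions.
import random

import omni.usd
from pxr import Gf, UsdGeom


def randomize_translations(scope_path="/World/Props", spread=50.0):
    stage = omni.usd.get_context().get_stage()
    scope = stage.GetPrimAtPath(scope_path)
    for prim in scope.GetChildren():
        if not prim.IsA(UsdGeom.Xformable):
            continue
        offset = Gf.Vec3d(
            random.uniform(-spread, spread),
            0.0,
            random.uniform(-spread, spread),
        )
        # XformCommonAPI provides a simple translate/rotate/scale interface.
        UsdGeom.XformCommonAPI(prim).SetTranslate(offset)
```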
The extensions, 3D assets and environments for the Racer RTX demo will be packaged together and available for download in the coming months, so owners of the GeForce RTX 4090 GPU can gear up to explore the environment.
Learn More About Omniverse
Dive deeper into the making of Racer RTX in an on-demand NVIDIA GTC session — where Leone is joined by Andrew Averkin, senior art manager; Chase Telegin, technical director of software; and Nikolay Usov, senior environment artist at NVIDIA, to discuss how they built the large-scale, photorealistic virtual world.
Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.
Genshin Impact’s new Version 3.1 update launches this GFN Thursday, just in time for the game’s second anniversary. Even better: GeForce NOW members can get an exclusive starter pack reward, perfect for their first steps in HoYoverse’s open-world adventure, action role-playing game.
And don’t forget the nine new games joining the GeForce NOW library this week, because there’s always something new to play.
Get the Party Started in ‘Genshin Impact’
Genshin Impact Version 3.1, “King Deshret and the Three Magi,” has arrived in time for the game’s second anniversary. The latest update introduces the massive desert area, new characters, events, gifts and more — and it’s the perfect time for new players to start their adventure, streaming on GeForce NOW.
Step into the starkly beautiful desert to uncover the legends of King Deshret and clues to the past buried in the sand. In addition, three Sumeru characters, Candace, Cyno and Nilou, join the playable cast.
Far beyond the sweltering sands, celebrations for Mondstadt’s Weinlesefest are arriving as the crisp autumn wind blows, delivering more events with “Wind Chaser” and “Star-Seeker’s Sojourn,” mini-games, and rich rewards.
Members who’ve opted in to GeForce NOW’s Rewards program will receive an email for a Genshin Impact starter kit that can be claimed through the NVIDIA Rewards redemption portal. The kit will become available in game once players reach Adventure Rank 10.
The reward includes 30,000 Mora to purchase various items, three “Mystic Enhancement Ores” to enhance weapons and three “Hero’s Wit” items to level up characters.
Haven’t opted in for members’ rewards yet? Log in to your NVIDIA account and select “GEFORCE NOW” from the header, then scroll down to “REWARDS” and click the “UPDATE REWARDS SETTINGS” button. Check the box in the dialogue window that shows up to start receiving special offers and in-game goodies.
Better hurry — these rewards are available for a limited time on a first-come, first-served basis. To get first dibs, upgrade to a GeForce NOW Priority or RTX 3080 membership to receive rewards before anyone else.
Moar Games Plz
Ready to get into the action? Here’s what’s joining the GeForce NOW library this week:
Editor’s note: This post is part of the weekly In the NVIDIA Studio series, which celebrates featured artists and offers creative tips and tricks. In the coming weeks, we’ll be deep diving on new GeForce RTX 40 Series features, demonstrating how NVIDIA Studio technology dramatically accelerates content creation.
NVIDIA artist Sabour Amirazodi demonstrates his video editing workflows featuring AI this week in a special edition of In the NVIDIA Studio.
The talented, versatile artist was asked to attend and capture video from the Electric Daisy Carnival music festival, commonly known as EDC, in Las Vegas this summer. Music festivals are profoundly inspirational for Amirazodi, as such spectacles are only achieved by teams bringing together multiple disciplines to drive unforgettable experiences, he said.
“From music produced by world-class DJs, to incredible visuals designed by the best motion-graphics artists, down to skillful pyro techs and lighting directors, it’s a mix of so many creative worlds to create such an amazing experience,” said Amirazodi.
To properly capture every minute detail of the action, Amirazodi filmed the entire event in spectacular 12K and 8K resolution with two cameras: the Canon R5 Mirrorless and the Blackmagic URSA Cinema.
Working with such large video files, Amirazodi deployed Blackmagic Design’s DaVinci Resolve 18 software to get the editing job done, accelerated by his NVIDIA Studio-powered desktop equipped with four NVIDIA RTX A6000 GPUs.
“Resolve does an incredible job taking advantage of NVIDIA RTX GPUs and using them to accelerate everything from playback to AI-accelerated effects and even encoding for final delivery,” Amirazodi said.
AI tools have become increasingly important in video-editing workflows, as 80% of all creative work consists of repetitive, redundant tasks. Reducing, or in some cases eliminating, these tasks frees creators in all fields to focus on experimenting with and perfecting their craft.
Take rotoscoping, the process of creating animated sequences by tracing over live-action footage. Done frame by frame, this is a notoriously slow and lengthy process. Thanks to the Magic Mask feature in DaVinci Resolve, however, AI can mask the selected object and automatically track it through multiple frames. This enables artists to apply specific effects to live footage with a single click. “Rotoscoping is a huge one that used to take me forever to accomplish,” said Amirazodi.
This game-changing feature is further sped up, by up to 70%, with the GeForce RTX 40 Series GPUs, compared to the previous generation.
Similarly, the Surface Tracking feature allows AI to track any surface, including uneven ones such as clothes with wrinkles. Even if the selection morphs and warps, it continues to be tracked, sticking to the surfaces Amirazodi selected.
Depth Map Generation is another AI-powered DaVinci Resolve feature that saved Amirazodi countless hours in the editing bay. By generating a depth map, the artist applied vibrant colors and lens effects like fog, and could blur the background of any clip.
DaVinci Resolve has an entire suite of RTX-accelerated, AI-powered features to explore.
Face Refinement detects facial features for fast touch-ups such as sharpening eyes and subtle relighting. Speed Warp can quickly create super-slow-motion videos with ease. Amirazodi’s favorite feature, Detect Scene Cuts, uses DaVinci Resolve’s neural engine to predict video cuts without manual edits — it’s an incredible boon for his efficiency.
According to Amirazodi, AI is “only scratching the surface” of creative possibilities.
Most AI features require significant computational power, and GeForce RTX GPUs allow video editors to get the most out of these new AI features.
The GeForce RTX 40 Series also features new AV1 dual encoders. These work in tandem, dividing work automatically between them to double output and slash export times by up to 50%. GeForce RTX 40 Series graphics card owners gain an instant advantage over fellow freelancers seeking quick exports in multiple formats for different platforms.
The dual encoders are also capable of recording stunning content in up to 8K resolution and 60 frames per second in real time via GeForce Experience and OBS Studio.
The high-speed decoder allows editors to load and work with RAW footage files in real time for DaVinci Resolve as well as REDCINE-X PRO and Adobe Premiere Pro — without the need to generate lower-resolution files, also known as proxies.
DaVinci Resolve, the popular Voukoder plugin for Adobe Premiere Pro, and Jianying — the top video editing app in China — are all enabling AV1 support, as well as a dual encoder through encode presets, expected in October.
Amirazodi specializes in video editing, 3D modeling and interactive experiences, and is an all-around creative savant. View his work on IMDb.
For more on AI-powered features in DaVinci Resolve, check out this new demo video:
Last Call for #CreatorsJourney Submissions
The NVIDIA Studio #CreatorsJourney contest is ending on Friday, Sept. 30.
Entering is quick and easy. Simply post an older piece of artwork alongside a more recent one to showcase your growth as an artist. Follow and tag NVIDIA Studio on Instagram, Twitter or Facebook, and use the #CreatorsJourney tag to join, like Amanda Melville, who persevered to become an exceptional 3D artist:
3 years of growth
3 years of knowledge
3 years of being glad I stuck through it and keep learning
The massive virtual worlds created by growing numbers of companies and creators could be more easily populated with a diverse array of 3D buildings, vehicles, characters and more — thanks to a new AI model from NVIDIA Research.
Trained using only 2D images, NVIDIA GET3D generates 3D shapes with high-fidelity textures and complex geometric details. These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing.
The generated objects could be used in 3D representations of buildings, outdoor spaces or entire cities, designed for industries including gaming, robotics, architecture and social media.
GET3D can generate a virtually unlimited number of 3D shapes based on the data it’s trained on. Like an artist who turns a lump of clay into a detailed sculpture, the model transforms numbers into complex 3D shapes.
With a training dataset of 2D car images, for example, it creates a collection of sedans, trucks, race cars and vans. When trained on animal images, it comes up with creatures such as foxes, rhinos, horses and bears. Given chairs, the model generates assorted swivel chairs, dining chairs and cozy recliners.
“GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, vice president of AI research at NVIDIA, who leads the Toronto-based AI lab that created the tool. “Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.”
GET3D is one of more than 20 NVIDIA-authored papers and workshops accepted to the NeurIPS AI conference, taking place in New Orleans and virtually, Nov. 26-Dec. 4.
It Takes AI Kinds to Make a Virtual World
The real world is full of variety: streets are lined with unique buildings, with different vehicles whizzing by and diverse crowds passing through. Manually modeling a 3D virtual world that reflects this is incredibly time consuming, making it difficult to fill out a detailed digital environment.
Though quicker than manual methods, prior 3D generative AI models were limited in the level of detail they could produce. Even recent inverse rendering methods can only generate 3D objects based on 2D images taken from various angles, requiring developers to build one 3D shape at a time.
GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU — working like a generative adversarial network for 2D images, while generating 3D objects. The larger and more diverse the training dataset it learns from, the more varied and detailed the output.
NVIDIA researchers trained GET3D on synthetic data consisting of 2D images of 3D shapes captured from different camera angles. It took the team just two days to train the model on around 1 million images using NVIDIA A100 Tensor Core GPUs.
Enabling Creators to Modify Shape, Texture, Material
GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.
Once creators export GET3D-generated shapes to a graphics application, they can apply realistic lighting effects as the object moves or rotates in a scene. By incorporating another AI tool from NVIDIA Research, StyleGAN-NADA, developers can use text prompts to add a specific style to an image, such as modifying a rendered car to become a burned car or a taxi, or turning a regular house into a haunted one.
The researchers note that a future version of GET3D could use camera pose estimation techniques to allow developers to train the model on real-world data instead of synthetic datasets. It could also be improved to support universal generation — meaning developers could train GET3D on all kinds of 3D shapes at once, rather than needing to train it on one object category at a time.
For the latest news from NVIDIA AI research, watch the replay of NVIDIA founder and CEO Jensen Huang’s keynote address at GTC: