What Is Green Computing?

Everyone wants green computing.

Mobile users demand maximum performance and battery life. Businesses and governments increasingly require systems that are powerful yet environmentally friendly. And cloud services must respond to global demands without making the grid stutter.

For these reasons and more, green computing has evolved rapidly over the past three decades, and it’s here to stay.

What Is Green Computing?

Green computing, or sustainable computing, is the practice of maximizing energy efficiency and minimizing environmental impact in the ways computer chips, systems and software are designed and used.

Also called green information technology, green IT or sustainable IT, green computing spans concerns across the supply chain, from the raw materials used to make computers to how systems get recycled.

In their working lives, green computers must deliver the most work for the least energy, typically measured by performance per watt.
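
As a toy illustration of that metric (the numbers below are made up, not drawn from any benchmark), performance per watt is simply useful throughput divided by the power drawn to achieve it:

```python
# A toy performance-per-watt calculation with made-up numbers, just to show
# the metric: useful work done divided by power drawn while doing it.
throughput_inferences_per_sec = 5_000   # hypothetical workload throughput
average_power_watts = 250               # hypothetical board power during the run

perf_per_watt = throughput_inferences_per_sec / average_power_watts
print(f"{perf_per_watt:.1f} inferences per second per watt")  # 20.0
```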

Why Is Green Computing Important?

Green computing is a significant tool to combat climate change, the existential threat of our time.

Global temperatures have risen about 1.2°C over the last century. As a result, ice caps are melting, causing sea levels to rise about 20 centimeters and increasing the number and severity of extreme weather events.

The rising use of electricity is one of the causes of global warming. Data centers represent a small fraction of total electricity use, about 1% or 200 terawatt-hours per year, but they’re a growing factor that demands attention.

Powerful, energy-efficient computers are part of the solution. They’re advancing science and our quality of life, including the ways we understand and respond to climate change.

What Are the Elements of Green Computing?

Engineers know green computing is a holistic discipline.

“Energy efficiency is a full-stack issue, from the software down to the chips,” said Sachin Idgunji, co-chair of the power working group for the industry’s MLPerf AI benchmark and a distinguished engineer working on performance analysis at NVIDIA.

For example, in one analysis he found NVIDIA DGX A100 systems delivered a nearly 5x improvement in energy efficiency in scale-out AI training benchmarks compared to the prior generation.

“My primary role is analyzing and improving energy efficiency of AI applications at everything from the GPU and the system node to the full data center scale,” he said.

Idgunji’s work is a job description for a growing cadre of engineers building products from smartphones to supercomputers.

What’s the History of Green Computing?

Green computing hit the public spotlight in 1992, when the U.S. Environmental Protection Agency launched Energy Star, a program for identifying consumer electronics that met standards in energy efficiency.

The Energy Star logo is now used across more than three dozen product groups.

A 2017 report found nearly 100 government and industry programs across 22 countries promoting what it called green ICTs, sustainable information and communication technologies.

One such organization, the Green Electronics Council, provides the Electronic Product Environmental Assessment Tool, a registry of systems and their energy-efficiency levels. The council claims it’s saved nearly 400 million megawatt-hours of electricity through use of 1.5 billion green products it’s recommended to date.

Work on green computing continues across the industry at every level.

For example, some large data centers use liquid cooling, while others are sited where they can draw on cool ambient air. Schneider Electric recently released a whitepaper recommending 23 metrics for assessing the sustainability of data centers.

Data centers need to consider energy and water use as well as greenhouse gas emissions and waste to measure their sustainability, according to a Schneider whitepaper.

A Pioneer in Energy Efficiency

Wu Feng, a computer science professor at Virginia Tech, built a career pushing the limits of green computing. It started out of necessity while he was working at the Los Alamos National Laboratory.

A computer cluster for open science research that he maintained in an external warehouse had twice as many failures in summer as in winter. So he built a lower-power system that wouldn’t generate as much heat.

The Green Destiny supercomputer

He demoed the system, dubbed Green Destiny, at the Supercomputing conference in 2001. Covered by the BBC, CNN and the New York Times, among others, it sparked years of talks and debates in the HPC community about whether green computing could be both reliable and efficient.

Interest rose as supercomputers and data centers grew, pushing their boundaries in power consumption. In November 2007, after working with some 30 HPC luminaries and gathering community feedback, Feng launched the first Green500 List, the industry’s benchmark for energy-efficient supercomputing.

A Green Computing Benchmark

The Green500 became a rallying point for a community that needed to rein in power consumption while taking performance to new heights.

“Energy efficiency increased exponentially, flops per watt doubled about every year and a half for the greenest supercomputer at the top of the list,” said Feng.

By some measures, the results showed the energy efficiency of the world’s greenest systems increased two orders of magnitude in the last 14 years.
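
To see what that doubling rate implies, here is a quick compound-growth check; the arithmetic below is illustrative only and is not Green500 data.

```python
# Back-of-the-envelope compound growth using the figures quoted above: if
# flops per watt doubles roughly every 1.5 years, then over 14 years the
# greenest system's efficiency grows by about 2 ** (14 / 1.5), roughly 645x.
# The "two orders of magnitude" figure in the text reflects a different, more
# conservative measure, as the "by some measures" hedge suggests.
years, doubling_period = 14, 1.5
growth = 2 ** (years / doubling_period)
print(f"~{growth:.0f}x improvement over {years} years")  # ~645x
```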

The Green500 showed that heterogeneous systems — those with accelerators like GPUs in addition to CPUs — are consistently the most energy-efficient ones.

Feng attributes the gains mainly to the use of accelerators such as GPUs, now common among the world’s fastest systems.

“Accelerators added the capability to execute code in a massively parallel way without a lot of overhead — they let us run blazingly fast,” he said.

He cited two generations of the Tsubame supercomputers in Japan as early examples. They used NVIDIA Kepler and Pascal architecture GPUs to lead the Green500 list in 2014 and 2017, part of a procession of GPU-accelerated systems on the list.

“Accelerators have had a huge impact throughout the list,” said Feng, who will receive an award for his green supercomputing work at the Supercomputing event in November.

“Notably, NVIDIA was fantastic in its engagement and support of the Green500 by ensuring its energy-efficiency numbers were reported, thus helping energy efficiency become a first-class citizen in how supercomputers are designed today,” he added.

AI and Networking Get More Efficient

Today, GPUs and data processing units (DPUs) are bringing greater energy efficiency to AI and networking tasks, as well as HPC jobs like simulations run on supercomputers and enterprise data centers.

AI, the most powerful technology of our time, will become a part of every business. McKinsey & Co. estimates AI will add a staggering $13 trillion to global GDP by 2030 as deployments grow.

NVIDIA estimates data centers could save a whopping 19 terawatt-hours of electricity a year if all AI, HPC and networking offloads were run on GPU and DPU accelerators (see the charts below). That’s the equivalent of the energy consumption of 2.9 million passenger cars driven for a year.

It’s an eye-popping measure of the potential for energy efficiency with accelerated computing.
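
As a rough unit-conversion check of the 19 terawatt-hour estimate above (the post does not say whether the car equivalence is based on fuel energy or emissions), the implied per-car figure works out as follows:

```python
# Unit-conversion check of the savings estimate above; the per-car equivalence
# basis (fuel energy vs. emissions) isn't specified in the post, so this only
# shows the implied figure, not how NVIDIA derived it.
savings_twh = 19
cars = 2.9e6

kwh_per_car = savings_twh * 1e9 / cars   # 1 TWh = 1e9 kWh
print(f"Implied ~{kwh_per_car:,.0f} kWh per car per year")  # ~6,552 kWh
```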

An analysis of the potential energy savings of accelerated computing with GPUs and DPUs.

AI Benchmark Measures Efficiency

Because AI represents a growing part of enterprise workloads, the MLPerf industry benchmarks for AI have been measuring performance per watt on submissions for data center and edge inference since February 2021.

“The next frontier for us is to measure energy efficiency for AI on larger distributed systems, for HPC workloads and for AI training — it’s similar to the Green500 work,” said Idgunji, whose power group at MLPerf includes members from six other chip and systems companies.

NVIDIA Jetson modules recently demonstrated significant generation-to-generation leaps in performance per watt in MLPerf benchmarks of AI inference.

The public results motivate participants to make significant improvements with each product generation. They also help engineers and developers understand ways to balance performance and efficiency across the major AI workloads that MLPerf tests.

“Software optimizations are a big part of work because they can lead to large impacts in energy efficiency, and if your system is energy efficient, it’s more reliable, too,” Idgunji said.

Green Computing for Consumers

In PCs and laptops, “we’ve been investing in efficiency for a long time because it’s the right thing to do,” said Narayan Kulshrestha, a GPU power architect at NVIDIA who’s been working in the field nearly two decades.

For example, Dynamic Boost 2.0 uses deep learning to automatically direct power to a CPU, a GPU or a GPU’s memory to increase system efficiency. In addition, NVIDIA created a system-level design for laptops, called Max-Q, to optimize and balance energy efficiency and performance.

Building a Cyclical Economy

When a user replaces a system, the standard practice in green computing is that the old system gets broken down and recycled. But Matt Hull sees better possibilities.

“Our vision is a cyclical economy that enables everyone with AI at a variety of price points,” said Hull, the vice president of sales for data center AI products at NVIDIA.

So he aims to find the old system a new home with users in developing countries who would find it useful and affordable. It’s a work in progress that involves finding the right partners and writing a new chapter in an existing lifecycle management process.

Green Computing Fights Climate Change

Energy-efficient computers are among the sharpest tools fighting climate change.

Scientists in government labs and universities have long used GPUs to model climate scenarios and predict weather patterns. Recent advances in AI, driven by NVIDIA GPUs, can now help generate weather forecasts 100,000x faster than traditional models.

In an effort to accelerate climate science, NVIDIA announced plans to build Earth-2, an AI supercomputer dedicated to predicting the impacts of climate change. It will use NVIDIA Omniverse, a 3D design collaboration and simulation platform, to build a digital twin of Earth so scientists can model climates in ultra-high resolution.

In addition, NVIDIA is working with the United Nations Satellite Centre to accelerate climate-disaster management and train data scientists across the globe in using AI to improve flood detection.

Meanwhile, utilities are embracing machine learning to move toward a green, resilient and smart grid. Power plants are using digital twins to predict costly maintenance and model new energy sources, such as fusion-reactor designs.

What’s Ahead in Green Computing?

Feng sees the core technology behind green computing moving forward on multiple fronts.

In the short term, he’s working on what’s called energy proportionality: ways to make sure systems draw peak power only when they need peak performance and scale gracefully down toward zero power as they idle, like a modern car engine that lowers its RPMs and then shuts off at a red light.
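
The idea is easy to picture with a toy power model; the sketch below uses made-up numbers, not Feng’s work, to contrast a perfectly proportional system with one that carries a fixed idle-power floor.

```python
# Illustrative sketch of energy proportionality (not Feng's model): an ideally
# proportional system draws power in direct proportion to utilization, while a
# real system wastes a fixed idle floor. Numbers below are made up for clarity.
def power_draw(utilization, idle_watts=100.0, peak_watts=400.0):
    """Simple linear power model: idle floor plus utilization-dependent power."""
    return idle_watts + (peak_watts - idle_watts) * utilization

for u in (0.0, 0.25, 0.5, 1.0):
    ideal = 400.0 * u               # perfectly proportional system
    actual = power_draw(u)          # system with a 100 W idle floor
    print(f"util={u:.2f}  ideal={ideal:6.1f} W  actual={actual:6.1f} W")
```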

Researchers seek to close the gap in energy-proportional computing.

Long term, he’s exploring ways to minimize data movement inside and between computer chips to reduce their energy consumption. And he’s among many researchers studying the promise of quantum computing to deliver new kinds of acceleration.

It’s all part of the ongoing work of green computing, delivering ever more performance at ever greater efficiency.


GeForce RTX 4090 GPU Arrives, Enabling New World-Building Possibilities for 3D Artists This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll deep dive on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Creators can now pick up the GeForce RTX 4090 GPU, available from top add-in card providers including ASUS, Colorful, Gainward, Galaxy, GIGABYTE, INNO3D, MSI, Palit, PNY and ZOTAC, as well as from system integrators and builders worldwide.

Fall has arrived, and with it comes the perfect time to showcase the beautiful, harrowing video, Old Abandoned Haunted Mansion, created by 3D artist and principal lighting expert Pasquale Scionti this week In the NVIDIA Studio.

Artists like Scionti can create at the speed of light with the help of RTX 40 Series GPUs alongside 110 RTX-accelerated apps, the NVIDIA Studio suite of software and dedicated Studio Drivers.

A Quantum Leap in Creative Performance

The new GeForce RTX 4090 GPU brings an extraordinary boost in performance, third-generation RT Cores, fourth-generation Tensor Cores, an eighth-generation NVIDIA Dual AV1 Encoder and 24GB of Micron G6X memory capable of reaching 1TB/s bandwidth.

The new GeForce RTX 4090 GPU.

3D artists can now build scenes in fully ray-traced environments with accurate physics and realistic materials — all in real time, without proxies. DLSS 3 technology uses the AI-powered RTX Tensor Cores and a new Optical Flow Accelerator to generate additional frames and dramatically increase frames per second (FPS). This improves smoothness and speeds up movement in the viewport. NVIDIA is working with popular 3D apps Unity and Unreal Engine 5 to integrate DLSS 3.

DLSS 3 will also benefit workflows in the NVIDIA Omniverse platform for building and connecting custom 3D pipelines. New Omniverse tools such as NVIDIA RTX Remix for modders, which was used to create Portal with RTX, will be game changers for 3D content creation.

Video and live-streaming creative workflows are also turbocharged as the new AV1 encoder delivers 40% increased efficiency, unlocking higher resolution and crisper image quality. Expect AV1 integration in OBS Studio, DaVinci Resolve and Adobe Premiere Pro (through the Voukoder plugin) later this month.

The new dual encoders capture up to 8K resolution at 60 FPS in real time via GeForce Experience and OBS Studio, and cut video export times nearly in half. These encoders will be enabled in popular video-editing apps including Blackmagic Design’s DaVinci Resolve, the Voukoder plugin for Adobe Premiere Pro, and Jianying Pro — China’s top video-editing app — later this month.

State-of-the-art AI technology, like AI image generators and new video-editing tools in DaVinci Resolve, is ushering in the next step in the AI revolution, delivering up to a 2x increase in performance over the previous generation.

To break technological barriers and expand creative possibilities, pick up the GeForce RTX 4090 GPU today. Check out this product finder for retail availability.

Haunted Mansion Origins

The visual impact of Old Abandoned Haunted Mansion is nothing short of remarkable, with photorealistic details for lighting and shadows and stunningly accurate textures.

However, it’s Scionti’s intentional omission of specific detail that allows viewers to construct their own narrative, a staple of his work.

Scionti highlighted additional mysterious features he created within the haunted mansion: a painting with a specter on the stairs, knocked-over furniture, a portrait of a woman who might’ve lived there and a mirror smashed in the middle as if someone struck it.

“Perhaps whatever happened is still in these walls,” mused Scionti. “Abandoned, reclaimed by nature.”

Scionti said he finds inspiration in the works of H.R. Giger, H.P. Lovecraft and Edgar Allan Poe, and often dreams of the worlds he aspires to build before bringing them to life in 3D. He stressed, however, “I don’t have a dark side! It just appears in my work!”

For Old Abandoned Haunted Mansion, the artist began by creating a moodboard featuring abandoned places. He specifically included structures that were reclaimed by nature to create a warm mood with the sun filtering in from windows, doors and broken walls.

Foundational building blocks in Autodesk 3ds Max.

Scionti then modeled the scene’s objects, such as the ceiling lamp, painting frames and staircase, using Autodesk 3ds Max. By using a GeForce RTX 3090 GPU and selecting the default Autodesk Arnold renderer, he deployed RTX-accelerated AI denoising, resulting in interactive renders that were easy to edit while maintaining photorealism.

Modeling in Autodesk 3ds Max.

The versatile Autodesk 3ds Max software supports third-party GPU-accelerated renderers such as V-Ray, OctaneRender and Redshift, giving RTX owners additional options for their creative workflows.

When it comes time to export the renders, Scionti will soon be able to use GeForce RTX 40 Series GPUs to do so up to 80% faster than the previous generation.

Applying textures in Adobe Substance 3D Painter.

Scionti imported the models, like the ceiling lamp and various paintings, into Adobe Substance 3D Painter to apply unique textures. The artist used RTX-accelerated light and ambient occlusion to bake his assets in mere seconds.

The curtains, the drape on the armchair and the ghostly figure were modeled using Marvelous Designer, a realistic cloth-making program for 3D artists. On its system-requirements page, the Marvelous Designer team recommends GeForce RTX 30 Series and other NVIDIA RTX-class GPUs, as well as downloading the latest NVIDIA Studio Driver.

Texturing and material creation in Quixel Mixer.

Additional objects like the wooden ceiling were created using Quixel Mixer, an all-in-one texturing and material-creation tool designed to be intuitive and extremely fast.

Browsing objects in Quixel Megascans.

Scionti then searched Quixel Megascans, the largest and fastest-growing 3D scan library, to acquire the remaining assets to round out the piece.

With the composition in place, Scionti applied final details in Unreal Engine 5.

RTX ON in Unreal Engine 5

Scionti used Unreal Engine 5, activating hardware-accelerated RTX ray tracing for high-fidelity, interactive visualization of 3D designs. He was further aided by NVIDIA DLSS, which uses AI to upscale frames rendered at lower resolution while retaining high-fidelity detail. The artist then constructed the scene rich with beautiful lighting, shadows and textures.

The new GeForce RTX 40 Series GPU lineup will use DLSS 3 — coming soon to UE5 — with AI Frame Generation to further enhance interactivity in the viewport.

Scionti perfected his lighting with Lumen, UE5’s fully dynamic global illumination and reflections system, supported by GeForce RTX GPUs.

Photorealistic details achieved thanks to Unreal Engine 5 and NVIDIA RTX-accelerated ray tracing.

“Nanite meshes were useful to have high polygons for close up details,” noted Scionti. “For lighting, I used the sun and sky, but to add even more light, I inserted rectangular light sources outside each opening, like the windows and the broken wall.”

To complete the video, Scionti added a deliberately paced instrumental score consisting of piano, violin, synthesizer and drums. The music injects an unexpected emotional element into the piece.

Scionti reflected on his creative journey, which he considers a relentless pursuit of knowledge and perfecting his craft. “The pride of seeing years of commitment and passion being recognized is incredible, and that drive has led me to where I am today,” he said.

To embark on an Unreal Engine 5-powered creative journey through desert scenes, alien landscapes, abandoned towns, castle ruins and beyond, check out the latest NVIDIA Studio Standout featuring some of the most talented 3D artists, including Scionti.

3D artist and principal lighting expert Pasquale Scionti.

For more, explore Scionti’s Instagram.

Join the #From2Dto3D challenge

Scionti brought Old Abandoned Haunted Mansion from 2D beauty into 3D realism — and the NVIDIA Studio team wants to see more 2D to 3D progress.

Join the #From2Dto3D challenge this month for a chance to be featured on the NVIDIA Studio social media channels, like @juliestrator, whose delightfully cute illustration is elevated in 3D.

Entering is quick and easy. Simply post a 2D piece of art next to a 3D rendition of it on Instagram, Twitter or Facebook. And be sure to tag #From2Dto3D to enter.

Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.


Beyond Words: Large Language Models Expand AI’s Horizon

Back in 2018, BERT got people talking about how machine learning models were learning to read and speak. Today, large language models, or LLMs, are growing up fast, showing dexterity in all sorts of applications.

They’re, for one, speeding drug discovery, thanks to research from the Rostlab at Technical University of Munich, as well as work by a team from Harvard, Yale and New York University and others. In separate efforts, they applied LLMs to interpret the strings of amino acids that make up proteins, advancing our understanding of these building blocks of biology.

It’s one of many inroads LLMs are making in healthcare, robotics and other fields.

A Brief History of LLMs

Transformer models — neural networks, defined in 2017, that can learn context in sequential data — got LLMs started.

Researchers behind BERT and other transformer models made 2018 “a watershed moment” for natural language processing, a report on AI said at the end of that year. “Quite a few experts have claimed that the release of BERT marks a new era in NLP,” it added.

Developed by Google, BERT (aka Bidirectional Encoder Representations from Transformers) delivered state-of-the-art scores on benchmarks for NLP. In 2019, Google announced that BERT powers the company’s search engine.

Google released BERT as open-source software, spawning a family of follow-ons and setting off a race to build ever larger, more powerful LLMs.

For instance, Meta created an enhanced version called RoBERTa, released as open-source code in July 2019. For training, it used “an order of magnitude more data than BERT,” the paper said, and leapt ahead on NLP leaderboards. A scrum followed.

Scaling Parameters and Markets

For convenience, score is often kept by the number of an LLM’s parameters or weights, measures of the strength of a connection between two nodes in a neural network. BERT-Base had 110 million, RoBERTa had 123 million, then BERT-Large weighed in at 354 million, setting a new record, but not for long.
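
For readers wondering where such totals come from, here is a rough rule of thumb for a BERT-style encoder; it ignores biases, positional embeddings and layer norms, so it only approximates the published figures.

```python
# Rough parameter count for a BERT-style encoder, as a sketch of where these
# totals come from; ignores biases, positional/segment embeddings and layer
# norms, so it slightly undercounts the published figures.
def approx_params(layers, d_model, vocab_size):
    token_embeddings = vocab_size * d_model
    # Each transformer block: ~4*d^2 for attention projections plus ~8*d^2 for
    # the feed-forward network (with a 4x hidden expansion).
    per_block = 12 * d_model ** 2
    return token_embeddings + layers * per_block

print(f"BERT-Base-like: ~{approx_params(12, 768, 30522) / 1e6:.0f}M parameters")
```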

As LLMs expanded into new applications, their size and computing requirements grew.

In 2020, researchers at OpenAI and Johns Hopkins University announced GPT-3, with a whopping 175 billion parameters, trained on a dataset with nearly a trillion words. It scored well on a slew of language tasks and even ciphered three-digit arithmetic.

“Language models have a wide range of beneficial applications for society,” the researchers wrote.

Experts Feel ‘Blown Away’

Within weeks, people were using GPT-3 to create poems, programs, songs, websites and more. Recently, GPT-3 even wrote an academic paper about itself.

“I just remember being kind of blown away by the things that it could do, for being just a language model,” said Percy Liang, a Stanford associate professor of computer science, speaking in a podcast.

GPT-3 helped motivate Stanford to create a center Liang now leads, exploring the implications of what it calls foundation models that can handle a wide variety of tasks well.

Toward Trillions of Parameters

Last year, NVIDIA announced the Megatron 530B LLM that can be trained for new domains and languages. It debuted with tools and services for training language models with trillions of parameters.

“Large language models have proven to be flexible and capable … able to answer deep domain questions without specialized training or supervision,” Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, said at that time.

Making it even easier for users to adopt the powerful models, the NVIDIA NeMo LLM service debuted in September at GTC. It’s an NVIDIA-managed cloud service to adapt pretrained LLMs to perform specific tasks.

Transformers Transform Drug Discovery

The advances LLMs are making with proteins and chemical structures are also being applied to DNA.

Researchers aim to scale their work with NVIDIA BioNeMo, a software framework and cloud service to generate, predict and understand biomolecular data. Part of the NVIDIA Clara Discovery collection of frameworks, applications and AI models for drug discovery, it supports work in widely used protein, DNA and chemistry data formats.

NVIDIA BioNeMo features multiple pretrained AI models, including the MegaMolBART model, developed by NVIDIA and AstraZeneca.

In their paper on foundation models, Stanford researchers projected many uses for LLMs in healthcare.

LLMs Enhance Computer Vision

Transformers are also reshaping computer vision as powerful LLMs replace traditional convolutional AI models. For example, researchers at Meta AI and Dartmouth designed TimeSformer, an AI model that uses transformers to analyze video with state-of-the-art results.

Experts predict such models could spawn all sorts of new applications in computational photography, education and interactive experiences for mobile users.

In related work earlier this year, two companies released powerful AI models to generate images from text.

OpenAI announced DALL-E 2, a transformer model with 3.5 billion parameters designed to create realistic images from text descriptions. And recently, Stability AI, based in London, launched Stable Diffusion, an open-source model that generates images from text prompts.
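
For a sense of how developers typically call such text-to-image models, below is a minimal sketch using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint; the model ID and prompt are illustrative, and DALL-E 2 itself is accessed through OpenAI’s hosted service rather than this way.

```python
# A minimal text-to-image sketch using Hugging Face's diffusers library with a
# Stable Diffusion checkpoint; the model ID and prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # a GPU makes generation practical

image = pipe("a photorealistic red fox in a snowy forest").images[0]
image.save("fox.png")
```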

Writing Code, Controlling Robots

LLMs also help developers write software. Tabnine — a member of NVIDIA Inception, a program that nurtures cutting-edge startups — claims it’s automating up to 30% of the code generated by a million developers.

Taking the next step, researchers are using transformer-based models to teach robots used in manufacturing, construction, autonomous driving and personal assistants.

For example, DeepMind developed Gato, an LLM that taught a robotic arm how to stack blocks. The 1.2-billion parameter model was trained on more than 600 distinct tasks so it could be useful in a variety of modes and environments, whether playing games or animating chatbots.

The Gato LLM can analyze robot actions and images as well as text.

“By scaling up and iterating on this same basic approach, we can build a useful general-purpose agent,” researchers said in a paper posted in May.

It’s another example of what the Stanford center in a July paper called a paradigm shift in AI. “Foundation models have only just begun to transform the way AI systems are built and deployed in the world,” it said.

Learn how companies around the world are implementing LLMs with NVIDIA Triton for many use cases.


Fall Into October With 25 New Games Streaming on GeForce NOW

Cooler weather, the changing colors of the leaves, the needless addition of pumpkin spice to just about everything, and discount Halloween candy are just some things to look forward to in the fall.

GeForce NOW members can add one more thing to the list — 25 games joining the cloud gaming library in October, including day-and-date releases like A Plague Tale: Requiem, Victoria 3 and others.

Let’s start off the cooler months with the six games streaming on GeForce NOW today.

Arriving in October

There’s a heap of gaming goodness in store for GeForce NOW members this month.

A tale continues when A Plague Tale: Requiem releases Tuesday, Oct. 18, enhanced with ray-traced effects for RTX 3080 and Priority members.

After escaping their devastated homeland in the critically acclaimed A Plague Tale: Innocence, siblings Amicia and Hugo venture south of 14th-century France to new regions and vibrant cities. But when Hugo’s powers reawaken, death and destruction return in a flood of devouring rats. Forced to flee once more, the siblings place their hopes in a prophesied island that may hold the key to saving Hugo.

The new adventure begins soon — streaming even to Macs and mobile devices with the power of the cloud — so make sure to add the game to your wishlist to start playing when it’s released.

On top of that, check out the rest of the games coming this month:

  • Asterigos: Curse of the Stars (New release on Steam, Oct. 11)
  • Kamiwaza: Way of the Thief (New release on Steam, Oct. 11)
  • Ozymandias: Bronze Age Empire Sim (New release on Steam, Oct. 11)
  • LEGO Bricktales (New release on Steam, Oct. 12)
  • PC Building Simulator 2 (New release on Epic Games Store, Oct. 12)
  • The Last Oricru (New release on Steam, Oct. 13)
  • Scorn (New release on Steam and Epic Games Store, Oct. 14)
  • A Plague Tale: Requiem (New release on Steam and Epic Games Store, Oct. 18)
  • Warhammer 40,000: Shootas, Blood & Teef (New release on Steam, Oct. 20)
  • FAITH: The Unholy Trinity (New release on Steam, Oct. 21)
  • Victoria 3 (New release on Steam, Oct. 25)
  • The Unliving (New release on Steam, Oct. 31)
  • Commandos 3 – HD Remaster (Steam and Epic Games Store)
  • Draw Slasher (Steam)
  • Guild Wars: Game of the Year (Steam)
  • Guild Wars: Trilogy (Steam)
  • Labyrinthine (Steam)
  • Volcanoids (Steam)
  • Monster Outbreak (Steam and Epic Games Store)

Gotta Go Fast

The great thing about GFN Thursday is that there are new games every week, so there’s no need to wait until Halloween to treat yourself to great gaming. Six games arrive today, including the new release of Dakar Desert Rally with support for NVIDIA DLSS technology.

Honestly, don’t even bother going to the car wash. You’ll just get it dirty again.

Dakar Desert Rally captures the speed and excitement of Amaury Sport Organisation’s largest rally race, with a wide variety of licensed vehicles from the world’s top makers. An in-game dynamic weather system means racers will need to overcome the elements as well as the competition to win. Unique challenges and fierce, online multiplayer races are available for all members, whether an off-road simulation diehard or a casual racing fan.

This week also brings the latest season of Ubisoft’s Roller Champions. “Dragon’s Way” includes new maps, effects, cosmetics, emotes, gear and other seasonal goodies to bring out gamers’ inner beasts.

Here’s the full list of new games coming to the cloud this week:

  • Marauders (New release on Steam)
  • Dakar Desert Rally (New release on Steam)
  • Lord of Rigel (New release on Steam)
  • Priest Simulator (New release on Steam)
  • Barotrauma (Steam)
  • Black Desert Online – North America and Europe (Pearl Abyss Launcher)

Pssst – Wake Up, September Ended

Don’t sleep on these extra 13 titles that came to the cloud on top of the 22 games announced in September.

For some frightful fun as we enter Spooky Season, let us know what game still haunts your dreams on Twitter or in the comments below.


Researchers Use AI to Help Earbud Users Mute Background Noise

Thanks to earbuds, people can take calls anywhere, while doing anything. The problem: those on the other end of the call can hear all the background noise, too, whether it’s the roommate’s vacuum cleaner or neighboring conversations at a café.

Now, work by a trio of graduate students at the University of Washington, who spent the pandemic cooped up together in a noisy apartment, lets those on the other end of the call hear just the speaker — rather than all the surrounding sounds.

Users found that the system, dubbed “ClearBuds” — presented last month at the ACM International Conference on Mobile Systems, Applications and Services — suppressed background noise much better than a commercially available alternative.

AI Podcast host Noah Kravitz caught up with the team at ClearBuds to discuss the unlikely pandemic-time origin story behind a technology that promises to make calls clearer and easier, wherever we go.

You Might Also Like

Listen Up: How Audio Analytic Is Teaching Machines to Listen

Audio Analytic has been using machine learning that enables a vast array of devices to make sense of the world of sound. Dr. Chris Mitchell, CEO and founder of Audio Analytic, discusses the challenges and the fun involved in teaching machines to listen.

A Podcast With Teeth: How Overjet Brings AI to Dentists’ Offices

Overjet, a member of the NVIDIA Inception program for startups, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of Overjet, talks about how her company improves patient care with AI-powered technology that analyzes and annotates X-rays for dentists and insurance providers.

Sing It, Sister! Maya Ackerman on LyricStudio, an AI-Based Writing Assistant

Maya Ackerman is the CEO of WaveAI, a Silicon Valley startup using AI and machine learning to, as the company motto puts it, “unlock new heights of human creative expression.” She discusses WaveAI’s LyricStudio software, an AI-based lyric and poetry writing assistant.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.


Meet the Omnivore: Ph.D. Student Lets Anyone Bring Simulated Bots to Life With NVIDIA Omniverse Extension

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.


When not engrossed in his studies toward a Ph.D. in statistics, conducting data-driven research on AI and robotics, or enjoying his favorite hobby of sailing, Yizhou Zhao is winning contests for developers who use NVIDIA Omniverse — a platform for connecting and building custom 3D pipelines and metaverse applications.

The fifth-year doctoral candidate at the University of California, Los Angeles recently received first place in the inaugural #ExtendOmniverse contest, where developers were invited to create their own Omniverse extension for a chance to win an NVIDIA RTX GPU.

Omniverse extensions are core building blocks that let anyone create and extend functions of Omniverse apps using the popular Python programming language.
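
For readers new to the format, a bare-bones Kit extension looks roughly like the sketch below; it is a generic skeleton with hypothetical names and placeholder callbacks, not Zhao’s “IndoorKit” code.

```python
# Minimal sketch of an Omniverse Kit extension, assuming the standard
# omni.ext / omni.ui APIs; the class name and button callbacks are hypothetical
# placeholders and only illustrate the extension structure the post describes.
import omni.ext
import omni.ui as ui


class IndoorKitDemoExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Build a small window with buttons, similar in spirit to the
        # "add object" / "load scene" controls described in the post.
        self._window = ui.Window("IndoorKit Demo", width=300, height=120)
        with self._window.frame:
            with ui.VStack():
                ui.Button("Load Scene", clicked_fn=self._load_scene)
                ui.Button("Add Object", clicked_fn=self._add_object)

    def on_shutdown(self):
        self._window = None

    def _load_scene(self):
        print("[indoorkit_demo] load scene clicked")  # placeholder action

    def _add_object(self):
        print("[indoorkit_demo] add object clicked")  # placeholder action
```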

Zhao’s winning entry, called “IndoorKit,” allows users to easily load and record robotics simulation tasks in indoor scenes. It sets up robotics manipulation tasks by automatically populating scenes with the indoor environment, the bot and other objects with just a few clicks.

“Typically, it’s hard to deploy a robotics task in simulation without a lot of skills in scene building, layout sampling and robot control,” Zhao said. “By bringing assets into Omniverse’s powerful user interface using the Universal Scene Description framework, my extension achieves instant scene setup and accurate control of the robot.”

Within “IndoorKit,” users can simply click “add object,” “add house,” “load scene,” “record scene” and other buttons to manipulate aspects of the environment and dive right into robotics simulation.

With Universal Scene Description (USD), an open-source, extensible file framework, Zhao seamlessly brought 3D models into his environments using Omniverse Connectors for Autodesk Maya and Blender software.

The “IndoorKit” extension also relies on assets from the NVIDIA Isaac Sim robotics simulation platform and Omniverse’s built-in PhysX capabilities for accurate, articulated manipulation of the bots.

In addition, “IndoorKit” can randomize a scene’s lighting, room materials and more. One scene Zhao built with the extension is highlighted in the feature video above.

Omniverse for Robotics 

The “IndoorKit” extension bridges Omniverse and robotics research in simulation.

A view of Zhao’s “IndoorKit” extension

“I don’t see how accurate robot control was performed prior to Omniverse,” Zhao said. He gives four main reasons why Omniverse was the ideal platform on which to build this extension:

First, Python’s popularity means many developers can build extensions with it to unlock machine learning and deep learning research for a broader audience, he said.

Second, using NVIDIA RTX GPUs with Omniverse greatly accelerates robot control and training.

Third, Omniverse’s ray-tracing technology enables real-time, photorealistic rendering of his scenes. This saves 90% of the time Zhao used to spend for experiment setup and simulation, he said.

And fourth, Omniverse’s real-time advanced physics simulation engine, PhysX, supports an extensive range of features — including liquid, particle and soft-body simulation — which “land on the frontier of robotics studies,” according to Zhao.

“The future of art, engineering and research is in the spirit of connecting everything: modeling, animation and simulation,” he said. “And Omniverse brings it all together.”

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Discover how to build an Omniverse extension in less than 10 minutes.

For a deeper dive into developing on Omniverse, watch the on-demand NVIDIA GTC session, “How to Build Extensions and Apps for Virtual Worlds With NVIDIA Omniverse.”

Find additional documentation and tutorials in the Omniverse Resource Center, which details how developers like Zhao can build custom USD-based applications and extensions for the platform.

To discover more free tools, training and a community for developers, join the NVIDIA Developer Program.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.


AI Esperanto: Large Language Models Read Data With NVIDIA Triton

Julien Salinas wears many hats. He’s an entrepreneur, software developer and, until lately, a volunteer fireman in his mountain village an hour’s drive from Grenoble, a tech hub in southeast France.

He’s nurturing a two-year-old startup, NLP Cloud, that’s already profitable, employs about a dozen people and serves customers around the globe. It’s one of many companies worldwide using NVIDIA software to deploy some of today’s most complex and powerful AI models.

NLP Cloud is an AI-powered software service for text data. A major European airline uses it to summarize internet news for its employees. A small healthcare company employs it to parse patient requests for prescription refills. An online app uses it to let kids talk to their favorite cartoon characters.

Large Language Models Speak Volumes

It’s all part of the magic of natural language processing (NLP), a popular form of AI that’s spawning some of the planet’s biggest neural networks called large language models. Trained with huge datasets on powerful systems, LLMs can handle all sorts of jobs such as recognizing and generating text with amazing accuracy.

NLP Cloud uses about 25 LLMs today; the largest has 20 billion parameters, a key measure of a model’s sophistication. And now it’s implementing BLOOM, an LLM with a whopping 176 billion parameters.

Running these massive models in production efficiently across multiple cloud services is hard work. That’s why Salinas turns to NVIDIA Triton Inference Server.

High Throughput, Low Latency

“Very quickly the main challenge we faced was server costs,” Salinas said, proud his self-funded startup has not taken any outside backing to date.

“Triton turned out to be a great way to make full use of the GPUs at our disposal,” he said.

For example, NVIDIA A100 Tensor Core GPUs can process as many as 10 requests at a time — twice the throughput of alternative software — thanks to FasterTransformer, a part of Triton that automates complex jobs like splitting up models across many GPUs.

FasterTransformer also helps NLP Cloud spread jobs that require more memory across multiple NVIDIA T4 GPUs while shaving the response time for the task.

Customers who demand the fastest response times can process 50 tokens — text elements like words or punctuation marks — in as little as half a second with Triton on an A100 GPU, about a third of the response time without Triton.

“That’s very cool,” said Salinas, who’s reviewed dozens of software tools on his personal blog.
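
For a sense of how that kind of client-side latency might be measured, here is a minimal sketch using Triton’s Python HTTP client; the model name and tensor names are hypothetical and depend entirely on the deployed model’s configuration, not on NLP Cloud’s setup.

```python
# Minimal sketch of a Triton inference request from Python, assuming a text
# model already deployed on a local Triton server; the model name ("my_llm")
# and tensor names ("input_ids", "output_ids") are hypothetical placeholders.
import time

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Toy token IDs standing in for a real tokenizer's output.
token_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int32)

inputs = [httpclient.InferInput("input_ids", list(token_ids.shape), "INT32")]
inputs[0].set_data_from_numpy(token_ids)
outputs = [httpclient.InferRequestedOutput("output_ids")]

start = time.time()
result = client.infer(model_name="my_llm", inputs=inputs, outputs=outputs)
latency = time.time() - start

print(f"Round-trip latency: {latency:.3f} s")
print("Output shape:", result.as_numpy("output_ids").shape)
```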

Touring Triton’s Users

Around the globe, other startups and established giants are using Triton to get the most out of LLMs.

Microsoft’s Translate service helped disaster workers understand Haitian Creole while responding to a 7.0 earthquake. It was one of many use cases for the service that got a 27x speedup using Triton to run inference on models with up to 5 billion parameters.

NLP provider Cohere was founded by one of the AI researchers who wrote the seminal paper that defined transformer models. It’s getting up to 4x speedups on inference using Triton on its custom LLMs, so users of customer support chatbots, for example, get swift responses to their queries.

NLP Cloud and Cohere are among many members of the NVIDIA Inception program, which nurtures cutting-edge startups. Several other Inception startups also use Triton for AI inference on LLMs.

Tokyo-based rinna created chatbots used by millions in Japan, as well as tools to let developers build custom chatbots and AI-powered characters. Triton helped the company achieve inference latency of less than two seconds on GPUs.

In Tel Aviv, Tabnine runs a service that’s automated up to 30% of the code written by a million developers globally (see a demo below). Its service runs multiple LLMs on A100 GPUs with Triton to handle more than 20 programming languages and 15 code editors.

Twitter uses the LLM service of Writer, based in San Francisco. It ensures the social network’s employees write in a voice that adheres to the company’s style guide. Writer’s service achieves a 3x lower latency and up to 4x greater throughput using Triton compared to prior software.

If you want to put a face to those words, Inception member Ex-human, just down the street from Writer, helps users create realistic avatars for games, chatbots and virtual reality applications. With Triton, it delivers response times of less than a second on an LLM with 6 billion parameters while reducing GPU memory consumption by a third.

A Full-Stack Platform

Back in France, NLP Cloud is now using other elements of the NVIDIA AI platform.

For inference on models running on a single GPU, it’s adopting NVIDIA TensorRT software to minimize latency. “We’re getting blazing-fast performance with it, and latency is really going down,” Salinas said.

The company also started training custom versions of LLMs to support more languages and enhance efficiency. For that work, it’s adopting NVIDIA NeMo Megatron, an end-to-end framework for training and deploying LLMs with trillions of parameters.

The 35-year-old Salinas has the energy of a 20-something for coding and growing his business. He describes plans to build private infrastructure to complement the four public cloud services the startup uses, as well as to expand into LLMs that handle speech and text-to-image to address applications like semantic search.

“I always loved coding, but being a good developer is not enough: You have to understand your customers’ needs,” said Salinas, who posted code on GitHub nearly 200 times last year.

If you’re passionate about software, learn the latest on Triton in this technical blog.


Searidge Technologies Offers a Safety Net for Airports

Planes taxiing for long periods due to ground traffic — or circling the airport while awaiting clearance to land — don’t just make travelers impatient. They burn fuel unnecessarily, harming the environment and adding to airlines’ costs.

Searidge Technologies, based in Ottawa, Canada, has created AI-powered software to help the aviation industry avoid such issues, increasing efficiency and enhancing safety for airports.

Its Digital Tower and Apron solutions, powered by NVIDIA GPUs, use vision AI to manage traffic control for airports and alert users of safety concerns in real time. Searidge enables airports to handle 15-30% more aircraft per hour and reduce the number of tarmac incidents.

The company’s tech is used across the world, including at London’s Heathrow Airport, Fort Lauderdale-Hollywood International Airport in Florida and Dubai International Airport, to name a few.

In June, Searidge’s Digital Apron and Tower Management System (DATMS) went operational at Hong Kong International Airport as part of an initial phase of the Airport Authority Hong Kong’s large-scale expansion plan, which will bring machine learning to a new, integrated airport operations center.

In addition, Searidge provides the Civil Aviation Department of Hong Kong’s air-traffic control systems with next-generation safety enhancements using its vision AI software.

The deployment in Hong Kong is the industry’s largest digital platform for tower and apron management — and the first collaboration between an airport and an air-navigation service provider for a single digital platform.

Searidge is a member of NVIDIA Metropolis, a partner program focused on bringing to market a new generation of vision AI applications that make the world’s most important spaces and operations safer and more efficient.

Digital Tools for Airports

The early 2000s saw massive growth and restructuring of airports — and with this came increased use of digital tools in the aviation industry.

Founded in 2006, Searidge has become one of the first to bring machine learning to video processing in the aviation space, according to Pat Urbanek, the company’s vice president of business development for Asia Pacific and the Middle East.

“Video processing software for air-traffic control didn’t exist before,” Urbanek said. “It’s taken a decade to become mainstream — but now, intelligent video and machine learning have been brought into airport operations, enabling new levels of automation in air-traffic control and airside operations to enhance safety and efficiency.”

DATMS’s underlying machine learning platform, called Aimee, enables traffic-lighting automation based on data from radars and 4K-resolution video cameras. Aimee is trained to detect aircraft and vehicles. And DATMS is programmed based on the complex roadway rules that determine how buses and other vehicles should operate on service roads across taxiways.

After analyzing video data, the AI-enabled system activates or deactivates airports’ traffic lights in real time, based on when it’s appropriate for passenger buses and other vehicles to move. The status of each traffic light and additional details can also be visualized on end-user screens in airport traffic control rooms.
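
The kind of rule the post describes can be pictured with a purely illustrative sketch; this is not Searidge’s system or API, just a toy policy that gates a service-road light on detections from a vision model.

```python
# Purely illustrative pseudologic for the kind of rule the post describes (not
# Searidge's actual system): detections from a vision model gate a
# service-road traffic light in real time.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "aircraft" or "vehicle"
    on_taxiway: bool  # whether the detected object occupies the crossing area

def service_road_light(detections):
    """Return 'red' if any aircraft occupies the crossing, else 'green'."""
    for det in detections:
        if det.label == "aircraft" and det.on_taxiway:
            return "red"   # hold buses and other ground vehicles
    return "green"         # safe for service-road traffic to proceed

# Example frame: one taxiing aircraft, one bus waiting at the crossing.
frame = [Detection("aircraft", True), Detection("vehicle", False)]
print(service_road_light(frame))  # -> red
```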

“What size is an aircraft? Does it have enough space to turn on the runway? Is it going too fast? All of this information and more is sent out over the Searidge Platform and displayed on screen based on user preference,” said Marco Rueckert, vice president of technology at Searidge.

Image courtesy of Searidge Technologies

The same underlying technology is applied to provide enhanced safety alerts for aircraft departure and arrival. In real time, DATMS alerts air traffic controllers of safety-standard breaches — taking into consideration clearances for aircraft to enter a runway, takeoff or land.

Speedups With NVIDIA GPUs

Searidge uses NVIDIA GPUs to optimize inference throughput across its deployments at airports around the globe. To train its AI models, Searidge uses an NVIDIA DGX A100 system.

“The NVIDIA platform allowed us to really bring down the hardware footprint and costs from the customer’s perspective,” Rueckert said. “It provides the scalability factor, so we can easily add more cameras with increasing resolution, which ultimately helps us solve more problems and address more customer needs.”

The company is also exploring the integration of voice data — based on communication between pilots and air-traffic controllers — within its machine learning platform to further enhance airport operations.

Searidge’s Digital Tower and Apron solutions can be customized for the unique challenges that come with varying airport layouts and traffic patterns.

“Of course, having aircraft land on time and letting passengers make their connections increases business and efficiency, but our technology has an environmental impact as well,” Urbanek said. “It can prevent burning of huge amounts of fuel — in the air or at the gate — by providing enhanced efficiency and safety for taxiing, takeoff and landing.”

Watch the latest GTC keynote by NVIDIA founder and CEO Jensen Huang to discover how vision AI and other groundbreaking technologies are shaping the world.

Feature video courtesy of Dubai Airports.


Creator EposVox Shares Streaming Lessons, Successes This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll be deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

TwitchCon — the world’s top gathering of live streamers — kicks off Friday with the new line of GeForce RTX 40 Series GPUs bringing incredible new technology — from AV1 to AI — to elevate live streams for aspiring and professional Twitch creators alike.

In addition, creator and educator EposVox is in the NVIDIA Studio to discuss his influences, inspiration and advice for getting the most out of live streams.

Plus, join the #From2Dto3D challenge this month by sharing a 2D piece of art next to a 3D rendition of it for a chance to be featured on the NVIDIA Studio social media channels. Be sure to tag #From2Dto3D to enter.

AV1 and Done

Releasing on Oct. 12, the new GeForce RTX 40 Series GPUs feature the eighth-generation NVIDIA video encoder, NVENC for short, now with support for AV1 encoding. For creators like EposVox, the new AV1 encoder will deliver 40% increased efficiency, unlocking higher resolutions and crisper image quality.

The GeForce RTX 40 Series delivers higher video quality with AV1.

NVIDIA has collaborated with OBS Studio to add AV1 support to its next software release, expected later this month. In addition, Discord is enabling AV1 end to end for the first time later this year. GeForce RTX 40 Series owners will be able to stream with crisp, clear image quality at 1440p and even 4K resolution at 60 frames per second.

GeForce RTX 40 Series GPUs also feature dual encoders. This allows creators to capture video at up to 8K resolution and 60 FPS. And when it’s time to cut a VOD of a live stream, the dual encoders work in tandem, dividing the work automatically, which slashes export times nearly in half. Blackmagic Design’s DaVinci Resolve, the popular Voukoder plugin for Adobe Premiere Pro, and Jianying — the top video-editing app in China — are all enabling the dual encoders through encode presets. Expect dual-encoder availability for these apps in October.

 

The GeForce RTX 40 Series GPUs also give game streamers an unprecedented gen-to-gen frame-rate boost in PC games alongside the new NVIDIA DLSS 3 technology, which accelerates performance by up to 4x. This will unlock richer, more immersive ray-traced experiences to share via live streams, such as in Cyberpunk 2077 and Portal with RTX.

Virtual Live Streams Come to Life

VTube Studio is a leading app for virtual streamers (VTubers) that makes it easy and fun to bring digital avatars to life on a live stream.

Seamlessly control avatars with AI by using a webcam and GeForce RTX GPU in VTube Studio.

VTube Studio is adding support this month for the NVIDIA Broadcast AR SDK, allowing users to seamlessly control their avatars with AI by using a regular webcam and a GeForce RTX GPU.

Objectively Blissful Streaming

OBS doesn’t stand for objectively blissful streaming, but it should.

OBS Studio is free, open-source software for video recording and live streaming. It’s one of EposVox’s essential apps, as he said it “allows me to produce my content at a rapid pace that’s constantly evolving.”

The software now features native integration of the AI-powered NVIDIA Broadcast effects, including Virtual Background, Noise Removal and Room Echo Removal.

In addition to adding AV1 support for GeForce RTX 40 Series GPUs later this month, the recent OBS 28.0 release added support for high-efficiency video coding (HEVC or H.265), improving video compression rates by 15% across a wide range of NVIDIA GPUs. It also now includes support for high-dynamic range (HDR), offering a greater range of bright and dark colors, which brings stunning vibrance and dramatic improvements in visual quality.

Broadcast for All

The SDKs that power NVIDIA Broadcast are available to developers, enabling native AI feature support in devices from Logitech, Corsair and Elgato, as well as advanced workflows in OBS and Notch software.

Features released last month at NVIDIA GTC include new and updated AI-powered effects.

 

Virtual Background now includes temporal information, so random objects in the background will no longer create distractions by flashing in and out. This will be available in the next major version of OBS Studio.

 

Face Expression Estimation allows apps to accurately track facial expressions for face meshes, even with the simplest of webcams. It’s hugely beneficial to VTubers and can be found in the next version of VTube Studio.

 

Eye Contact allows podcasters to appear as if they’re looking directly at the camera — highly useful for when the user is reading a script or looking away to engage with viewers in the chat window.

It’s EposVox’s World, We’re All Just Living in It

Adam Taylor, who goes by the stage name EposVox or “The Stream Professor,” runs a YouTube channel focused on tech education for content creators and streamers.

He’s been making videos since before YouTube even existed.

“DailyMotion, Google Video, does anyone remember MetaCafe? X-Fire?” said EposVox.

He maintains a strong passion for educational content, which stemmed from his desire to learn video editing workflows as a young man, when he lacked the wealth of knowledge and resources available today.

“I immediately ran into constant walls of information that were kept behind closed doors when it came to deeper video topics, audio setups and more,” the artist said. “It was really frustrating — there was nothing and no one, aside from a decade or two of DOOM9 forums and outdated broadcast books, that had ever heard of a USB port to help guide me.”

While content creation and live streaming, especially with software like OBS Studio and XSplit, are EposVox’s primary focuses, he also aspires to make technology more fun and easy to use.

“The GPU acceleration in 3D and video apps, and now all the AI innovations that are coming to new generations, are incredible — I’m not sure I’d be able to create on the level that I do, nor at the speed I do, without NVIDIA GPUs.”

When searching for content inspiration, EposVox deploys a proactive approach — he’s all about asking questions. “Whether it’s trying to figure out how to do some overkill new setup for myself, breaking down neat effects I see elsewhere, or just asking which point in the process might cause friction for a viewer — I ask questions, figure out the best way to answer those questions, and deliver them to viewers,” he said.

EposVox stressed the importance of experimenting with multiple creative applications, noting that “every tool I can add to my tool chest enhances my creativity by giving me more options or ways to create, and more experiences with new processes for creating things.” This is especially true for the use of AI in his creative workflows, he added.

“What I love about AI art generation right now is the fact that I can just type any idea that comes to mind, in plain text language, and see it come to life,” he said. “I may not get exactly what I was expecting, I may have to continue refining my language and ideas to approach the representation I’m after — but knocking down the barrier between idea conception and seeing some form of that idea in front of me, I cannot overstate the impact that is created here.”

For an optimal live-streaming setup, EposVox recommends a PC equipped with a GeForce RTX GPU. His GeForce RTX 3090 desktop GPU, he said, can handle the rigors of the entire creative process and remain fast even when he’s constantly switching between computationally complex creative applications.

The artist said, “These days, I use GPU-accelerated NVENC encoding for capturing, exporting videos and live streaming.”
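
The post doesn’t spell out his exact export settings, but a GPU-accelerated export of that kind can be sketched with ffmpeg’s NVENC encoder. Below is a minimal example, assuming an ffmpeg build with NVENC support and a GeForce RTX GPU; the file names, preset and bitrate are placeholder values, not EposVox’s actual configuration.

```python
# Minimal sketch: GPU-accelerated video export with ffmpeg's NVENC encoder.
# Assumes an ffmpeg build with NVENC support; file names, preset and bitrate
# are illustrative placeholders.
import subprocess

def export_with_nvenc(src: str, dst: str, bitrate: str = "8M") -> None:
    """Re-encode a clip on the GPU using the HEVC (H.265) NVENC encoder."""
    subprocess.run(
        [
            "ffmpeg",
            "-y",                  # overwrite the output if it exists
            "-i", src,             # source recording
            "-c:v", "hevc_nvenc",  # hardware HEVC encoder on NVIDIA GPUs
            "-preset", "p5",       # quality/speed trade-off preset
            "-b:v", bitrate,       # target video bitrate
            "-c:a", "copy",        # pass audio through untouched
            dst,
        ],
        check=True,
    )

export_with_nvenc("capture.mkv", "export_hevc.mp4")
```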

EposVox can’t wait for his GeForce RTX 4090 GPU upgrade, primarily to take advantage of the new dual encoders, noting, “I’ll probably end up saving a few hours a day, since less time waiting on renders and uploads means I can move from project to project much quicker, rather than having to walk away and work on other things. I’ll be able to focus so much more.”

When asked for parting advice, EposVox didn’t hesitate: “If you commit to a creative vision for a project, but the entity you’re making it for — the company, agency, person or whomever — takes the project in a completely different direction, find some way to still bring your vision to life,” he said. “You’ll be so much better off — in terms of how you feel and the experience gained — if you can still bring that to life.”

YouTuber and live streamer EposVox, aka “The Stream Professor.”

For more tips on live streaming and video exports, check out EposVox’s YouTube channel.

And for step-by-step tutorials for all creative fields — created by industry-leading artists and community showcases — check out the NVIDIA Studio YouTube channel.

Finally, join the #From2Dto3D challenge by posting on Instagram, Twitter or Facebook.

Get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


The Wheel Deal: ‘Racer RTX’ Demo Revs to Photorealistic Life, Built on NVIDIA Omniverse

NVIDIA artists ran their engines at full throttle for the stunning Racer RTX demo, which debuted at last week’s GTC keynote, showcasing the power of NVIDIA Omniverse and the new GeForce RTX 4090 GPU.

“Our goal was to create something that had never been done before,” said Gabriele Leone, creative director at NVIDIA, who led a team of over 30 artists working around the globe with nearly a dozen design tools to complete the project in just three months.

That something is a fully simulated, real-time playable environment — inspired by the team’s shared favorite childhood game, Re-Volt. In Racer RTX, radio-controlled cars zoom through Los Angeles streets, a desert and a chic loft bedroom.

The demo consists entirely of simulation, rather than animation. This means its 1,800+ hand-modeled and textured 3D models — whether the radio-controlled cars or the dominoes they knock over while racing — didn’t require traditional 3D design tasks like baking or pre-computation, the presetting of lighting for environments and other asset properties.

Instead, the assets react to the changing virtual environment in real time while obeying the laws of physics. This is enabled by the real-time, advanced physics simulation engine, PhysX, which is built into NVIDIA Omniverse, a platform for connecting and building custom 3D pipelines and metaverse applications.

Dust trails are left behind by the cars depending on the turbulence from passing vehicles. And sand deforms under racers’ wheels according to how the tires drift.

And with the Omniverse RTX Renderer, lighting can be physically simulated with a click, changing throughout the environment and across surfaces based on whether it’s dawn, day or dusk in the scenes, which are set in Los Angeles’ buzzing beach town of Venice.


Connecting Apps and Workflows

Racer RTX was created to test the limits of the new NVIDIA Ada Lovelace architecture — and steer creators and developers toward a new future of their work.

“We wanted to demonstrate the next generation of content creation, where worlds will no longer be prebaked, but physically accurate, full simulations,” Leone said.

The result showcases high-fidelity, hyper-realistic physics and real-time ray tracing enabled by Omniverse — in 4K resolution at 60 frames per second, running with Ada and the new DLSS 3 technology.

“Our globally spread team used nearly a dozen different design and content-creation tools — bringing everything together in Omniverse using the ground-truth, extensible Universal Scene Description framework,” Leone added.

The NVIDIA artists began the project by sketching initial concept art and taking a slew of reference photos on the west side of Los Angeles. Next, they turned to software like Autodesk 3ds Max, Autodesk Maya, Blender, Cinema 4D and many more to create the 3D assets, the vast majority of which were modeled by hand.

“Racer RTX” features over 1,800 unique 3D models.

To add texture to the props, the artists used Adobe Substance 3D Designer and Adobe Substance 3D Painter. They then exported the files from these apps using the USD open 3D framework — and brought them into Omniverse Create for real-time collaboration in the virtual world.
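
The post doesn’t show the team’s actual pipeline code, but the basic pattern of pulling exported assets onto a shared USD stage can be sketched in a few lines of Python with the pxr API. The file paths and prim names below are hypothetical stand-ins, not the real “Racer RTX” assets.

```python
# Minimal sketch: assembling exported USD assets on a shared stage with the
# pxr Python API. File paths and prim names are hypothetical examples.
from pxr import Gf, Usd, UsdGeom

# Create (or open) the stage that collaborators work against.
stage = Usd.Stage.CreateNew("racer_scene.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# Reference an exported prop into the scene rather than copying its data,
# so updates to the source file flow through automatically.
car = stage.DefinePrim("/World/Props/RC_Car", "Xform")
car.GetReferences().AddReference("./assets/rc_car.usd")

# Place the referenced asset in the scene.
UsdGeom.XformCommonAPI(car).SetTranslate(Gf.Vec3d(0.0, 0.0, 150.0))

stage.GetRootLayer().Save()
```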

Hyper-Realistic Physics

The RC cars in Racer RTX are each modeled with up to 70 individual pieces, including joints and suspensions, all with physics properties.

“Each car, each domino, every object in the demo has a different center of mass and weight depending on real-world parameters, so they act differently according to the laws of physics,” Leone said. “We can change the material of the floors, too, from sand to wood to ice — and use Omniverse’s native PhysX feature to make the vehicles drift along the surface with physically accurate friction.”
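
Leone doesn’t share the demo’s actual parameters, but the kind of per-object physics authoring he describes (mass, center of mass and surface friction) maps onto the UsdPhysics schemas that Omniverse’s PhysX integration reads. Here is a rough sketch with hypothetical prim paths and values:

```python
# Minimal sketch: authoring per-object physics properties with the UsdPhysics
# schemas consumed by Omniverse's PhysX integration. Prim paths, masses and
# friction values are hypothetical, not the demo's real parameters.
from pxr import Gf, Usd, UsdPhysics

stage = Usd.Stage.Open("racer_scene.usda")

# Make the car body a rigid, collidable object with its own mass properties.
car = stage.GetPrimAtPath("/World/Props/RC_Car")
UsdPhysics.RigidBodyAPI.Apply(car)
UsdPhysics.CollisionAPI.Apply(car)
mass_api = UsdPhysics.MassAPI.Apply(car)
mass_api.CreateMassAttr(1.2)                                # kilograms
mass_api.CreateCenterOfMassAttr(Gf.Vec3f(0.0, -2.0, 0.0))   # low center of mass

# Give the floor a slippery physics material so the cars drift across it.
floor_mat = stage.GetPrimAtPath("/World/Looks/IceFloor")
physics_mat = UsdPhysics.MaterialAPI.Apply(floor_mat)
physics_mat.CreateStaticFrictionAttr(0.1)
physics_mat.CreateDynamicFrictionAttr(0.05)

stage.Save()
```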

Radio-controlled cars race through Los Angeles streets in “Racer RTX.”

And to make the dust kick up behind the cars as they would in the real world, the artists used the NVIDIA Flow application for smoke, fluid and fire simulation.

In addition, the team created their own tools for the project-specific workflow, including Omniverse extensions — core building blocks that enable anyone to create and extend functionalities of Omniverse apps with just a few lines of Python code — to randomize and align objects in the scene.
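
The team’s project-specific extensions aren’t published yet, but the general skeleton of an Omniverse extension that randomizes object placement takes only a few lines of Python. The class name and prim paths below are hypothetical; this is a sketch of the pattern, not the team’s actual tool.

```python
# Minimal sketch of an Omniverse Kit extension that randomizes object placement.
# The class name and prim paths are hypothetical examples of the pattern.
import random

import omni.ext
import omni.usd
from pxr import Gf, UsdGeom


class RandomizePropsExtension(omni.ext.IExt):
    """Scatters every prim under /World/Props when the extension starts."""

    def on_startup(self, ext_id):
        stage = omni.usd.get_context().get_stage()
        props = stage.GetPrimAtPath("/World/Props")
        if not props:
            return
        for prim in props.GetChildren():
            # Nudge each prop to a random position on the ground plane.
            offset = Gf.Vec3d(
                random.uniform(-500, 500), 0.0, random.uniform(-500, 500)
            )
            UsdGeom.XformCommonAPI(prim).SetTranslate(offset)

    def on_shutdown(self):
        pass
```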

The extensions, 3D assets and environments for the Racer RTX demo will be packaged together and available for download in the coming months, so owners of the GeForce RTX 4090 GPU can gear up to explore the environment.

Learn More About Omniverse

Dive deeper into the making of Racer RTX in an on-demand NVIDIA GTC session — where Leone is joined by Andrew Averkin, senior art manager; Chase Telegin, technical director of software; and Nikolay Usov, senior environment artist at NVIDIA, to discuss how they built the large-scale, photorealistic virtual world.

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote in replay.
