How Are Foundation Models Used in Gaming?

AI technologies are having a massive impact across industries, including media and entertainment, automotive, customer service and more. For game developers, these advances are paving the way for creating more realistic and immersive in-game experiences.

From creating lifelike characters that convey emotions to transforming simple text into captivating imagery, foundation models are becoming essential in accelerating developer workflows while reducing overall costs. These powerful AI models have unlocked a realm of possibilities, empowering designers and game developers to build higher-quality gaming experiences.

What Are Foundation Models?

A foundation model is a neural network that’s trained on massive amounts of data — and then adapted to tackle a wide variety of tasks. These models can handle a range of general tasks, such as text, image and audio generation. Over the last year, the popularity and use of foundation models have rapidly increased, with hundreds now available.

For example, GPT-4 is a large multimodal model developed by OpenAI that can generate human-like text based on context and past conversations. Another, DALL-E 3, can create realistic images and artwork from a description written in natural language.

Powerful tools like the NVIDIA NeMo framework and the Edify models in NVIDIA Picasso make it easy for companies and developers to inject AI into their existing workflows. For example, using NeMo, organizations can quickly train, customize and deploy generative AI models at scale. And using NVIDIA Picasso, teams can fine-tune pretrained Edify models with their own enterprise data to build custom products and services for generative AI images, videos, 3D assets, texture materials and 360 HDRi.

How Are Foundation Models Built?

Foundation models can be used as a base for AI systems that perform multiple tasks. Building one starts with gathering a large amount of unlabeled data.

The dataset should be as large and diverse as possible, as too little data or poor-quality data can lead to inaccuracies — sometimes called hallucinations — or cause finer details to go missing in generated outputs.

Next, the dataset must be prepared. This includes cleaning the data, removing errors and formatting it in such a way that the model can understand it. Bias is a pervasive issue when preparing a dataset, so it’s important to measure, reduce and tackle these inconsistencies and inaccuracies.
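
As a concrete illustration, the sketch below shows the kind of lightweight preprocessing pass this step involves; the function name and thresholds are hypothetical, and production pipelines add far more, such as deduplication across shards, toxicity filtering and bias measurement.

```python
import re
import unicodedata

def clean_corpus(docs: list[str], min_words: int = 20) -> list[str]:
    """Toy preprocessing pass: normalize text, strip markup remnants,
    drop very short documents and remove exact duplicates."""
    seen, cleaned = set(), []
    for doc in docs:
        text = unicodedata.normalize("NFKC", doc)
        text = re.sub(r"<[^>]+>", " ", text)      # strip leftover HTML tags
        text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
        if len(text.split()) < min_words:
            continue                              # too short to be useful for training
        key = text.lower()
        if key in seen:
            continue                              # exact duplicate
        seen.add(key)
        cleaned.append(text)
    return cleaned
```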

Training a foundation model can be time-consuming, especially given the size of the model and the amount of data required. Hardware like NVIDIA A100 or H100 Tensor Core GPUs, along with high-performance data systems like the NVIDIA DGX SuperPOD, can accelerate training. For example, GPT-3 was trained on over 1,000 NVIDIA A100 GPUs over about 34 days.

The three requirements of a successful foundation model.

After training, the foundation model is evaluated on quality, diversity and speed. There are several methods for evaluating performance, for example:

  • Tools and frameworks that quantify how well the model predicts a sample of text
  • Metrics that compare generated outputs with one or more references and measure the similarities between them (a minimal example follows this list)
  • Human evaluators who assess the quality of the generated output on various criteria
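
As a toy illustration of the second approach, the snippet below scores a generated sentence against a reference by unigram overlap; real evaluations rely on established metrics such as BLEU or ROUGE, perplexity-based tools and human review.

```python
from collections import Counter

def unigram_f1(generated: str, reference: str) -> float:
    """Greatly simplified reference-overlap score (ROUGE-1-style F1 over shared words)."""
    gen, ref = generated.lower().split(), reference.lower().split()
    overlap = sum((Counter(gen) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(gen), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("the knight guards the northern gate",
                 "a knight guards the gate to the north"))  # roughly 0.71
```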

Once the model passes the relevant tests and evaluations, it can then be deployed for production.

Exploring Foundation Models in Games

Pretrained foundation models can be leveraged by middleware, tools and game developers both during production and at runtime. Training a base model from scratch requires significant resources, time and expertise. Currently, many developers within the gaming industry are exploring off-the-shelf models but need custom solutions that fit their specific use cases. They need models that are trained on commercially safe data and optimized for real-time performance, without exorbitant deployment costs. The difficulty of meeting these requirements has slowed adoption of foundation models.

However, innovation within the generative AI space is swift, and once major hurdles are addressed, developers of all sizes — from startups to AAA studios — will use foundation models to gain new efficiencies in game development and accelerate content creation. Additionally, these models can help create completely new gameplay experiences.

The top industry use cases are centered around intelligent agents and AI-powered animation and asset creation. For example, many creators today are exploring models for creating intelligent non-playable characters, or NPCs.

Custom LLMs fine-tuned with the lingo and lore of specific games can generate human-like text, understand context and respond to prompts in a coherent manner. They’re designed to learn patterns and language structures and understand game state changes — evolving and progressing alongside the player in the game.
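
To make the idea concrete, here is a minimal sketch of how a game might assemble the context such a model sees before each NPC reply; the data fields and function name are hypothetical, and the resulting prompt would be sent to whichever fine-tuned LLM the studio deploys.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    location: str
    quest_stage: str
    player_reputation: str

def build_npc_prompt(npc_name: str, lore: str, state: GameState, player_line: str) -> str:
    """Assemble the context an LLM sees before generating the NPC's next line."""
    return (
        f"You are {npc_name}, a character in a fantasy game.\n"
        f"Lore: {lore}\n"
        f"Current location: {state.location}\n"
        f"Quest stage: {state.quest_stage}\n"
        f"Player reputation with you: {state.player_reputation}\n"
        f'Player says: "{player_line}"\n'
        "Reply in character, in one or two sentences."
    )

state = GameState("Harbor District", "searching for the smuggler", "trusted ally")
prompt = build_npc_prompt(
    "Mara the dockmaster",
    "Mara has run the docks for twenty years and distrusts the merchant guild.",
    state,
    "Have you seen anyone unloading crates at night?",
)
# As the game state changes, the prompt is rebuilt and the reply regenerated.
```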

As NPCs become increasingly dynamic, real-time animation and audio that sync with their responses will be needed. Developers are using NVIDIA Riva to create expressive character voices using speech and translation AI. And designers are tapping NVIDIA Audio2Face for AI-powered facial animations.

Foundation models are also being used for asset and animation generation. Asset creation during the pre-production and production phases of game development can be time-consuming, tedious and expensive.

With state-of-the-art diffusion models, developers can iterate more quickly, freeing up time to spend on the most important aspects of the content pipeline, such as developing higher-quality assets. The ability to fine-tune these models on a studio’s own repository of data ensures the generated outputs match the art styles and designs of its previous games.

Foundation models are readily available, and the gaming industry is only in the beginning phases of understanding their full capabilities. Various solutions have been built for real-time experiences, but the use cases are limited. Fortunately, developers can easily access models and microservices through cloud APIs today and explore how AI can affect their games and scale their solutions to more customers and devices than ever before.

The Future of Foundation Models in Gaming

Foundation models are poised to help developers realize the future of gaming. Diffusion models and large language models are becoming much more lightweight as developers look to run them natively on a range of hardware power profiles, including PCs, consoles and mobile devices.

The accuracy and quality of these models will only continue to improve as developers look to generate high-quality assets that need little to no touching up before being dropped into an AAA gaming experience.

Foundation models will also be used in areas that have been challenging for developers to overcome with traditional technology. For example, autonomous agents can help analyze and detect world space during game development, which will accelerate processes for quality assurance.

The rise of multimodal foundation models, which can ingest a mix of text, image, audio and other inputs simultaneously, will further enhance player interactions with intelligent NPCs and other game systems. Also, developers can use additional input types to improve creativity and enhance the quality of generated assets during production.

Multimodal models also show great promise in improving the animation of real-time characters, one of the most time-intensive and expensive processes of game development. They may be able to help make characters’ locomotion identical to real-life actors, infuse style and feel from a range of inputs, and ease the rigging process.

Learn More About Foundation Models in Gaming

From enhancing dialogue and generating 3D content to creating interactive gameplay, foundation models have opened up new opportunities for developers to forge the future of gaming experiences.

Learn more about foundation models and other technologies powering game development workflows.

Read More

GeForce NOW-vember Brings Over 50 New Games to Stream In the Cloud

Gear up with gratitude for more gaming time. GeForce NOW brings members a cornucopia of 15 newly supported games to the cloud this week. That’s just the start — there are a total of 54 titles coming in the month of November.

Members can also join thousands of esports fans in the cloud with the addition of Virtex Stadium to the GeForce NOW library for a ‘League of Legends’ world championship viewing party.

Esports Like Never Before

Virtex Stadium on GeForce NOW
Watch “League of Legends” esports like never before.

This year’s League of Legends world championship finals are coming to Virtex Stadium — an online virtual stadium now streaming on NVIDIA’s cloud gaming infrastructure.

In Virtex Stadium, esports fans can hang out with friends from across the world, create and personalize avatars, and watch live competitions together — all from the comfort of their homes.

Starting on Thursday, Nov. 2, watch League of Legends Worlds 2023 in Virtex Stadium with thousands of others. Use props and emotes to cheer players on together via chat.

GeForce NOW members and League of Legends fans can drop into Virtex Stadium without needing to create a new login. Within the Virtex Stadium app, members can choose to create a “Ready Player Me” avatar and account to save their digital characters for future visits. Members can even link their Twitch accounts to chat and emote with other viewers while in the stadium.

Catch all the action on the following dates:

  • Quarterfinal 1: Nov. 2 at 9 a.m. CET
  • Quarterfinal 2: Nov. 3 at 9 a.m. CET
  • Quarterfinal 3: Nov. 4 at 9 a.m. CET
  • Quarterfinal 4: Nov. 5 at 9 a.m. CET
  • Semifinal 1: Nov. 11 at 9 a.m. CET
  • Semifinal 2: Nov. 12 at 9 a.m. CET
  • Final: Nov. 19 at 9 a.m. CET

Time to Shine

Apex Legends: Ignite on GeForce NOW
SHINY!

Electronic Arts’ and Respawn Entertainment’s Apex Legends: Ignite, the newest season for the battle royale first-person shooter, is now available to stream from the cloud. Light the way with Conduit, the new support Legend with shield-based abilities. Plus, check out a faster and deadlier Storm Point map, a new Battle Pass with rewards, and more to help ignite Apex Legends players’ ways to victory.

Members can start their adventures now, along with 15 other games newly supported in the cloud this week:

  • Headbangers: Rhythm Royale (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • Jusant (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • RoboCop: Rogue City (New release on Steam, Nov. 2)
  • The Talos Principle 2 (New release on Steam, Nov. 2)
  • StrangerZ (New release on Steam, Nov. 3)
  • Curse of the Dead Gods (Xbox, available on Microsoft Store)
  • Daymare 1994: Sandcastle (Steam)
  • ENDLESS Dungeon (Steam)
  • F1 Manager 2023 (Xbox, available on PC Game Pass)
  • Heretic’s Fork (Steam)
  • HOT WHEELS UNLEASHED 2 – Turbocharged (Epic Games Store)
  • Kingdoms Reborn (Steam)
  • Q.U.B.E. 2 (Epic Games Store)
  • Soulstice (Epic Games Store)
  • Virtex Stadium (Free)

Then check out the plentiful games for the rest of November:

  • The Invincible (New release on Steam, Nov. 6)
  • Roboquest (New release on Steam, Nov. 7)
  • Stronghold: Definitive Edition (New release on Steam, Nov. 7)
  • Dungeons 4 (New release on Steam, Xbox and available on PC Game Pass, Nov. 9)
  • Space Trash Scavenger (New release on Steam, Nov. 9)
  • Spirittea (New release on Steam, Xbox and available on PC Game Pass, Nov. 13)
  • Naheulbeuk’s Dungeon Master (New release on Steam, Nov. 15)
  • Last Train Home (New release on Steam, Nov. 28)
  • Gangs of Sherwood (New release on Steam, Nov. 30)
  • Airport CEO (Steam)
  • Arcana of Paradise —The Tower (Steam)
  • Blazing Sails: Pirate Battle Royale (Epic Games Store)
  • Breathedge (Xbox, available on Microsoft Store)
  • Bridge Constructor: The Walking Dead (Xbox, available on Microsoft Store)
  • Bus Simulator 21 (Xbox, available on Microsoft Store)
  • Farming Simulator 19 (Xbox, available on Microsoft Store)
  • GoNNER (Xbox, available on Microsoft Store)
  • GoNNER2 (Xbox, available on Microsoft Store)
  • Hearts of Iron IV (Xbox, available on Microsoft Store)
  • Hexarchy (Steam)
  • I Am Future (Epic Games Store)
  • Imagine Earth (Xbox, available on Microsoft Store)
  • Jurassic World Evolution 2 (Xbox, available on PC Game Pass)
  • Land of the Vikings (Steam)
  • Onimusha: Warlords (Steam)
  • Overcooked! 2 (Xbox, available on Microsoft Store)
  • Saints Row IV (Xbox, available on Microsoft Store)
  • Settlement Survival (Steam)
  • SHENZHEN I/O (Xbox, available on Microsoft Store)
  • SOULVARS (Steam)
  • The Surge 2 (Xbox, available on Microsoft Store)
  • Thymesia (Xbox, available on Microsoft Store)
  • Trailmakers (Xbox, available on PC Game Pass)
  • Tropico 6 (Xbox, available on Microsoft Store)
  • Wartales (Xbox, available on PC Game Pass)
  • The Wonderful One: After School Hero (Steam)
  • Warhammer Age of Sigmar: Realms of Ruin (Steam)
  • West of Dead (Xbox, available on Microsoft Store)
  • Wolfenstein: The New Order (Xbox, available on PC Game Pass)
  • Wolfenstein: The Old Blood (Steam, Epic Games Store, Xbox and available on PC Game Pass)

Outstanding October

On top of the 60 games announced in October, an additional 48 joined the cloud last month, including several from this week’s additions, Curse of the Dead Gods, ENDLESS Dungeon, Farming Simulator 19, Hearts of Iron IV, Kingdoms Reborn, RoboCop: Rogue City, StrangerZ, The Talos Principle 2, Thymesia, Tropico 6 and Virtex Stadium:

  • AirportSim (New release on Steam, Oct. 19)
  • Battle Chasers: Nightwar (Xbox, available on Microsoft Store)
  • Black Skylands (Xbox, available on Microsoft Store)
  • Blair Witch (Xbox, available on Microsoft Store)
  • Call of the Sea (Xbox, available on Microsoft Store)
  • Chicory: A Colorful Tale (Xbox and available on PC Game Pass)
  • Cricket 22 (Xbox and available on PC Game Pass)
  • Dead by Daylight (Xbox and available on PC Game Pass)
  • Deceive Inc. (Epic Games Store)
  • Dishonored (Steam)
  • Dishonored: Death of the Outsider (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Dishonored Definitive Edition (Epic Games Store, Xbox and available on PC Game Pass)
  • Dishonored 2 (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Dune: Spice Wars (Xbox and available on PC Game Pass)
  • Eternal Threads (New release on Epic Games Store, Oct. 19)
  • Everspace 2 (Xbox and available on PC Game Pass)
  • EXAPUNKS (Xbox and available on PC Game Pass)
  • From Space (New release on Xbox, available on PC Game Pass, Oct. 12)
  • Ghostrunner 2 (New release on Steam, Oct. 26)
  • Ghostwire: Tokyo (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Golf With Your Friends (Xbox, available on PC Game Pass)
  • Gungrave G.O.R.E (Xbox and available on PC Game Pass)
  • The Gunk (Xbox and available on PC Game Pass)
  • Hotel: A Resort Simulator (New release on Steam, Oct. 12)
  • Kill It With Fire (Xbox and available on PC Game Pass)
  • Railway Empire 2 (Xbox and available on PC Game Pass)
  • Rubber Bandits (Xbox, available on PC Game Pass)
  • Saints Row IV (Xbox, available on Microsoft Store)
  • Saltsea Chronicles (New release on Steam, Oct. 12)
  • Soulstice (Epic Games Store)
  • State of Decay 2: Juggernaut Edition (Steam, Epic Games Store, Xbox and available on PC Game Pass)
  • Supraland Six Inches Under (Epic Games Store)
  • Techtonica (Xbox and available on PC Game Pass)
  • Teenage Mutant Ninja Turtles: Shredder’s Revenge (Xbox and available on PC Game Pass)
  • Torchlight III (Xbox and available on PC Game Pass)
  • Totally Accurate Battle Simulator (Xbox and available on PC Game Pass)
  • Tribe: Primitive Builder (New release on Steam, Oct. 12)
  • Trine 5: A Clockwork Conspiracy (Epic Games Store)

War Hospital didn’t make it in October due to a delay of its launch date. StalCraft and VEILED EXPERTS also didn’t make it in October due to technical issues. Stay tuned to GFN Thursday for more updates.

What are you looking forward to streaming this month? Let us know on Twitter or in the comments below.

Read More

Turing’s Mill: AI Supercomputer Revs UK’s Economic Engine

The home of the first industrial revolution just made a massive investment in the next one.

The U.K. government has announced it will spend £225 million ($273 million) to build one of the world’s fastest AI supercomputers.

Called Isambard-AI, it’s the latest in a series of systems named after a legendary 19th century British engineer and hosted by the University of Bristol. When fully installed next year, it will pack 5,448 NVIDIA GH200 Grace Hopper Superchips to deliver a whopping 21 exaflops of AI performance for researchers across the country and beyond.

The announcement was made at the AI Safety Summit, a gathering of over 100 global government and technology leaders held at Bletchley Park, the site of the world’s first programmable digital computer and the wartime workplace of innovators like Alan Turing, widely considered the father of AI.

AI “will bring a transformation as far-reaching as the industrial revolution, the coming of electricity or the birth of the internet,” said British Prime Minister Rishi Sunak in a speech last week about the event, designed to catalyze international collaboration.

Propelling the Modern Economy

Like one of Isambard Brunel’s creations — the first propeller-driven, ocean-going iron ship — the AI technology running on his namesake is already driving countries forward.

AI contributes more than £3.7 billion to the U.K. economy and employs more than 50,000 people, said Michelle Donelan, the nation’s Science, Innovation and Technology Secretary, in an earlier announcement about the system.

The investment in the so-called AI Research Resource in Bristol “will catalyze scientific discovery and keep the U.K. at the forefront of AI development,” she said.

Like AI itself, the system will be used across a wide range of organizations tapping the potential of machine learning to advance robotics, data analytics, drug discovery, climate research and more.

“Isambard-AI represents a huge leap forward for AI computational power in the U.K.,” said Simon McIntosh-Smith, a Bristol professor and director of the Isambard National Research Facility. “Today, Isambard-AI would rank within the top 10 fastest supercomputers in the world and, when in operation later in 2024, it will be one of the most powerful AI systems for open science anywhere.”

The Next Manufacturing Revolution

Like the industrial revolution, AI promises advances in manufacturing. That’s one reason why Isambard-AI will be based at the National Composites Centre (NCC, pictured above) in the Bristol and Bath Science Park, one of the country’s seven manufacturing research centers.

The U.K.’s Frontier AI Taskforce, a research group leading a global effort on how frontier AI can be safely developed, will also be a major user of the system.

Hewlett Packard Enterprise, which is building Isambard-AI, is also collaborating with the University of Bristol on energy-efficiency plans that support net-zero carbon targets mandated by the British government.

Energy-Efficient HPC

A second system coming next year to the NCC will show Arm’s energy efficiency for non-accelerated high performance computing workloads.

Isambard-3 will deliver an estimated 2.7 petaflops of FP64 peak performance and consume less than 270 kilowatts of power, ranking it among the world’s three greenest non-accelerated supercomputers. That’s because the system — part of a research alliance among universities of Bath, Bristol, Cardiff and Exeter — will sport 384 Arm-based NVIDIA Grace CPU Superchips to power medical and scientific research.

“Isambard-3’s application performance efficiency of up to 6x its predecessor, which rivals many of the 50 fastest TOP500 systems, will provide scientists with a revolutionary new supercomputing platform to advance groundbreaking research,” said Bristol’s McIntosh-Smith, when the system was announced in March.

Read More

Unlocking the Power of Language: NVIDIA’s Annamalai Chockalingam on the Rise of LLMs

Generative AI and large language models (LLMs) are stirring change across industries — but according to NVIDIA Senior Product Manager of Developer Marketing Annamalai Chockalingam, “we’re still in the early innings.”

In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Chockalingam about LLMs: what they are, their current state and their future potential.

LLMs are a “subset of the larger generative AI movement” that deals with language. They’re deep learning algorithms that can recognize, summarize, translate, predict and generate language.

AI has been around for a while, but according to Chockalingam, three key factors enabled LLMs.

One is the availability of large-scale data sets to train models with. As more people used the internet, more data became available for use. The second is the development of computer infrastructure, which has become advanced enough to handle “mountains of data” in a “reasonable timeframe.” And the third is advancements in AI algorithms, allowing for non-sequential or parallel processing of large data pools.

LLMs can do five things with language: generate, summarize, translate, instruct or chat. With a combination of “these modalities and actions, you can build applications” to solve any problem, Chockalingam said.

Enterprises are tapping LLMs to “drive innovation,” “develop new customer experiences,” and gain a “competitive advantage.” They’re also exploring what safe deployment of those models looks like, aiming to achieve responsible development, trustworthiness and repeatability.

New techniques like retrieval-augmented generation (RAG) could boost LLM development. RAG involves feeding models with up-to-date “data sources or third-party APIs” to achieve “more appropriate responses” — granting them current context so that they can “generate better” answers.
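
A minimal sketch of the RAG pattern is shown below; the word-overlap retriever is a stand-in for the vector search a production system would use, and the assembled prompt is what gets sent to the LLM.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context so the model answers from current data."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}\nAnswer:"

docs = [
    "The library closes at 9 p.m. on weekdays.",
    "Weekend hours are 10 a.m. to 5 p.m.",
    "The cafeteria serves lunch from noon to 2 p.m.",
]
print(build_rag_prompt("What time does the library close on Saturday?", docs))
```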

Chockalingam encourages those interested in LLMs to “get your hands dirty and get started” — whether that means using popular applications like ChatGPT or playing with pretrained models in the NVIDIA NGC catalog.

NVIDIA offers a full-stack computing platform for developers and enterprises experimenting with LLMs, with an ecosystem of over 4 million developers and 1,600 generative AI organizations. To learn more, register for LLM Developer Day on Nov. 17 to hear from NVIDIA experts about how best to develop applications.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

Read More

Riding the Rays: Sunswift Racing Shines in World Solar Challenge Race

In the world’s largest solar race car event of the year, the University of New South Wales Sunswift Racing team is having its day in the sun.

The World Solar Challenge, which began some 35 years ago, attracts academic participants from across the globe. This year’s event drew nearly 100 competitors.

The race runs nearly 1,900 miles over the course of about four days and pits challengers in a battle not for speed but for greatest energy efficiency.

UNSW Sydney won the energy efficiency competition and crossed the finish line first, taking the Cruiser Cup with its Sunswift 7 vehicle, which uses NVIDIA Jetson Xavier NX for energy optimization. It was also the only competitor to race with four people on board and a remote mission control team.

“It’s a completely different proposition to say we can use the least amount of energy and arrive in Adelaide before anybody else, but crossing the line first is just about bragging rights,” said Richard Hopkins, project manager at Sunswift and a UNSW professor. Hopkins previously managed Formula 1 race teams in the U.K.

Race organizers bill the event, which cuts across the entire Australian continent on public roads — from Darwin in the north to Adelaide in the south — as the “world’s greatest innovation and engineering challenge contributing to a more sustainable mobility future.” It’s also become a launchpad for students pursuing career paths in the electric vehicle industry.

Like many of the competitors, UNSW is coming back after a three-year hiatus from the race due to the COVID-19 pandemic, making this year’s competition highly anticipated.

“Every single team member needs to understand what they’re doing and what their role is on the team, and perform at their very best during those five-and-a-half days,” said Hopkins. “It is exhausting.”

All In on Energy Efficiency  

The race allows participants to start with a fully charged battery and to charge when the vehicles stop for the night at two locations. The remaining energy used, some 90%, comes from the sun and the vehicles’ solar panels.

UNSW’s seventh-generation Sunswift 7 runs algorithms to optimize for energy efficiency, essentially shutting down all nonessential computing to maximize battery life.

The solar electric vehicle relies on NVIDIA Jetson AI to give it an edge across its roughly 100 automotive monitoring and power management systems.

It can also factor in whether it should drive faster or slower based on weather forecasts. For instance, the car will urge the driver to go faster if it’s going to rain later in the day when conditions would force the car to slow down.
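
The decision logic can be pictured with a toy sketch like the one below; the real system weighs far more signals (solar irradiance, battery state, route gradient), and the numbers here are purely illustrative.

```python
def plan_speed_kmh(forecast: dict[int, float], base: float = 75.0, boost: float = 85.0) -> float:
    """If rain is likely later in the day, bank distance now by driving faster,
    since wet conditions will force the car to slow down. `forecast` maps hour -> rain probability."""
    rain_later = any(prob > 0.5 for hour, prob in forecast.items() if hour >= 12)
    return boost if rain_later else base

print(plan_speed_kmh({9: 0.1, 13: 0.7, 16: 0.8}))  # 85.0: rain expected this afternoon
```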

The Sunswift 7 vehicle was designed to mostly drive in a straight line from Darwin to Adelaide, and the objective is to use the least amount of power outside of that mission, said Hopkins.

“Sunswift 7 late last year was featured in the Guinness Book of World Records for being the fastest electric vehicle for over 1,000 kilometers on a single charge of battery,” he said.

Jetson-Based Racers for Learning

The UNSW team created nearly 60 design iterations to improve on the aerodynamics of the vehicle. They used computational fluid dynamics modeling and ran simulations to analyze each version.

“We didn’t ever put the car through a physical wind tunnel,” said Hopkins.

The technical team has been working on a model to determine what speed the vehicle should be driven at for maximum energy conservation. “They’re working on taking in as many parameters as you can, given it’s really hard to get good driving data,” said Josh Bramley, technology manager at Sunswift Racing.

Sunswift 7 is running on the Robot Operating System (ROS) suite of software and relies on its NVIDIA Jetson module to process all the input from the sensors for analytics, which can be monitored by the remote pit crew back on campus at UNSW.

Jetson is used for all the control systems on the car, so everything from the accelerator pedal, wheel sensors, solar current sensors and more are processed on it for data to analyze for ways AI might help, said Bramley. The next version of the vehicle is expected to pack more AI, he added.

“A lot of the AI and computer vision will be coming for Sunswift 8 in the next solar challenge,” said Bramley.

More than 100 students are getting course credit for the Sunswift Racing team work, and many are interested in pursuing careers in electric vehicles, said Hopkins.

Past World Solar Challenge contestants have gone on to work at Tesla, SpaceX and Zipline.

Talk about a bright future.

Learn more about the NVIDIA Jetson platform for edge AI and robotics.

Read More

DLSS 3.5 With Ray Reconstruction Now Available in NVIDIA Omniverse

The highly anticipated NVIDIA DLSS 3.5 update, including Ray Reconstruction for NVIDIA Omniverse — a platform for connecting and building custom 3D tools and apps — is now available.

RTX Video Super Resolution (VSR) will be available with tomorrow’s NVIDIA Studio Driver release — which also supports the DLSS 3.5 update in Omniverse and is free for RTX GPU owners. The VSR 1.5 update delivers greater overall graphical fidelity, upscaling for native videos and support for GeForce RTX 20 Series GPUs.

NVIDIA Creative Director and visual effects producer Sabour Amirazodi returns In the NVIDIA Studio to share his Halloween-themed project: a full projection mapping show on his house, featuring haunting songs, frightful animation, spooky props and more.

Creators can join the #SeasonalArtChallenge by submitting harvest- and fall-themed pieces through November.

The latest Halloween-themed Studio Standouts video features ghouls, creepy monsters, haunted hospitals and dimly lit homes; it is not for the faint of heart.

Remarkable Ray Reconstruction

NVIDIA DLSS 3.5 — featuring Ray Reconstruction — enhances ray-traced image quality on GeForce RTX GPUs by replacing hand-tuned denoisers with an NVIDIA supercomputer-trained AI network that generates higher-quality pixels in between sampled rays.

Previewing content in the viewport, even with high-end hardware, can sometimes offer less than ideal image quality, as traditional denoisers require hand-tuning for every scene.

With DLSS 3.5, the AI neural network recognizes a wide variety of scenes, producing high-quality preview images and drastically reducing time spent rendering scenes.

NVIDIA Omniverse and the USD Composer app — featuring the Omniverse RTX Renderer — specialize in real-time preview modes, offering ray-tracing inference and higher-quality previews while building and iterating.

The feature can be enabled by opening “Render Settings” under “Ray Tracing,” opening the “Direct Lighting” tab and ensuring “New Denoiser (experimental)” is turned on.

The ‘Haunted Sanctuary’ Returns

Sabour Amirazodi’s “home-made” installation, Haunted Sanctuary, has become an annual tradition, much to the delight of his neighbors.

Crowds form to watch the spectacular Halloween light show.

Amirazodi begins by staging props, such as pumpkins and skeletons, around his house.

Physical props add to the spooky atmosphere.

Then he carefully positions his projectors — building protective casings to keep them both safe and blended into the scene.

Amirazodi custom builds, paints and welds his projector cases to match the Halloween-themed decor.

“In the last few years, I’ve rendered 32,862 frames of 5K animation out of the Octane Render Engine. The loop has now become 21 minutes long, and the musical show is another 28 minutes!” — Sabour Amirazodi

Building a virtual scene onto a physical object requires projection mapping, so Amirazodi used NVIDIA GPU-accelerated MadMapper software and its structured light-scan feature to map custom visuals onto his house. He achieved this by connecting a DSLR camera to his mobile workstation, which was powered by an NVIDIA RTX A5000 GPU.

He used the camera to shoot a series of lines and capture photos. Then he translated the captured imagery into the projector’s point of view to serve as the base for a 3D model. Basic camera-matching tools found in Cinema 4D helped recreate the scene. Afterward, Amirazodi applied various mapping and perspective correction edits.

Projection mapping requires matching the virtual world with real-world specifications, done in Cinema 4D.

Next, Amirazodi animated and rigged the characters. GPU acceleration in the viewport enabled smooth interactivity with complex 3D models.

“I like having a choice between several third-party NVIDIA GPU-accelerated 3D renderers, such as V-Ray, OctaneRender and Redshift in Cinema 4D,” noted Amirazodi.

“I switched to NVIDIA graphics cards in 2017. GPUs are the only way to go for serious creators.” — Sabour Amirazodi

Amirazodi then spent hours on his RTX 6000 workstation creating and rendering out all the animations, assembling them in Adobe After Effects and compositing them on the scanned canvas in MadMapper. There, he crafted individual scenes to render out as chunks and assembled them in Adobe Premiere Pro. Remarkably, he repeated this workflow for every projector.

Once satisfied with the sequences, Amirazodi encoded everything using Adobe Media Encoder and loaded them onto BrightSign digital players — all networked to run the show synchronously.

Amirazodi used the advantages of GPU acceleration to streamline his workflow — saving him countless hours. “After Effects has numerous plug-ins that are GPU-accelerated — plus, Adobe Premiere Pro and Media Encoder use the new dual encoders found in the Ada generation of NVIDIA RTX 6000 GPUs, cutting my export times in half,” he said.

Smooth timeline movement in Adobe Premiere Pro assisted by the NVIDIA RTX A6000 GPU.

Amirazodi’s careful efforts are all in the Halloween spirit — creating a hauntingly memorable experience for his community.

“The hard work and long nights all become worth it when I see the smile on my kids’ faces and all the joy it brings to the entire neighborhood,” he reflected.

NVIDIA Creative Director Sabour Amirazodi.

Discover more of Amirazodi’s work on IMDb.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Read More

Silicon Volley: Designers Tap Generative AI for a Chip Assist

A research paper released today describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors.

The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair.

Multiple engineering teams coordinate for as long as two years to construct one of these digital megacities.

Some groups define the chip’s overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.

A Broad Vision for LLMs

“I believe over time large language models will help all the processes, across the board,” said Mark Ren, an NVIDIA Research director and lead author on the paper.

Bill Dally, NVIDIA’s chief scientist, announced the paper today in a keynote at the International Conference on Computer-Aided Design, an annual gathering of hundreds of engineers working in the field called electronic design automation, or EDA.

“This effort marks an important first step in applying LLMs to the complex work of designing semiconductors,” said Dally at the event in San Francisco. “It shows how even highly specialized fields can use their internal data to train useful generative AI models.”

ChipNeMo Surfaces

The paper details how NVIDIA engineers created for their internal use a custom LLM, called ChipNeMo, trained on the company’s internal data to generate and optimize software and assist human designers.

Long term, engineers hope to apply generative AI to each stage of chip design, potentially reaping significant gains in overall productivity, said Ren, whose career spans more than 20 years in EDA.

After surveying NVIDIA engineers for possible use cases, the research team chose three to start: a chatbot, a code generator and an analysis tool.

Initial Use Cases

The last of these — an analysis tool that automates the time-consuming task of maintaining updated descriptions of known bugs — has been the most well-received so far.

A prototype chatbot that responds to questions about GPU architecture and design helped many engineers quickly find technical documents in early tests.

Animation of a generative AI code generator using an LLM
A code generator will help designers write software for a chip design.

A code generator in development (demonstrated above) already creates snippets of about 10-20 lines of software in two specialized languages chip designers use. It will be integrated with existing tools, so engineers have a handy assistant for designs in progress.

Customizing AI Models With NVIDIA NeMo

The paper mainly focuses on the team’s work gathering its design data and using it to create a specialized generative AI model, a process portable to any industry.

As its starting point, the team chose a foundation model and customized it with NVIDIA NeMo, a framework for building, customizing and deploying generative AI models that’s included in the NVIDIA AI Enterprise software platform. The selected NeMo model sports 43 billion parameters, a measure of its capability to understand patterns. It was trained using more than a trillion tokens, the words and symbols in text and software.

Diagram of the ChipNeMo workflow for training a custom model
ChipNeMo provides an example of how one deeply technical team refined a pretrained model with its own data.

The team then refined the model in two training rounds, the first using about 24 billion tokens worth of its internal design data and the second on a mix of about 130,000 conversation and design examples.
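
The two rounds follow a common recipe: continued pretraining on raw domain text, then supervised fine-tuning on curated examples. The sketch below is not the ChipNeMo pipeline itself, just a minimal illustration of that recipe, assuming a Hugging Face-style causal language model that returns a loss when given labels; the dataloaders are placeholders.

```python
import torch

def adapt_in_two_rounds(model, domain_loader, instruction_loader, lr=1e-5, device="cuda"):
    """Round 1: next-token prediction on in-house design documents and code.
    Round 2: the same objective on curated conversation/design examples."""
    model.to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for loader in (domain_loader, instruction_loader):
        for batch in loader:  # each batch: dict with "input_ids" and "labels" tensors
            loss = model(input_ids=batch["input_ids"].to(device),
                         labels=batch["labels"].to(device)).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```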

The work is among several examples of research and proofs of concept of generative AI in the semiconductor industry, just beginning to emerge from the lab.

Sharing Lessons Learned

One of the most important lessons Ren’s team learned is the value of customizing an LLM.

On chip-design tasks, custom ChipNeMo models with as few as 13 billion parameters match or exceed performance of even much larger general-purpose LLMs like LLaMA2 with 70 billion parameters. In some use cases, ChipNeMo models were dramatically better.

Along the way, users need to exercise care in what data they collect and how they clean it for use in training, he added.

Finally, Ren advises users to stay abreast of the latest tools that can speed and simplify the work.

NVIDIA Research has hundreds of scientists and engineers worldwide focused on topics such as AI, computer graphics, computer vision, self-driving cars and robotics. Other recent projects in semiconductors include using AI to design smaller, faster circuits and to optimize placement of large blocks.

Enterprises looking to build their own custom LLMs can get started today using the NeMo framework, available from GitHub and the NVIDIA NGC catalog.

Read More

Turning the Tide on Coral Reef Decline: CUREE Robot Dives Deep With Deep Learning

Researchers are taking deep learning for a deep dive, literally.

The Woods Hole Oceanographic Institution (WHOI) Autonomous Robotics and Perception Laboratory (WARPLab) and MIT are developing a robot for studying coral reefs and their ecosystems.

The WARPLab autonomous underwater vehicle (AUV), enabled by an NVIDIA Jetson Orin NX module, is an effort from the world’s largest private ocean research institution to turn the tide on reef declines.

Some 25% of coral reefs worldwide have vanished in the past three decades, and most of the remaining reefs are heading for extinction, according to the WHOI Reef Solutions Initiative.

The AUV, dubbed CUREE (Curious Underwater Robot for Ecosystem Exploration), gathers visual, audio, and other environmental data alongside divers to help understand the human impact on reefs and the sea life around them. The robot runs an expanding collection of NVIDIA Jetson-enabled edge AI to build 3D models of reefs and to track creatures and plant life. It also runs models to navigate and collect data autonomously.

WHOI, whose submarine first explored the Titanic in 1986, is developing its CUREE robot for data gathering to scale the effort and aid in mitigation strategies. The oceanic research organization is also exploring the use of simulation and digital twins to better replicate reef conditions and investigate solutions like NVIDIA Omniverse, a development platform for building and connecting 3D tools and applications.

Creating a digital twin of Earth in Omniverse, NVIDIA is developing the world’s most powerful AI supercomputer for predicting climate change, called Earth-2.

Underwater AI: DeepSeeColor Model

Anyone who’s gone snorkeling knows that seeing underwater isn’t as clear as seeing on land. Over distance, water attenuates the visible spectrum of sunlight, muting some colors more than others. At the same time, particles in the water create a hazy view, known as backscatter.

A team from WARPLab recently published a research paper on undersea vision correction that helps mitigate these problems and supports the work of CUREE. The paper describes a model, called DeepSeeColor, that uses a sequence of two convolutional neural networks to reduce backscatter and correct colors in real time on the NVIDIA Jetson Orin NX while undersea.

“NVIDIA GPUs are involved in a large portion of our pipeline because, basically, when the images come in, we use DeepSeeColor to color correct them, and then we can do the fish detection and transmit that to a scientist up at the surface on a boat,” said Stewart Jamieson, a robotics Ph.D. candidate at MIT and AI developer at WARPLab.
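
The flavor of that pipeline can be sketched with two tiny convolutional stages, one estimating backscatter to subtract and one restoring attenuated color; this is an illustrative stand-in, not the DeepSeeColor architecture itself.

```python
import torch
import torch.nn as nn

class TinyUnderwaterCorrector(nn.Module):
    """Illustrative two-stage model: remove haze (backscatter), then rebalance color."""
    def __init__(self):
        super().__init__()
        self.backscatter = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),   # predicted backscatter in [0, 1]
        )
        self.gain = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Softplus(),  # positive per-pixel color gain
        )

    def forward(self, image):  # image: (batch, 3, H, W) with values in [0, 1]
        direct = torch.clamp(image - self.backscatter(image), min=0.0)  # haze removed
        return torch.clamp(direct * self.gain(direct), max=1.0)         # colors restored

corrected = TinyUnderwaterCorrector()(torch.rand(1, 3, 240, 320))
```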

Eyes and Ears: Fish and Reef Detection

CUREE packs four forward-facing cameras, four hydrophones for underwater audio capture, depth sensors and inertial measurement unit sensors. GPS doesn’t work underwater, so it is only used to initialize the robot’s starting position while on the surface.

Using a combination of cameras and hydrophones along with AI models running on the Jetson Orin NX enables CUREE to collect data for producing 3D models of reefs and undersea terrains.

To use the hydrophones for audio data collection, CUREE needs to drift with its motor off so that there’s no interference with the audio.

“It can build a spatial soundscape map of the reef, using sounds produced by different animals,” said Yogesh Girdhar, an associate scientist at WHOI, who leads WARPLab. “We currently (in post-processing) detect where all the chatter associated with bioactivity hotspots is,” he added, referring to all the noises of sea life.

The team has been training detection models for both audio and video input to track creatures. But a big noise interference with detecting clear audio samples has come from one creature in particular.

“The problem is that, underwater, the snapping shrimps are loud,” said Girdhar. On land, this classic dilemma of how to separate sounds from background noises is known as the cocktail party problem. “If only we could figure out an algorithm to remove the effects of sounds of snapping shrimps from audio, but at the moment we don’t have a good solution,” said Girdhar.

Despite few underwater datasets in existence, pioneering fish detection and tracking is going well, said Levi Cai, a Ph.D. candidate in the MIT-WHOI joint program. He said they’re taking a semi-supervised approach to the marine animal tracking problem. The tracking is initialized using targets detected by a fish detection neural network trained on open-source datasets for fish detection, which is fine-tuned with transfer learning from images gathered by CUREE.

“We manually drive the vehicle until we see an animal that we want to track, and then we click on it and have the semi-supervised tracker take over from there,” said Cai.
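
A stripped-down version of that hand-off might look like the sketch below: the operator’s click seeds an initial box, and the track is carried forward by greedily matching each frame’s detections to the previous box. The `detect` callable stands in for the fine-tuned fish detector; the real tracker is considerably more sophisticated.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def track_selected_animal(frames, detect, selected_box, min_iou=0.3):
    """Follow the operator-selected box through later frames by greedy IoU matching."""
    track = [selected_box]
    for frame in frames:
        detections = detect(frame)
        best = max(detections, key=lambda d: iou(d, track[-1]), default=None)
        if best is not None and iou(best, track[-1]) >= min_iou:
            track.append(best)
        else:
            track.append(track[-1])  # hold the last known position if the match is weak
    return track
```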

Jetson Orin Energy Efficiency Drives CUREE

Energy efficiency is critical for small AUVs like CUREE. The compute requirements for data collection consume roughly 25% of the available energy resources, with driving the robots taking the remainder.

CUREE typically operates for as long as two hours on a charge, depending on the reef mission and the observation requirements, said Girdhar, who goes on the dive missions in St. John in the U.S. Virgin Islands.

To enhance energy efficiency, the team is looking into AI for managing the sensors so that computing resources automatically stay awake while making observations and sleep when not in use.

“Our robot is small, so the amount of energy spent on GPU computing actually matters — with Jetson Orin NX our power issues are gone, and it’s made our system much more robust,” said Girdhar.

Exploring Isaac Sim to Make Improvements 

The WARPLab team is experimenting with NVIDIA Isaac Sim, a scalable robotics simulation application and synthetic data generation tool powered by Omniverse, to accelerate development of autonomy and observation for CUREE.

The goal is to run simple simulations in Isaac Sim that capture the core essence of the problem, then finish the training in the real world undersea, said Girdhar.

“In a coral reef environment, we cannot depend on sonars — we need to get up really close,” he said. “Our goal is to observe different ecosystems and processes happening.”

Understanding Ecosystems and Creating Mitigation Strategies

The WARPLab team intends to make the CUREE platform available for others to understand the impact humans are having on undersea environments and to help create mitigation strategies.

The researchers plan to learn from patterns that emerge from the data. CUREE provides an almost fully autonomous data collection scientist that can communicate findings to human researchers, said Jamieson. “A scientist gets way more out of this than if the task had to be done manually, driving it around staring at a screen all day,” he said.

Girdhar said that ecosystems like coral reefs can be modeled with a network, with different nodes corresponding to different types of species and habitat types. Within that, he said, there are all these different interactions happening, and the researchers seek to understand this network to learn about the relationship between various animals and their habitats.

The hope is that there’s enough data collected using CUREE AUVs to gain a comprehensive understanding of ecosystems and how they might progress over time and be affected by harbors, pesticide runoff, carbon emissions and dive tourism, he said.

“We can then better design and deploy interventions and determine, for example, if we planted new corals how they would change the reef over time,” said Girdhar.

Learn more about NVIDIA Jetson Orin NX, Omniverse and Earth-2.

Read More

The Sky’s the Limit: ‘Cities: Skylines II’ Streams This Week on GeForce NOW

The cloud is full of treats this GFN Thursday with Cities: Skylines II now streaming, leading 15 newly supported games this week. The game’s publisher, Paradox Interactive, is offering GeForce NOW one-month Priority memberships for those who pick up the game first, so make sure to grab one before they’re gone.

Among the newly supported additions to the GeForce NOW library are more games from the PC Game Pass catalog, including Ghostwire: Tokyo, State of Decay 2 and the Dishonored series. Members can also look forward to Alan Wake 2 — streaming soon.

Cloud City

Cities: Skylines II on GeForce NOW
If you build it, they will come.

Members can build the metropolis of their dreams this week in Cities: Skylines II, the sequel to Paradox Interactive’s award-winning city sim. Raise a city from the ground up and transform it into a thriving urban landscape. Get creative to build on an unprecedented scale while managing a deep simulation and a living economy.

The game’s AI and intricate economics mean every choice ripples through the fabric of a player’s city, so they’ll have to stay sharp — strategizing, solving problems and reacting to challenges. Build sky-high and sprawl across the map like never before. New dynamic map features affect how the city expands amid rising pollution, changing weather and seasonal challenges.

Paradox is offering one-month GeForce NOW Priority memberships to the first 100,000 people who purchase the game, so budding city planners can optimize their gameplay across nearly any device. Visit the Cities: Skylines II page for more info.

Newly Risen in the Cloud

Settle in for a spooky night with the newest PC Game Pass additions to the cloud: State of Decay 2 and the Dishonored series.

State of Decay 2: Juggernaut Edition on GeForce NOW
“The right choice is the one that keeps us alive.”

Drop into a post-apocalyptic world and fend off zombies in State of Decay 2: Juggernaut Edition from Undead Labs and Xbox Game Studios. Band together with a small group of survivors and rebuild a corner of civilization in this dynamic, open-world sandbox. Fortify home base, perform daring raids for food and supplies, and rescue other survivors who may have unique talents to contribute. Head online with friends in co-op mode for up to four players and visit their communities to help defend them and bring back rewards. No two players’ experiences will be the same.

Dishonor on you, dishonor on your cow, “Dishonored” in the cloud.

Get supernatural with the Dishonored series, which comprises first-person action games set in a steampunk Lovecraftian world. In Dishonored, follow the story of Corvo Attano — a former bodyguard turned assassin driven by revenge after being framed for the murder of the Empress of Dunwall. Choose stealth or violence with Dishonored’s flexible combat system and Corvo’s supernatural abilities.

The Definitive Edition includes the original Dishonored game with updated graphics, the “Void Walker’s Arsenal” add-on pack, plus expansion packs for more missions: “The Knife of Dunwall,” “The Brigmore Witches” and “Dunwall City Trials.”

Follow up with the sequel, Dishonored 2, set 15 years after Dishonored. Members can play as Corvo or his daughter, Emily, who seeks to reclaim her rightful place as the Empress of Dunwall. Dishonored: Death of the Outsider is the latest in the series, following the story of former assassin Billie Lurk on her mission to discover the origins of a mysterious entity called The Outsider.

It’s Getting Dark in Here

Maybe you should be afraid of the dark after all.

Alan Wake 2, the long-awaited sequel to Remedy Entertainment’s survival-horror classic, is coming soon to the cloud.

What begins as a small-town murder investigation rapidly spirals into a nightmare journey. Uncover the source of a supernatural darkness in this psychological horror story filled with suspense and unexpected twists. Play as FBI agent Saga Anderson and Alan Wake, a horror writer long trapped in the Dark Place, to see events unfold from different perspectives.

Ultimate members will soon be able to uncover mysteries with the power of a GeForce RTX 4080 server in the cloud. Survive the surreal world of Alan Wake 2 at up to 4K resolution and 120 frames per second, with path-traced graphics accelerated and enhanced by NVIDIA DLSS 3.5 and NVIDIA Reflex technology.

Trick or Treat: Give Me All New Games to Beat

Ghostwire Tokyo on GeForce NOW
I ain’t afraid of no ghost.

It’s time for a bewitching new list of games in the cloud. Ghostwire: Tokyo from Bethesda is an action-adventure game set in a modern-day Tokyo mysteriously depopulated by a paranormal phenomenon. Team with a spectral entity to fight the supernatural forces that have taken over the city, including ghosts, yokai and other creatures from Japanese folklore.

Jump into the action now with 15 new games this week:

Make sure to check out the question of the week. Share your answer on Twitter or in the comments below.

Read More

Next-Gen Neural Networks: NVIDIA Research Announces Array of AI Advancements at NeurIPS

NVIDIA researchers are collaborating with academic centers worldwide to advance generative AI, robotics and the natural sciences — and more than a dozen of these projects will be shared at NeurIPS, one of the world’s top AI conferences.

Set for Dec. 10-16 in New Orleans, NeurIPS brings together experts in generative AI, machine learning, computer vision and more. Among the innovations NVIDIA Research will present are new techniques for transforming text to images, photos to 3D avatars, and specialized robots into multi-talented machines.

“NVIDIA Research continues to drive progress across the field — including generative AI models that transform text to images or speech, autonomous AI agents that learn new tasks faster, and neural networks that calculate complex physics,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “These projects, often done in collaboration with leading minds in academia, will help accelerate developers of virtual worlds, simulations and autonomous machines.”

Picture This: Improving Text-to-Image Diffusion Models

Diffusion models have become the most popular type of generative AI models to turn text into realistic imagery. NVIDIA researchers have collaborated with universities on multiple projects advancing diffusion models that will be presented at NeurIPS.

  • A paper accepted as an oral presentation focuses on improving generative AI models’ ability to understand the link between modifier words and main entities in text prompts. While existing text-to-image models asked to depict a yellow tomato and a red lemon may incorrectly generate images of yellow lemons and red tomatoes, the new model analyzes the syntax of a user’s prompt, encouraging a bond between an entity and its modifiers to deliver a more faithful visual depiction of the prompt.
  • SceneScape, a new framework using diffusion models to create long videos of 3D scenes from text prompts, will be presented as a poster. The project combines a text-to-image model with a depth prediction model that helps the videos maintain plausible-looking scenes with consistency between the frames — generating videos of art museums, haunted houses and ice castles (pictured above).
  • Another poster describes work that improves how text-to-image models generate concepts rarely seen in training data. Attempts to generate such images usually result in low-quality visuals that aren’t an exact match to the user’s prompt. The new method uses a small set of example images that help the model identify good seeds — random number sequences that guide the AI to generate images from the specified rare classes.
  • A third poster shows how a text-to-image diffusion model can use the text description of an incomplete point cloud to generate missing parts and create a complete 3D model of the object. This could help complete point cloud data collected by lidar scanners and other depth sensors for robotics and autonomous vehicle AI applications. Collected imagery is often incomplete because objects are scanned from a specific angle — for example, a lidar sensor mounted to a vehicle would only scan one side of each building as the car drives down a street.

Character Development: Advancements in AI Avatars

AI avatars combine multiple generative AI models to create and animate virtual characters, produce text and convert it to speech. Two NVIDIA posters at NeurIPS present new ways to make these tasks more efficient.

  • A poster describes a new method to turn a single portrait image into a 3D head avatar while capturing details including hairstyles and accessories. Unlike current methods that require multiple images and a time-consuming optimization process, this model achieves high-fidelity 3D reconstruction without additional optimization during inference. The avatars can be animated either with blendshapes, which are 3D mesh representations used to represent different facial expressions, or with a reference video clip where a person’s facial expressions and motion are applied to the avatar.
  • Another poster by NVIDIA researchers and university collaborators advances zero-shot text-to-speech synthesis with P-Flow, a generative AI model that can rapidly synthesize high-quality personalized speech given a three-second reference prompt. P-Flow features better pronunciation, human likeness and speaker similarity compared to recent state-of-the-art counterparts. The model can near-instantly convert text to speech on a single NVIDIA A100 Tensor Core GPU.

Research Breakthroughs in Reinforcement Learning, Robotics

In the fields of reinforcement learning and robotics, NVIDIA researchers will present two posters highlighting innovations that improve the generalizability of AI across different tasks and environments.

  • The first proposes a framework for developing reinforcement learning algorithms that can adapt to new tasks while avoiding the common pitfalls of gradient bias and data inefficiency. The researchers showed that their method — which features a novel meta-algorithm that can create a robust version of any meta-reinforcement learning model — performed well on multiple benchmark tasks.
  • Another by an NVIDIA researcher and university collaborators tackles the challenge of object manipulation in robotics. Prior AI models that help robotic hands pick up and interact with objects can handle specific shapes but struggle with objects unseen in the training data. The researchers introduce a new framework that estimates how objects across different categories are geometrically alike — such as drawers and pot lids that have similar handles — enabling the model to more quickly generalize to new shapes.

Supercharging Science: AI-Accelerated Physics, Climate, Healthcare

NVIDIA researchers at NeurIPS will also present papers across the natural sciences — covering physics simulations, climate models and AI for healthcare.

  • To accelerate computational fluid dynamics for large-scale 3D simulations, a team of NVIDIA researchers proposed a neural operator architecture that combines accuracy and computational efficiency to estimate the pressure field around vehicles — the first deep learning-based computational fluid dynamics method on an industry-standard, large-scale automotive benchmark. The method achieved 100,000x acceleration on a single NVIDIA Tensor Core GPU compared to another GPU-based solver, while reducing the error rate. Researchers can incorporate the model into their own applications using the open-source neuraloperator library (a minimal sketch of the Fourier-layer idea behind neural operators follows this list).
  • A consortium of climate scientists and machine learning researchers from universities, national labs, research institutes, Allen AI and NVIDIA collaborated on ClimSim, a massive dataset for physics and machine learning-based climate research that will be shared in an oral presentation at NeurIPS. The dataset covers the globe over multiple years at high resolution — and machine learning emulators built using that data can be plugged into existing operational climate simulators to improve their fidelity, accuracy and precision. This can help scientists produce better predictions of storms and other extreme events.
  • NVIDIA Research interns are presenting a poster introducing an AI algorithm that provides personalized predictions of the effects of medicine dosage on patients. Using real-world data, the researchers tested the model’s predictions of blood coagulation for patients given different dosages of a treatment. They also analyzed the new algorithm’s predictions of the antibiotic vancomycin levels in patients who received the medication — and found that prediction accuracy significantly improved compared to prior methods.
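
For readers curious what a neural operator looks like in code, below is a minimal PyTorch sketch of the Fourier layer at the heart of such architectures; the paper’s model is far more elaborate, and this version keeps only the positive-frequency modes for brevity.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """One Fourier layer: FFT, learned mixing of the lowest modes, inverse FFT."""
    def __init__(self, in_channels, out_channels, modes1, modes2):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes1, modes2, dtype=torch.cfloat)
        )

    def forward(self, x):                       # x: (batch, in_channels, H, W)
        x_ft = torch.fft.rfft2(x)               # frequency domain, shape (..., H, W//2 + 1)
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x.size(-2), x.size(-1) // 2 + 1,
            dtype=torch.cfloat, device=x.device,
        )
        # Mix only the lowest Fourier modes; higher frequencies are dropped.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.weights
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to the spatial grid

# Hypothetical usage: map an encoding of vehicle geometry to a pressure field estimate.
layer = SpectralConv2d(in_channels=1, out_channels=1, modes1=12, modes2=12)
pressure = layer(torch.randn(4, 1, 64, 64))
```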

NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Read More