GeForce NOW Unleashes High-Stakes Horror With ‘Resident Evil Village’

Get ready to feel some chills, even amid the summer heat. Capcom’s award-winning Resident Evil Village brings a touch of horror to the cloud this GFN Thursday as one of three new games joining GeForce NOW this week.

And a new app update brings a visual enhancement to members, along with new ways to curate their GeForce NOW gaming libraries.

Greetings on GFN
#GreetingsFromGFN by @railbeam.

Members are showcasing their favorite locations to visit in the cloud. Follow along with #GreetingsFromGFN on @NVIDIAGFN social media accounts and share picturesque scenes from the cloud for a chance to be featured.

The Bell Tolls for All

Resident Evil Village on GeForce NOW
The cloud — big enough, even, for Lady Dimitrescu and her towering castle.

Resident Evil Village, the follow-up to Capcom’s critically acclaimed Resident Evil 7 Biohazard, delivers a gripping blend of survival-horror and action. Step into the shoes of Ethan Winters, a desperate father determined to rescue his kidnapped daughter.

Set against a backdrop of a chilling European village teeming with mutant creatures, the game includes a captivating cast of characters, including the enigmatic Lady Dimitrescu, who haunts the dimly lit halls of her grand castle. Fend off hordes of enemies, such as lycanthropic villagers and grotesque abominations.

Experience classic survival-horror tactics — such as resource management and exploration — mixed with action featuring intense combat and higher enemy counts.

Ultimate and Priority members can experience the horrors of this dark and twisted world in gruesome, mesmerizing detail with support for ray tracing and high dynamic range (HDR) for the most lifelike shadows and sharp visual fidelity when navigating every eerie hallway. Members can stream it all seamlessly from NVIDIA GeForce RTX-powered servers in the cloud and get a taste of the chills with the Resident Evil Village demo before taking on the towering Lady Dimitrescu in the full game.

I Can See Clearly Now

The latest GeForce NOW app update — version 2.0.64 — adds support for 10-bit color precision. Available for Ultimate members, this feature enhances image quality when streaming on Windows, macOS and NVIDIA SHIELD TV.

SDR10 on GeForce NOW
Rolling out now.

10-bit color precision significantly improves the accuracy and richness of color gradients during streaming. Members will especially notice its effects in scenes with detailed color transitions, such as for vibrant skies, dimly lit interiors, and various loading screens and menus. It’s useful for non-HDR displays and non-HDR-supported games. Find the setting in the GeForce NOW app > Streaming Quality > Color Precision, with the recommended default value of 10-bit.
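
The benefit of the extra bit depth can be sketched in a few lines of Python (an illustration of the color math, not GeForce NOW code): quantizing the same smooth gradient to 8-bit and 10-bit precision yields 256 versus 1,024 distinct levels, and those finer steps are what smooth out visible banding.

```python
# Illustration: why 10-bit color reduces banding.
# An 8-bit channel can represent 256 levels; a 10-bit channel, 1,024.

def quantize(value: float, bits: int) -> float:
    """Round a normalized color value to the nearest representable level."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# Count distinct output levels across a smooth gradient of 4,096 samples.
samples = [i / 4095 for i in range(4096)]
steps_8bit = len({quantize(v, 8) for v in samples})
steps_10bit = len({quantize(v, 10) for v in samples})

print(steps_8bit, steps_10bit)  # 256 vs. 1024 distinct levels
```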

Try it out on the neon-lit streets of Cyberpunk 2077 for smoother color transitions, and traverse the diverse landscapes of Assassin’s Creed Valhalla and other games for a more immersive streaming experience.

The update, rolling out now, also brings bug fixes and new ways to curate a member’s in-app game library. For more information, visit the NVIDIA Knowledgebase.

Lights, Camera, Action: New Games

Beyond Good and Evil 20th Anniversary Edition on GeForce NOW
Uncover the truth.

Join the rebellion as action reporter Jade in Beyond Good & Evil – 20th Anniversary Edition from Ubisoft. Embark on this epic adventure at up to 4K resolution and 60 frames per second with improved graphics and audio, a new speedrun mode, updated achievements and an exclusive anniversary gallery. Earn unique new rewards while exploring Hillys, and discover more about Jade’s past in a new treasure hunt across the planet.

Check out the list of new games this week:

  • Beyond Good & Evil – 20th Anniversary Edition (New release on Steam and Ubisoft, June 24)
  • Drug Dealer Simulator 2 (Steam)
  • Resident Evil Village (Steam)
  • Resident Evil Village Demo (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Into the Omniverse: SyncTwin Helps Democratize Industrial Digital Twins With Generative AI, OpenUSD

Editor’s note: This post is part of Into the Omniverse, a series focused on how technical artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Efficiency and sustainability are critical for organizations looking to be at the forefront of industrial innovation.

To address the digitalization needs of manufacturing and other industries, SyncTwin GmbH — a company that builds software to optimize production, intralogistics and assembly — developed a digital twin app using NVIDIA cuOpt, an accelerated optimization engine for solving complex routing problems, and NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services that enable developers to build OpenUSD-based applications.
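
As an illustration of the class of problem cuOpt accelerates, the short Python sketch below orders a handful of factory stations with a greedy nearest-neighbor heuristic. The station coordinates are invented, and cuOpt itself applies GPU-accelerated solvers to far larger, constraint-rich routing instances; this is only a stand-in for the underlying problem.

```python
# Toy routing sketch (not cuOpt code): visit every station once while
# keeping total travel distance short, using a greedy heuristic.
import math

def route_length(route, coords):
    """Total distance along a route, visiting stations in order."""
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(route, route[1:]))

def nearest_neighbor_route(coords, start=0):
    """Always move to the closest unvisited station next."""
    unvisited = set(range(len(coords))) - {start}
    route = [start]
    while unvisited:
        here = route[-1]
        nxt = min(unvisited, key=lambda j: math.dist(coords[here], coords[j]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

stations = [(0, 0), (0, 5), (5, 5), (5, 0), (2, 2)]  # made-up factory floor
route = nearest_neighbor_route(stations)
print(route, round(route_length(route, stations), 2))
```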

SyncTwin is harnessing the power of the extensible OpenUSD framework for describing, composing, simulating, and collaborating within 3D worlds to help its customers create physically accurate digital twins of their factories. The digital twins can be used to optimize production and enhance digital precision to meet industrial performance.

OpenUSD’s Role in Modern Manufacturing

Manufacturing workflows are incredibly complex, making effective communication and integration across various domains pivotal to ensuring operational efficiency. The SyncTwin app provides seamless collaboration capabilities for factory plant managers and their teams, enabling them to optimize processes and resources.

The app uses OpenUSD and Omniverse to help make factory planning and operations easier and more accessible by integrating various manufacturing aspects into a cohesive digital twin. Customers can integrate visual data, production details, product catalogs, orders, schedules, resources and production settings all in one place with OpenUSD.

The SyncTwin app creates realistic, virtual environments that facilitate seamless interactions between different sectors of factory operations. This capability enables diverse data — including floorplans from Microsoft PowerPoint and warehouse container data from Excel spreadsheets — to be aggregated in a unified digital twin.

The flexibility of OpenUSD allows for non-destructive editing and composition of complex 3D assets and animations, further enhancing the digital twin.
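
Because OpenUSD layers are human-readable text, non-destructive composition can be sketched directly. The Python snippet below builds a minimal `.usda` layer that references two hypothetical source layers (the file names are made up) instead of copying their contents, which is the mechanism that lets a digital twin aggregate data from many tools without overwriting any of them.

```python
# Minimal OpenUSD layer assembled as text (illustration only; real
# pipelines use the USD libraries). The referenced file names are
# hypothetical stand-ins for per-domain source layers.
usda = """#usda 1.0
(
    defaultPrim = "Factory"
)

def Xform "Factory"
{
    def "Floorplan" (
        references = @./floorplan.usda@
    )
    {
    }

    def "Warehouse" (
        references = @./warehouse_containers.usda@
    )
    {
    }
}
"""

print(usda)
```

Edits made in a session layer on top of this one leave `floorplan.usda` and `warehouse_containers.usda` untouched, which is what "non-destructive" means in practice.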

“OpenUSD is the common language bringing all these different factory domains into a single digital twin,” said Michael Wagner, cofounder and chief technology officer of SyncTwin. “The framework can be instrumental in dismantling data silos and enhancing collaborative efficiency across different factory domains, such as assembly, logistics and infrastructure planning.”

Hear Wagner discuss turning PowerPoint and Excel data into digital twin scenarios using the SyncTwin App in a LinkedIn livestream on July 4 at 11 a.m. CET.

Pioneering Generative AI in Factory Planning

By integrating generative AI into its platform, SyncTwin also provides users with data-driven insights and recommendations, enhancing decision-making processes.

This AI integration automates complex analyses, accelerates operations and reduces the need for manual inputs. Learn more about how SyncTwin and other startups are combining the powers of OpenUSD and generative AI to elevate their technologies in this NVIDIA GTC session.

Hear SyncTwin and NVIDIA experts discuss how digital twins are unlocking new possibilities in this recent community livestream:

By tapping into the power of OpenUSD and NVIDIA’s AI and optimization technologies, SyncTwin is helping set new standards for factory planning and operations, improving operational efficiency and supporting the vision of sustainability and cost reduction across manufacturing.

Get Plugged Into the World of OpenUSD

Learn more about OpenUSD and meet with NVIDIA experts at SIGGRAPH, taking place July 28-Aug. 1 at the Colorado Convention Center and online. Attend these SIGGRAPH highlights:

  • NVIDIA founder and CEO Jensen Huang’s fireside chat on Monday, July 29, covering the latest in generative AI and accelerated computing.
  • OpenUSD Day on Tuesday, July 30, where industry luminaries and developers will showcase how to build 3D pipelines and tools using OpenUSD.
  • Hands-on OpenUSD training for all skill levels.

Check out this video series about how OpenUSD can improve 3D workflows. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, LinkedIn, Medium and X. For more, join the Omniverse community on the forums, Discord server and YouTube channel. 

Featured image courtesy of SyncTwin GmbH.

Thinking Outside the Blox: How Roblox Is Using Generative AI to Enhance User Experiences

Roblox is a colorful online platform that aims to reimagine the way that people come together — now that vision is being augmented by generative AI. In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Anupam Singh, vice president of AI and growth engineering at Roblox, on how the company is using the technology to enhance virtual experiences with features such as automated chat filters and real-time text translation, which help build inclusivity and user safety. Singh also discusses how generative AI can be used to power coding assistants that help creators focus more on creative expression, rather than spending time manually scripting world-building features.

Time Stamps

1:49: Background on Roblox and user interactions within the platform
6:38: Singh’s insight on AI and machine learning’s role in Roblox’s growth
15:51: Using generative AI to enhance user self-expression
20:04: How generative AI simplifies content creation
24:26: What’s next for Roblox

You Might Also Like:

Media.Monks’ Lewis Smithingham on Enhancing Media and Marketing With AI – Ep. 222

In this episode, Lewis Smithingham, senior vice president of innovation and special operations at Media.Monks, discusses AI’s potential to enhance the media and entertainment industry. Smithingham delves into Media.Monks’ platform for entertainment and shares its vision of AI enhancing creativity and enabling more personalized, scalable content creation.

The Case for Generative AI in the Legal Field – Ep. 210

AI-driven digital solutions enable law practitioners to search laws and cases intelligently — automating the time-consuming process of drafting and analyzing legal documents. In this episode, Thomson Reuters Chief Product Officer David Wong discusses AI’s potential to help deliver better access to justice.

Anima Anandkumar on Using Generative AI to Tackle Global Challenges – Ep. 203

Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. Anima Anandkumar, senior director of AI research at NVIDIA, discusses generative AI’s potential to make splashes in the scientific community.

Deepdub’s Ofir Krakowski on Redefining Dubbing from Hollywood to Bollywood – Ep. 202

Deepdub acts as a digital bridge, providing access to content by using generative AI to break down language and cultural barriers in the entertainment landscape. In this episode, Deepdub co-founder and CEO Ofir Krakowski speaks on how AI-driven dubbing helps entertainment companies boost efficiency and increase accessibility.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Cut the Noise: NVIDIA Broadcast Supercharges Livestreaming, Remote Work

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

AI has changed computing forever. The spotlight has most recently been on generative AI, but AI-accelerated, NVIDIA RTX-powered tools have also been key in improving gaming, content creation and productivity over the years.

The NVIDIA Broadcast app is one example, using Tensor Cores on a local RTX GPU to seamlessly improve audio and video quality. Paired with the NVIDIA encoder (NVENC) built into GeForce RTX and NVIDIA RTX GPUs, the app makes it easy to get started as a livestreamer or to look professional during video conference calls.

The Stream Dream

High-quality livestreaming traditionally required expensive hardware. Many livestreamers relied on software CPU encoding using the x264 software library, which often impacted gameplay quality. This led many to use a dual-PC setup, with one PC focused on gaming and content and the other on encoding the stream. It was complicated to assemble, difficult to troubleshoot and often cost-prohibitive for budding livestreamers.

NVENC is here to help. It’s a dedicated hardware video encoder on NVIDIA GPUs that handles the encoding, freeing up the rest of the system to focus on game and content performance. Industry-leading streaming apps like Open Broadcaster Software (OBS) have added support for NVENC, paving the way for a new generation of broadcasters on popular platforms like Twitch and YouTube.
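
For creators scripting their own capture pipelines, handing encoding to NVENC can be as simple as selecting the `h264_nvenc` encoder in FFmpeg. The sketch below builds such a command line in Python; it assumes an FFmpeg build with NVENC support and an NVIDIA GPU, and the file names are placeholders.

```python
# Illustration: offloading video encoding to the GPU's NVENC block via
# FFmpeg. Assumes an FFmpeg build compiled with NVENC support.
import shlex

def nvenc_command(src: str, dst: str, bitrate: str = "6M") -> list[str]:
    """Build an FFmpeg command that encodes H.264 on the GPU."""
    return [
        "ffmpeg", "-y",
        "-i", src,                 # input recording or capture
        "-c:v", "h264_nvenc",      # hardware H.264 encoder on the GPU
        "-preset", "p5",           # NVENC quality/speed preset
        "-b:v", bitrate,           # target video bitrate
        "-c:a", "copy",            # leave the audio track untouched
        dst,
    ]

cmd = nvenc_command("gameplay.mkv", "stream.mp4")
print(shlex.join(cmd))
```

Because the encode runs on dedicated silicon, the CPU and the GPU's 3D engine stay free for the game itself, which is the whole point of the single-PC streaming setup described above.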

Meanwhile, NVIDIA Maxine helps solve the issue of expensive equipment. It includes free, AI-powered features like virtual green screens and webcam-based augmented reality tracking that eliminate the need for special equipment like physical green screens or motion-capture suits. Broadcasters first got to experience the technology at TwitchCon 2019, where they tested OBS live on the show floor with an AI-accelerated green screen on a GeForce RTX 2080 GPU.

Maxine’s AI-powered effects debuted for RTX users in the RTX Voice beta, and moved into the NVIDIA Broadcast app.

Now Showing: NVIDIA Broadcast

NVIDIA Broadcast offers AI-powered features that improve audio and video quality for a variety of use cases. It’s user-friendly, works in any app and is a breeze to set up.

It includes:

  • Noise and Acoustic Echo Removal: AI eliminates unwanted background noise from both the mic and inbound audio at the touch of a button.
  • Virtual Backgrounds: Features like Background Removal, Replacement and Blur help customize backgrounds without the need for expensive equipment or complex lighting setups.
  • Eye Contact: AI helps make it appear as though a streamer is looking directly at the camera, even when they’re glancing off camera or taking notes.
  • Auto Frame: Dynamically tracks movements in real time, automatically cropping and zooming moving objects regardless of their position.
  • Vignette: AI applies a darkening effect to the corners of camera images, providing visual contrast to draw attention to the center of the video and adding stylistic flair.
  • Video Noise Removal: Removes visual noise from low-light situations for a cleaner picture.

NVIDIA Broadcast works by creating a virtual camera, microphone or speaker in Windows so that users can set up their devices once and use them in any broadcasting, video conferencing or voice chat apps, including Discord, Google Meet, Microsoft Teams, OBS Studio, Slack, Webex and Zoom.

Those with an NVIDIA GeForce RTX, TITAN RTX, NVIDIA RTX or Quadro RTX GPU can use their GPU’s dedicated Tensor Cores to help the app’s AI networks run in real time.

The same AI-powered technology in NVIDIA Broadcast is also available to app developers as a software development kit. Audiovisual technology company Elgato includes Maxine’s AI audio noise removal technology in its Wave Link software, while VTube Studio — a popular app for connecting a 3D model to a webcam for streaming as an animated character — offers an RTX-accelerated model tracker plug-in as a free download. Independent developer Xaymar uses NVIDIA Maxine in his VoiceFX plug-in.

Content creators can use this plug-in or Elgato’s virtual studio technology (VST) filter to clean up noise and echo from recordings in post-processing in video editing suites like Adobe Premiere Pro or in digital audio workstations like Ableton Live and Adobe Audition.

(Not) Hearing Is Believing

Since its release, NVIDIA Broadcast has been used by millions.

“I’ve utilized the video noise removal and background replacement the most,” said Mr_Vudoo, a Twitch personality and broadcaster. “The eye contact feature was very interesting and quite honestly took me by surprise at how well it worked.”

Unmesh Dinda, host of the YouTube channel PiXimperfect, demonstrated NVIDIA Broadcast’s noise-canceling and echo-removal AI features in an extreme scenario. He set an electric fan whirring directly into his microphone and donned a helmet that someone hammered on loudly. Even with these sounds in the background, Dinda could be heard crystal clear with Broadcast’s noise-removal feature turned on. The video has racked up more than 12 million views.

NVIDIA Broadcast is also a useful tool for the growing remote workforce. In an article, Tom’s Hardware editor-in-chief Avram Piltch detailed his testing of the app’s noise reduction features against noisy air conditioners, lawn-mowing neighbors and even a robot-wielding, tantrum-throwing child. Broadcast’s AI audio filters prevailed every time:

“I got my eight-year-old to fake throwing a fit right behind me and, once I enabled noise removal, every whine of ‘I’m not going to bed’ went silent (at least on the recording),” said Piltch. “To double the challenge, we had him throw a tantrum while carrying around a robot car with whirring treads. Once again, NVIDIA Broadcast removed all of the unwanted sound.”

Even everyday scenarios like video calls with a medical professional benefit from NVIDIA Broadcast’s AI-powered background removal.

Download NVIDIA Broadcast for free on any RTX-powered desktop or laptop.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

EvolutionaryScale Debuts With ESM3 Generative AI Model for Protein Design

Generative AI has revolutionized software development with prompt-based code generation — protein design is next.

EvolutionaryScale today announced the release of its ESM3 model, the third-generation ESM model, which simultaneously reasons over the sequence, structure and functions of proteins, giving protein discovery engineers a programmable platform.

The startup, which emerged from the Meta FAIR (Fundamental AI Research) unit, recently landed funding led by Lux Capital, Nat Friedman and Daniel Gross, with investment from NVIDIA.

At the forefront of programmable biology, EvolutionaryScale can assist researchers in engineering proteins that can help target cancer cells, find alternatives to harmful plastics, drive environmental mitigations and more.

EvolutionaryScale is pioneering the frontier of programmable biology with the scale-out model development of ESM3, which used NVIDIA H100 Tensor Core GPUs for the most compute ever put into a biological foundation model. The 98 billion parameter ESM3 model uses roughly 25x more flops and 60x more data than its predecessor, ESM2.

The company, which developed a database of more than 2 billion protein sequences to train its AI model, offers drug discovery researchers technology that can provide clues for drug development, disease eradication and, as its name suggests, how humans have evolved as a species.

Accelerating In Silico Biological Research With ESM3

With leaps in training data, EvolutionaryScale aims to accelerate protein discovery with ESM3.

The model was trained on almost 2.8 billion protein sequences sampled from organisms and biomes, allowing scientists to prompt the model to identify and validate new proteins with increasing levels of accuracy.

ESM3 offers significant updates over previous versions. The model is natively generative, and it is an “all-to-all” model, meaning structure and function annotations can be provided as input rather than just as output.
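
A hypothetical sketch of what an all-to-all prompt could look like — the track names and shapes here are invented for illustration and are not EvolutionaryScale’s actual interface:

```python
# Hypothetical prompt structure (NOT the real ESM3 API): any mix of
# sequence, structure and function can be given as input, with "_"
# marking sequence positions for the model to generate.
prompt = {
    "sequence":  "MK__LV__GRT",         # partial amino-acid sequence
    "structure": {"helix": (2, 6)},     # desired secondary-structure span
    "function":  ["fluorescence"],      # desired functional annotation
}

def masked_positions(seq: str) -> list[int]:
    """Positions a generative model would be asked to fill in."""
    return [i for i, aa in enumerate(seq) if aa == "_"]

print(masked_positions(prompt["sequence"]))  # four gaps to generate
```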

Once it’s made publicly available, scientists can fine-tune this base model to construct purpose-built models based on their own proprietary data. The boost in protein engineering capabilities due to ESM3’s large-scale generative training across enormous amounts of data offers a time-traveling machine for in silico biological research.

Driving the Next Big Breakthroughs With NVIDIA BioNeMo

ESM3 provides biologists and protein designers with a generative AI boost, helping improve their engineering and understanding of proteins. With simple prompts, it can generate new proteins with a provided scaffold, self-improve its protein design based on feedback and design proteins based on the functionality that the user indicates. These capabilities can be combined for chain-of-thought protein design, letting users iterate back and forth as if messaging a researcher who has memorized the intricate three-dimensional meaning of every known protein sequence and speaks that language fluently.

“In our internal testing we’ve been impressed by the ability of ESM3 to creatively respond to a variety of complex prompts,” said Tom Sercu, co-founder and VP of engineering at EvolutionaryScale. “It was able to solve an extremely hard protein design problem to create a novel Green Fluorescent Protein. We expect ESM3 will help scientists accelerate their work and open up new possibilities — we’re looking forward to seeing how it will contribute to future research in the life sciences.”

EvolutionaryScale is opening an API for a closed beta today, and code and weights are available for a small open version of ESM3 for non-commercial use. This version is coming soon to NVIDIA BioNeMo, a generative AI platform for drug discovery. The full ESM3 family of models will soon be available to select customers as an NVIDIA NIM microservice, runtime-optimized in collaboration with NVIDIA and supported by an NVIDIA AI Enterprise software license, for testing at ai.nvidia.com.

The computing power required to train these models is growing exponentially. ESM3 was trained using the Andromeda cluster, which uses NVIDIA H100 GPUs and NVIDIA Quantum-2 InfiniBand networking.

The ESM3 model will be available on select partner platforms and NVIDIA BioNeMo.

See notice regarding software product information.

Why 3D Visualization Holds Key to Future Chip Designs

Multi-die chips, known as three-dimensional integrated circuits, or 3D-ICs, represent a revolutionary step in semiconductor design. The chips are vertically stacked to create a compact structure that boosts performance without increasing power consumption.

However, as chips become denser, they present more complex challenges in managing electromagnetic and thermal stresses. To understand and address this, advanced 3D multiphysics visualizations become essential to design and diagnostic processes.

At this week’s Design Automation Conference, a global event showcasing the latest developments in chips and systems, Ansys — a company that develops engineering simulation and 3D design software — will share how it’s using NVIDIA technology to overcome these challenges to build the next generation of semiconductor systems.

To enable 3D visualizations of simulation results for its users, Ansys uses NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services that enable developers to easily integrate Universal Scene Description (OpenUSD) and NVIDIA RTX rendering technologies into existing software tools and simulation workflows.

The platform powers visualizations of 3D-IC results from Ansys solvers so engineers can evaluate phenomena like electromagnetic fields and temperature variations to optimize chips for faster processing, increased functionality and improved reliability.

With Ansys Icepak on the NVIDIA Omniverse platform, engineers can simulate temperatures across a chip according to different power profiles and floor plans. Finding chip hot-spots can lead to better design of the chips themselves, as well as auxiliary cooling devices. However, these 3D-IC simulations are computationally intensive, limiting the number of simulations and design points users can explore.

Using NVIDIA Modulus, combined with novel techniques for handling arbitrary power patterns in the Ansys RedHawk-SC electrothermal data pipeline and model training framework, the Ansys R&D team is exploring the acceleration of simulation workflows with AI-based surrogate models. Modulus is an open-source AI framework for building, training and fine-tuning physics-ML models at scale with a simple Python interface.

With the NVIDIA Modulus Fourier neural operator (FNO) architecture, which can parameterize solutions for a distribution of partial differential equations, Ansys researchers created an AI surrogate model that efficiently predicts temperature profiles for any given power profile and a given floor plan defined by system parameters like heat transfer coefficient, thickness and material properties. This model offers near real-time results at significantly reduced computational costs, allowing Ansys users to explore a wider design space for new chips.
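
The core trick of a Fourier neural operator can be sketched in a few lines of plain Python (a toy illustration, not Ansys or Modulus code): move the signal into frequency space, scale a few low-frequency modes by learned weights, and transform back. A real FNO learns complex-valued weights per mode and layer during training; here they are hard-coded.

```python
# Toy 1D "spectral layer" in the spirit of a Fourier neural operator:
# DFT -> weight a few low-frequency modes -> inverse DFT.
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for tiny signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT matching dft() above."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def fourier_layer(x, weights):
    """Keep only the first len(weights) modes, scaled by 'learned' weights."""
    X = dft(x)
    filtered = [X[k] * weights[k] if k < len(weights) else 0j
                for k in range(len(X))]
    return [v.real for v in idft(filtered)]

signal = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0]
out = fourier_layer(signal, weights=[1.0, 0.5])  # keep DC and first mode only
print([round(v, 3) for v in out])
```

Truncating to a handful of modes is what makes inference cheap once the weights are trained, which is the source of the near real-time speedups described above.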

Ansys uses a 3D FNO model to infer temperatures on a chip surface for unseen power profiles, a given die height and heat-transfer coefficient boundary condition.

Following a successful proof of concept, the Ansys team will explore integration of such AI surrogate models for its next-generation RedHawk-SC platform using NVIDIA Modulus.

As more surrogate models are developed, the team will also look to enhance model generality and accuracy through in-situ fine-tuning. This will enable RedHawk-SC users to benefit from faster simulation workflows, access to a broader design space and the ability to refine models with their own data to foster innovation and safety in product development.

To see the joint demonstration of 3D-IC multiphysics visualization using NVIDIA Omniverse APIs, visit Ansys at the Design Automation Conference, running June 23-27, in San Francisco at booth 1308 or watch the presentation at the Exhibitor Forum.

Crack the Case With ‘Tell Me Why’ and ‘As Dusk Falls’ on GeForce NOW

Sit back and settle in for some epic storytelling. Tell Me Why and As Dusk Falls — award-winning, narrative-driven games from Xbox Game Studios — add to the 1,900+ games in the GeForce NOW library, ready to stream from the cloud.

Members can find more adventures with four new titles available this week.

Experience a Metallica concert like no other in “Metallica: Fuel. Fire. Fury.” This journey through six fan-favorite songs features gameplay that matches the intensity. “Metallica: Fuel. Fire. Fury.” will have six different showtimes running June 22-23 in Fortnite. Anyone can get a front-row seat to the interactive music experience by streaming on their mobile device, powered by GeForce NOW.

Unravel the Mystery

Whether uncovering family mysteries in Alaska or navigating small-town secrets in Arizona, gamers are set to be drawn into richly woven stories with Tell Me Why and As Dusk Falls joining the cloud this week.

Tell Me Why on GeForce NOW
Ain’t nothing but a great game.

Tell Me Why — an episodic adventure game from Dontnod Entertainment, the creators of the beloved Life Is Strange series — follows twins Tyler and Alyson Ronan as they reunite after a decade to uncover the mysteries of their troubled childhoods in the fictional town of Delos Crossing, Alaska. Experience true-to-life characters, mature themes and gripping choices.

As Dusk Falls on GeForce NOW
Every family has secrets.

Dive into the intertwined lives of two families over three decades in As Dusk Falls from INTERIOR/NIGHT. Set in small-town Arizona in the 1990s, the game’s unique art style blends 2D character illustrations with 3D environments, creating a visually striking experience. Players’ choices significantly impact the storyline, making each playthrough unique.

GeForce NOW members can now stream these award-winning titles on a variety of devices, including PCs, Macs, SHIELD TVs and Android devices. Upgrade to a Priority or Ultimate membership to enjoy enhanced streaming quality and performance, including up to 4K resolution and 120 frames per second on supported devices. Jump into these emotionally rich narratives and discover the power of choice in shaping the characters’ destinies.

Wake Up to New Games

Still Wakes the Deep on GeForce NOW
Run!

In Still Wakes the Deep from The Chinese Room and Secret Mode, play as an offshore oil rig worker fighting for dear life through a vicious storm, perilous surroundings and the dark, freezing North Sea waters. All lines of communication have been severed. All exits are gone. All that remains is the need to face the unknowable horror aboard. Live the terror and escape the rig, all from the cloud.

Check out the list of new games this week:

  • Still Wakes the Deep (New release on Steam and Xbox, available on PC Game Pass, June 18)
  • Skye: The Misty Isle (New release on Steam, June 19)
  • As Dusk Falls (Steam and Xbox, available on PC Game Pass)
  • Tell Me Why (Steam and Xbox, available on PC Game Pass)
Greetings From GFN
Make sure to catch #GreetingsFromGFN.

Plus, #GreetingsFromGFN continues on @NVIDIAGFN social media accounts, with members sharing their favorite locations to visit in the cloud.

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Decoding How NVIDIA AI Workbench Powers App Development

Decoding How NVIDIA AI Workbench Powers App Development

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible and showcases new hardware, software, tools and accelerations for NVIDIA RTX PC and workstation users.

The demand for tools to simplify and optimize generative AI development is skyrocketing. Applications based on retrieval-augmented generation (RAG) — a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from specified external sources — and customized models are enabling developers to tune AI models to their specific needs.
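The RAG pattern described above — fetch relevant facts, then feed them to the model alongside the query — can be sketched in a few lines. This toy example substitutes word-overlap scoring for a real embedding model and vector database, and stops at prompt construction where an LLM call would go; it illustrates the pattern, not any particular product's implementation.

```python
# Toy RAG pipeline: retrieve the most relevant documents for a query,
# then prepend them to the prompt sent to a language model.
# A real system would use embeddings and a vector database; here,
# word overlap stands in for similarity scoring (illustration only).

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user query with retrieved context for the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "GeForce NOW streams games from RTX servers in the cloud.",
    "AI Workbench simplifies GPU-enabled development environments.",
    "RAG grounds model answers in facts fetched from external sources.",
]
prompt = build_prompt("How does RAG improve model answers?", docs)
print(prompt)
```

The key property is that the model's answer is grounded in the retrieved context rather than in its training data alone, which is what improves accuracy and reliability.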

While such work may have required a complex setup in the past, new tools are making it easier than ever.

NVIDIA AI Workbench simplifies AI developer workflows by helping users build their own RAG projects, customize models and more. It’s part of the RTX AI Toolkit — a suite of tools and software development kits for customizing, optimizing and deploying AI capabilities — launched at COMPUTEX earlier this month. AI Workbench removes the complexity of technical tasks that can derail experts and halt beginners.

What Is NVIDIA AI Workbench?

Available for free, NVIDIA AI Workbench enables users to develop, experiment with, test and prototype AI applications across GPU systems of their choice — from laptops and workstations to data centers and the cloud. It offers a new approach for creating, using and sharing GPU-enabled development environments across people and systems.

A simple installation gets users up and running with AI Workbench on a local or remote machine in just minutes. Users can then start a new project or replicate one from the examples on GitHub. Everything works through GitHub or GitLab, so users can easily collaborate and distribute work. Learn more about getting started with AI Workbench.

How AI Workbench Helps Address AI Project Challenges

Developing AI workloads can require manual, often complex processes, right from the start.

Setting up GPUs, updating drivers and managing versioning incompatibilities can be cumbersome. Reproducing projects across different systems can require replicating manual processes over and over. Inconsistencies when replicating projects, like issues with data fragmentation and version control, can hinder collaboration. Varied setup processes, moving credentials and secrets, and changes in the environment, data, models and file locations can all limit the portability of projects.

AI Workbench makes it easier for data scientists and developers to manage their work and collaborate across heterogeneous platforms. It integrates and automates various aspects of the development process, offering:

  • Ease of setup: AI Workbench streamlines the process of setting up a developer environment that’s GPU-accelerated, even for users with limited technical knowledge.
  • Seamless collaboration: AI Workbench integrates with version-control and project-management tools like GitHub and GitLab, reducing friction when collaborating.
  • Consistency when scaling from local to cloud: AI Workbench ensures consistency across multiple environments, supporting scaling up or down from local workstations or PCs to data centers or the cloud.

RAG for Documents, Easier Than Ever

NVIDIA offers sample development Workbench Projects to help users get started with AI Workbench. The hybrid RAG Workbench Project is one example: It runs a custom, text-based RAG web application with a user’s documents on their local workstation, PC or remote system.

Every Workbench Project runs in a “container” — software that includes all the necessary components to run the AI application. The hybrid RAG sample pairs a Gradio chat interface frontend on the host machine with a containerized RAG server — the backend that services a user’s request and routes queries to and from the vector database and the selected large language model.

This Workbench Project supports a wide variety of LLMs available on NVIDIA’s GitHub page. Plus, the hybrid nature of the project lets users select where to run inference.

Workbench Projects let users version the development environment and code.

Developers can run the embedding model on the host machine and run inference locally on a Hugging Face Text Generation Inference server, on target cloud resources using NVIDIA inference endpoints like the NVIDIA API catalog, or with self-hosting microservices such as NVIDIA NIM or third-party services.
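The choice among those inference modes amounts to routing a request to a different endpoint. The hypothetical helper below sketches that kind of dispatch; the mode names and URLs are illustrative placeholders, not the hybrid RAG project's actual configuration or API.

```python
# Hypothetical inference-mode dispatch, sketching how a hybrid RAG backend
# might route a completion request. The mode names and endpoint URLs are
# illustrative placeholders, not AI Workbench's actual configuration.

INFERENCE_ENDPOINTS = {
    "local-tgi": "http://localhost:8080/generate",        # local TGI server on the host
    "cloud-api": "https://api.example.com/v1/generate",   # placeholder cloud endpoint
    "self-hosted": "http://nim.internal:8000/v1/generate",  # placeholder microservice
}

def route_request(mode: str, prompt: str) -> dict:
    """Build a request descriptor for the chosen inference mode."""
    if mode not in INFERENCE_ENDPOINTS:
        raise ValueError(f"unknown inference mode: {mode}")
    return {"url": INFERENCE_ENDPOINTS[mode], "payload": {"inputs": prompt}}

req = route_request("local-tgi", "Summarize my documents.")
print(req["url"])
```

Because only the endpoint changes, the rest of the pipeline — retrieval, prompt construction, response handling — stays identical across local and remote inference.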

The hybrid RAG Workbench Project also includes:

  • Performance metrics: Users can evaluate how RAG- and non-RAG-based user queries perform across each inference mode. Tracked metrics include Retrieval Time, Time to First Token (TTFT) and Token Velocity.
  • Retrieval transparency: A panel shows the exact snippets of text — retrieved from the most contextually relevant content in the vector database — that are being fed into the LLM and improving the response’s relevance to a user’s query.
  • Response customization: Responses can be tweaked with a variety of parameters, such as maximum tokens to generate, temperature and frequency penalty.
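Metrics like the first two above can be derived from token arrival timestamps. The sketch below is a generic way to compute time to first token and token velocity from a streaming response; it is not the Workbench project's implementation.

```python
# Generic sketch of streaming-inference metrics computed from timestamps.
# Not the hybrid RAG Workbench Project's actual implementation.

def streaming_metrics(request_t: float, token_times: list[float]) -> dict:
    """Compute time-to-first-token (TTFT) and token velocity
    from the request time and each token's arrival time (seconds)."""
    ttft = token_times[0] - request_t
    duration = token_times[-1] - request_t
    velocity = len(token_times) / duration if duration > 0 else 0.0
    return {"ttft_s": ttft, "tokens_per_s": velocity}

# Example: request sent at t=0.0, 10 tokens arriving from 0.5s to 2.5s.
times = [0.5 + 2.0 * i / 9 for i in range(10)]
m = streaming_metrics(0.0, times)
print(m)  # {'ttft_s': 0.5, 'tokens_per_s': 4.0}
```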

To get started with this project, simply install AI Workbench on a local system. The hybrid RAG Workbench Project can be brought from GitHub into the user’s account and duplicated to the local system.

More resources are available in the AI Decoded user guide. In addition, community members provide helpful video tutorials, like the one from Joe Freeman below.

Customize, Optimize, Deploy

Developers often seek to customize AI models for specific use cases. Fine-tuning, a technique that changes the model by training it with additional data, can be useful for style transfer or changing model behavior. AI Workbench helps with fine-tuning, as well.

The Llama-factory AI Workbench Project enables QLoRa, a fine-tuning method that minimizes memory requirements, for a variety of models, as well as model quantization via a simple graphical user interface. Developers can use public or their own datasets to meet the needs of their applications.

Once fine-tuning is complete, the model can be quantized for improved performance and a smaller memory footprint, then deployed to native Windows applications for local inference or to NVIDIA NIM for cloud inference. Find a complete tutorial for this project on the NVIDIA RTX AI Toolkit repository.
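Quantization, mentioned above, maps floating-point weights to low-bit integers to shrink the memory footprint. The minimal symmetric-quantization sketch below shows the core idea for a 4-bit target; production quantizers (such as those in the RTX AI Toolkit pipeline) use calibration data and per-channel scales rather than this single-scale toy.

```python
# Minimal symmetric quantization sketch: map float weights to signed
# 4-bit integers and back. Illustrative only; real quantization
# toolchains use calibration and per-channel scales.

def quantize_int4(weights: list[float]) -> tuple[list[int], float]:
    """Quantize to the signed 4-bit range [-8, 7] with a single scale."""
    scale = max(abs(w) for w in weights) / 7  # map the largest weight to +/-7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the int4 codes."""
    return [v * scale for v in q]

w = [0.12, -0.53, 0.7, -0.02]
q, s = quantize_int4(w)
approx = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, approx))
print(q)        # quantized codes
print(max_err)  # reconstruction error, bounded by scale/2
```

Each weight now needs 4 bits instead of 32, an 8x reduction, at the cost of a bounded rounding error — the trade-off that makes quantized models both smaller and faster to serve.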

Truly Hybrid — Run AI Workloads Anywhere

The hybrid RAG Workbench Project described above is hybrid in more than one way. In addition to offering a choice of inference mode, the project can be run locally on NVIDIA RTX workstations and GeForce RTX PCs, or scaled up to remote cloud servers and data centers.

The ability to run projects on systems of the user’s choice — without the overhead of setting up the infrastructure — extends to all Workbench Projects. Find more examples and instructions for fine-tuning and customization in the AI Workbench quick-start guide.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Read More

Light Bulb Moment: NVIDIA CEO Sees Bright Future for AI-Powered Electric Grid

Light Bulb Moment: NVIDIA CEO Sees Bright Future for AI-Powered Electric Grid

The electric grid and the utilities managing it have an important role to play in the next industrial revolution that’s being driven by AI and accelerated computing, said NVIDIA founder and CEO Jensen Huang Monday at the annual meeting of the Edison Electric Institute (EEI), an association of U.S. and international utilities.

“The future of digital intelligence is quite bright, and so the future of the energy sector is bright, too,” said Huang in a keynote before an audience of more than a thousand utility and energy industry executives.

Like other companies, utilities will apply AI to increase employee productivity, but “the greatest impact and return is in applying AI in the delivery of energy over the grid,” said Huang, in conversation with Pedro Pizarro, the chair of EEI and president and CEO of Edison International, the parent company of Southern California Edison, one of the nation’s largest electric utilities.

For example, Huang described how grids will use AI-powered smart meters to let customers sell their excess electricity to neighbors.

“You will connect resources and users, just like Google, so your power grid becomes a smart network with a digital layer like an app store for energy,” he said.

“My sense is, like previous industrial revolutions, [AI] will drive productivity to levels that we’ve never seen,” he added.

A video of the fireside chat will be available here soon.

AI Lights Up Electric Grids

Today, electric grids are mainly one-way systems that link a few big power plants to many users. They’ll increasingly become two-way, flexible and distributed networks with solar and wind farms connecting homes and buildings that sport solar panels, batteries and electric vehicle chargers.

It’s a big job that requires autonomous control systems that process and analyze in real time a massive amount of data — work well suited to AI and accelerated computing.

AI is being applied to use cases across electric grids, thanks to a wide ecosystem of companies using NVIDIA’s technologies.

In a recent GTC session, utility vendor Hubbell and startup Utilidata, a member of the NVIDIA Inception program, described a new generation of smart meters using the NVIDIA Jetson platform that utilities will deploy to process and analyze real-time grid data using AI models at the edge. Deloitte announced today its support for the effort.

Siemens Energy detailed in a separate GTC session its work with AI and NVIDIA Omniverse creating digital twins of transformers in substations to improve predictive maintenance, boosting grid resilience. And a video reports on how Siemens Gamesa used Omniverse and accelerated computing to optimize turbine placements for a large wind farm.

“Deploying AI and advanced computing technologies developed by NVIDIA enables faster and better grid modernization and we, in turn, can deliver for our customers,” said Maria Pope, CEO of Portland General Electric in Oregon.

NVIDIA Delivers 45,000x Gain in Energy Efficiency

The advances come as NVIDIA drives down the costs and energy needed to deploy AI.

Over the last eight years, NVIDIA has increased the energy efficiency of running AI inference on state-of-the-art large language models by a whopping 45,000x, Huang said in his recent keynote at COMPUTEX.

NVIDIA Blackwell architecture GPUs will provide 20x greater energy efficiency than CPUs for AI and high-performance computing. If all CPU servers for these jobs transitioned to GPUs, users would save 37 terawatt-hours a year, the equivalent of 25 million metric tons of carbon dioxide and the electricity use of 5 million homes.
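A quick back-of-the-envelope check shows the figures above are internally consistent: 37 terawatt-hours across 5 million homes implies roughly 7.4 MWh per home per year, and 25 million metric tons of CO2 over 37 TWh implies about 0.68 kg of CO2 per kWh — both in the range of typical per-home consumption and grid emissions factors.

```python
# Back-of-the-envelope check of the savings figures cited above.
twh_saved = 37     # terawatt-hours saved per year
homes = 5e6        # homes' worth of electricity
co2_tonnes = 25e6  # metric tons of CO2

mwh_per_home = twh_saved * 1e6 / homes               # 1 TWh = 1e6 MWh
kg_co2_per_kwh = co2_tonnes * 1000 / (twh_saved * 1e9)  # 1 TWh = 1e9 kWh

print(f"{mwh_per_home:.1f} MWh per home per year")  # 7.4
print(f"{kg_co2_per_kwh:.2f} kg CO2 per kWh")       # 0.68
```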

That’s why NVIDIA-powered systems swept the top six spots and took seven of the top 10 in the latest ranking of the Green500, a list of the world’s most energy-efficient supercomputers.

In addition, a recent report calls for governments to accelerate adoption of AI as a significant new tool to drive energy efficiency across many industries. It cited examples of utilities adopting AI to make the electric grid more efficient.

Learn more about how utilities are deploying AI and accelerated computing to improve operations, saving cost and energy.

Read More

Seamless in Seattle: NVIDIA Research Showcases Advancements in Visual Generative AI at CVPR

Seamless in Seattle: NVIDIA Research Showcases Advancements in Visual Generative AI at CVPR

NVIDIA researchers are at the forefront of the rapidly advancing field of visual generative AI, developing new techniques to create and interpret images, videos and 3D environments.

More than 50 of these projects will be showcased at the Computer Vision and Pattern Recognition (CVPR) conference, taking place June 17-21 in Seattle. Two of the papers — one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles — are finalists for CVPR’s Best Paper Awards.

NVIDIA is also the winner of the CVPR Autonomous Grand Challenge’s End-to-End Driving at Scale track — a significant milestone that demonstrates the company’s use of generative AI for comprehensive self-driving models. The winning submission, which outperformed more than 450 entries worldwide, also received CVPR’s Innovation Award.

NVIDIA’s research at CVPR includes a text-to-image model that can be easily customized to depict a specific object or character, a new model for object pose estimation, a technique to edit neural radiance fields (NeRFs) and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare and robotics.

Collectively, the work introduces powerful AI models that could enable creators to more quickly bring their artistic visions to life, accelerate the training of autonomous robots for manufacturing, and support healthcare professionals by helping process radiology reports.

“Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

At CVPR, NVIDIA also announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines of every kind.

Forget Fine-Tuning: JeDi Simplifies Custom Image Generation

Creators harnessing diffusion models, the most popular method for generating images based on text prompts, often have a specific character or object in mind — they may, for example, be developing a storyboard around an animated mouse or brainstorming an ad campaign for a specific toy.

Prior research has enabled these creators to personalize the output of diffusion models to focus on a specific subject using fine-tuning — where a user trains the model on a custom dataset — but the process can be time-consuming and inaccessible for general users.

JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago and NVIDIA, proposes a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model achieves state-of-the-art quality, significantly outperforming existing fine-tuning-based and fine-tuning-free methods.

JeDi can also be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand’s product catalog.


New Foundation Model Perfects the Pose

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine-tuning.

The model, which set a new record on a popular benchmark for object pose estimation, uses either a small set of reference images or a 3D representation of an object to understand its shape. It can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions.

FoundationPose could be used in industrial applications to help autonomous robots identify and track the objects they interact with. It could also be used in augmented reality applications where an AI model is used to overlay visuals on a live scene.

NeRFDeformer Transforms 3D Scenes With a Single Snapshot

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In fields like robotics, NeRFs can be used to generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site. However, to make any changes, developers would need to manually define how the scene has transformed — or remake the NeRF entirely.

Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method, being presented at CVPR, can successfully transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.
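An RGB-D image works because, with known camera intrinsics, each pixel's depth value can be back-projected into a 3D point — which is what lets a single snapshot describe the geometry of a changed scene. The pinhole-camera sketch below shows that back-projection; the intrinsics (focal length, principal point) are illustrative values, not from the paper.

```python
# Back-project an RGB-D pixel to a 3D point with the pinhole camera model.
# The intrinsics below (focal length, principal point) are illustrative.

def backproject(u: int, v: int, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Map pixel (u, v) with depth in meters to camera-space (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# 640x480 image, focal length 500 px, principal point at the image center.
point = backproject(u=420, v=240, depth=2.0, fx=500, fy=500, cx=320, cy=240)
print(point)  # (0.4, 0.0, 2.0)
```

Applying this to every pixel turns a single RGB-D frame into a partial point cloud of the scene, which a method like NeRFDeformer can then use to infer how the original scene has deformed.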

VILA Visual Language Model Gets the Picture

A CVPR research collaboration between NVIDIA and the Massachusetts Institute of Technology is advancing the state of the art for vision language models, which are generative AI models that can process videos, images and text.

The group developed VILA, a family of open-source visual language models that outperforms prior neural networks on key benchmarks that test how well AI models answer questions about images. VILA’s unique pretraining process unlocked new model capabilities, including enhanced world knowledge, stronger in-context learning and the ability to reason across multiple images.

VILA can understand memes and reason based on multiple images or video frames.

The VILA model family can be optimized for inference using the NVIDIA TensorRT-LLM open-source library and can be deployed on NVIDIA GPUs in data centers, workstations and even edge devices.

Read more about VILA on the NVIDIA Technical Blog and GitHub.

Generative AI Fuels Autonomous Driving, Smart City Research

A dozen of the NVIDIA-authored CVPR papers focus on autonomous vehicle research.

Also at CVPR, NVIDIA contributed the largest ever indoor synthetic dataset to the AI City Challenge, helping researchers and developers advance the development of solutions for smart cities and industrial automation. The challenge’s datasets were generated using NVIDIA Omniverse, a platform of APIs, SDKs and services that enable developers to build Universal Scene Description (OpenUSD)-based applications and workflows.

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. Learn more about NVIDIA Research at CVPR.

Read More