Rock ‘n’ Robotics: The White Stripes’ AI-Assisted Visual Symphony

Playfully blending art and technology, underground animator Michael Wartella has teamed up with artificial intelligence to breathe new life into The White Stripes’ fan-favorite song, “Black Math.”

The video was released earlier this month to celebrate the 20th anniversary of the groundbreaking “Elephant” album.

Wartella is known for his genre-bending work as a cartoonist and animator.

His Brooklyn-based Dream Factory Animation studio produced the “Black Math” video, which combines digital and practical animation techniques with AI-generated imagery.

“This track is 20 years old, so we wanted to give it a fresh look, but we wanted it to look like it was cut from the same cloth as classic White Stripes videos,” Wartella said.

For the “Black Math” video, Wartella turned to Automatic1111, a popular open-source interface for the Stable Diffusion image-generation model. To create the video, Wartella and his team started with the actual album cover, using AI to “bore” into the image.

They then used those results to further train the AI, building more images in a similar style. “That was really crazy and interesting and everything built from there,” Wartella said.

This image-to-image deep learning model caused a sensation on its release last year, and is part of a new generation of AI tools that are transforming the arts.
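
Wartella’s exact pipeline isn’t public, but the underlying image-to-image step can be sketched with Stable Diffusion through the Hugging Face diffusers library (the prompt, file names and strength value below are illustrative assumptions, not details from the production):

```python
# Minimal image-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# This approximates the "start from the album cover" step described above;
# the prompt, file names and strength value are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # runs on an NVIDIA GPU

# Start from an existing image, e.g. a scan of the album art (hypothetical file).
init_image = Image.open("album_cover.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="red, white and black collage, early-2000s music video style",
    image=init_image,
    strength=0.55,       # how far to drift from the source image (0..1)
    guidance_scale=7.5,  # how strongly to follow the prompt
)
result.images[0].save("frame_000.png")
```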

“We used several different AI tools and animation tools,” Wartella said. “For every shot, I wanted this to look like an AI video, the way those classic CGI videos look very CGI now.”

Wartella and his team relied heavily on archived images and video of the musician duo as well as motion-capture techniques to create a video replicating the feel of late-1990s and early-2000s music videos.

Wartella has long relied on NVIDIA GPUs to run a full complement of digital animation tools on workstations from Austin, Texas-based BOXX Technologies.

“We’ve used BOXX workstations with NVIDIA cards for almost 20 years now,” he said. “That combination is just really powerful — it’s fast, it’s stable.”

Wartella describes his work on the “Black Math” video as a “collaboration” with the AI tool, using it to generate images, tweaking the results and then returning to the technology for more.

“I see this as a collaboration, not just pressing a button. It’s an incredibly creative tool,” Wartella said of generative AI.

The results were sometimes “kind of strange,” a quality that Wartella prizes.

He took the output from the AI, ran it through conventional composition and editing tools, and then processed the results through AI again.

Wartella felt that working with AI in this way made the video stronger and more abstract.

Wartella and his team used generative AI to create something that feels both different and familiar to White Stripes fans.

The video presents Jack and Meg White in their 2003 personas, emerging from a whimsical, dark cyber fantasy.

The video parallels the look and feel of the band’s videos from the early 2000s, even as it leans into the otherworldly, almost kaleidoscopic qualities of modern generative AI.

“The lyrics are anti-authoritarian and punkish, so the sound steered this one in that direction,” Wartella said. “The song itself has a scientific theme that is already a perfect fit for the AI.”

When “Black Math” was first released as part of The White Stripes’ critically acclaimed “Elephant” album, it grabbed attention for its high-energy, powerful guitar riffs and Jack White’s unmistakable vocals.

The song played a role in cementing the band’s reputation as a critical player in the garage rock revival of the early 2000s.

Wartella’s inventive approach with “Black Math” highlights the growing use of AI — as well as lively discussion of its implications — among creatives.

Over the past few months, AI-generated art has been increasingly prevalent across various social media platforms, thanks to tools like Midjourney, OpenAI’s DALL·E, DreamStudio and Stable Diffusion.

As AI advances, Wartella said, we can expect to see more artists exploring the potential of these tools in their work.

“I’m in full favor of people having the opportunity to play around with the technology,” Wartella said. “We’ll definitely use AI again if the song or the project calls for it.”

The release of the “Black Math” music video coincides with the launch of “The White Stripes Elephant (20th Anniversary)” deluxe vinyl reissue package, available now through Jack White’s Third Man Records and Sony Legacy Recordings.

Watch the “Black Math” music video:


What Is Agent Assist?

“Please hold” may be the two words that customers hate most — and that contact center agents take pains to avoid saying.

Providing fast, accurate, helpful responses based on contextually relevant information is key to effective customer service. It’s even better if answers are personalized and take into account how a customer might be feeling.

All of this is made easier and quicker for human agents by what the industry calls agent assists.

Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.

It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.

How Agent Assist Technology Works

Agent assist technology gives human agents AI-powered information and real-time recommendations that can enhance their customer conversations.

Taking conversations as input, agent assist technology outputs accurate, timely suggestions on how to best respond to queries — using a combination of automatic speech recognition (ASR), natural language processing (NLP), machine learning and data analytics.

While a customer speaks to a human agent, ASR tools — like the NVIDIA Riva software development kit — transcribe speech into text, in real time. The text can then be run through NLP, AI and machine learning models that offer recommendations to the human agent by analyzing different aspects of the conversation.
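
Because the exact services vary by deployment, the following is only a schematic sketch of that loop in Python; transcribe_audio_chunk and suggest_reply are hypothetical stand-ins for real ASR and NLP backends (such as Riva and a retrieval service), not actual SDK calls:

```python
# Schematic agent-assist loop: ASR -> NLP analysis -> real-time suggestion.
# transcribe_audio_chunk() and suggest_reply() are hypothetical placeholders
# for real services; they are not actual library calls.
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    topic: str                        # detected topic, e.g. "phone-plan change"
    documents: list = field(default_factory=list)  # records surfaced for the agent
    reply_hint: str = ""              # suggested wording for the agent

def transcribe_audio_chunk(chunk: bytes) -> str:
    """Hypothetical: send raw audio to an ASR service, get text back."""
    raise NotImplementedError

def suggest_reply(transcript: str) -> Suggestion:
    """Hypothetical: run NLP models over the running transcript."""
    raise NotImplementedError

def agent_assist_loop(audio_chunks):
    transcript = ""
    for chunk in audio_chunks:
        transcript += transcribe_audio_chunk(chunk)  # real-time speech-to-text
        yield suggest_reply(transcript)              # shown on the agent's screen
```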

First, AI models can evaluate the context of the conversation, identify topics and bring up relevant information for the human agent — like the customer’s account data, a record of their previous inquiries, documents with recommended products and additional information to help resolve issues.

Say a customer is looking to switch to a new phone plan. The agent assist could, for example, immediately display a chart on the human agent’s screen comparing the company’s offerings, which can be used as a reference throughout the conversation.

Another AI model can perform sentiment analysis based on the words a customer is using.

For example, if a customer says, “I’m extremely frustrated with my cellular reception,” the agent assist would advise the human agent to approach the customer differently from a situation where the customer says, “I am happy with my phone plan but am looking for something less expensive.”
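
A production agent assist would use domain-tuned models, but the idea can be illustrated with an off-the-shelf classifier from the Hugging Face transformers library (outputs shown are representative, not exact):

```python
# Off-the-shelf sentiment analysis over customer utterances. A real agent
# assist would use domain-tuned models; this only shows how the two example
# sentences above separate by sentiment.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

print(classifier("I'm extremely frustrated with my cellular reception"))
# -> [{'label': 'NEGATIVE', 'score': ...}]  (representative output)

print(classifier("I am happy with my phone plan but am looking for "
                 "something less expensive"))
# -> [{'label': 'POSITIVE', 'score': ...}]  (representative output)
```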

It can even present a human agent with verbiage to consider using when soothing, encouraging, informing or otherwise guiding a customer toward conflict resolution.

And, at a conversation’s conclusion, agent assist technology can provide personalized, best next steps for the human agent to give the customer. It can also offer the human agent a summary of the interaction overall, along with feedback to inform future conversations and employee training.

All such ASR, NLP and AI-powered capabilities come together in agent assist technology, which is becoming increasingly integral to businesses across industries.

How Agent Assist Technology Helps Businesses, Customers

By tapping into agent assist technology, businesses can improve productivity, employee retention and customer satisfaction, among other benefits.

For one, agent assist technology reduces contact center call times. Through NLP and intelligent routing algorithms, it can identify customer needs in real time, so human agents don’t need to hunt for basic customer information or search databases for answers.

Leading telecom provider T-Mobile — which offers award-winning service across its Customer Experience Centers — uses agent assist technology to help tackle millions of daily customer care calls. The NVIDIA NeMo framework helped the company achieve 10% higher accuracy for its ASR-generated transcripts across noisy environments, and Riva reduced latency for its agent assist by 10x. (Dive deeper into speech AI by watching T-Mobile’s on-demand NVIDIA GTC session.)

Agent assist technology also speeds up the onboarding process for human agents, helping them quickly become familiar with the products and services offered by their organization. In addition, it empowers contact center employees to provide high levels of service while maintaining low levels of stress — which means higher employee retention for enterprises.

Quicker, more accurate conflict resolution enabled by agent assist also leads to more positive contact center experiences, happier customers and increased loyalty for businesses.

Use Cases Across Industries

Agent assist technology can be used across industries, including:

  • Telecom — Agent assist can provide automated troubleshooting, technical tips and other helpful information for agents to relay to customers.
  • Retail — Agent assist can suggest products, features, pricing, inventory information and more in real time, as well as translate languages according to customer preferences.
  • Financial services — Agent assist can help detect fraud attempts by providing real-time alerts, so that human agents are aware of any suspicious activity throughout an inquiry.

Minerva CQ, a member of the NVIDIA Inception program for cutting-edge startups, provides agent assist technology that brings together real-time, adaptive workflows with behavioral cues, dialogue suggestions and knowledge surfacing to drive faster, better outcomes. Its technology — based on Riva, NeMo and NVIDIA Triton Inference Server — focuses on helping human agents in the energy, healthcare and telecom sectors.

History and Future of Agent Assist

Predecessors of agent assist technology can be traced back to the 1950s, when computer-based systems first replaced manual call routing.

More recently came intelligent virtual assistants, which are usually automated systems or bots that don’t have a human working behind them.

Smart devices and mobile technology have led to a rise in the popularity of these intelligent virtual assistants, which can answer questions, set reminders, play music, control home devices and handle other simple tasks.

But complex tasks and inquiries — especially for enterprises with customer service at their core — can be solved most efficiently when human agents are augmented by AI-powered suggestions. This is where agent assist technology has stepped in.

The technology has much potential for further advancement, with challenges including:

  • Developing methods for agent assists to adapt to changing customer expectations and preferences.
  • Further ensuring data privacy and security through encryption and other methods to strip conversations of confidential or sensitive information before running them through agent assist AI models.
  • Integrating agent assist with other emerging technologies like interactive digital avatars, which can see, hear, understand and communicate with end users to help customers while boosting their sentiment.

Learn more about NVIDIA speech AI technologies.


Welcome to the Family: GeForce NOW, Capcom Bring ‘Resident Evil’ Titles to the Cloud

Horror descends from the cloud this GFN Thursday with the arrival of publisher Capcom’s iconic Resident Evil series.

They’re part of nine new games expanding the GeForce NOW library of over 1,600 titles.

More RTX 4080 SuperPODs just in time to play “Resident Evil” titles.

RTX 4080 SuperPODs are now live in Miami, Portland, Ore., and Stockholm. Follow along with the server rollout process, and make the Ultimate upgrade for unbeatable cloud gaming performance.

Survive in the Cloud

“Resident Evil” now resides in the cloud.

The Resident Evil series makes its debut on GeForce NOW with Resident Evil 2, Resident Evil 3 and Resident Evil 7 Biohazard.

Survive against hordes of flesh-eating zombies and other bio-organic creatures created by the sinister Umbrella Corporation in these celebrated — and terrifying — Resident Evil games. The survival horror games feature memorable casts of characters and gripping storylines to keep members glued to their seats.

With RTX ON and high dynamic range, Ultimate and Priority members will also experience the most realistic lighting and deepest shadows. Bonus points for streaming with the lights off for an even more immersive experience.

The Newness

The Resident Evil titles lead nine new games joining the GeForce NOW library:

  • Shadows of Doubt (New release on Steam, April 24)
  • Afterimage (New release on Steam, April 24)
  • Roots of Pacha (New release on Steam, April 25)
  • Bramble: The Mountain King (New release on Steam, April 27)
  • The Swordsmen X: Survival (New release on Steam, April 27)
  • Poker Club (Free on Epic Games Store, April 27)
  • Resident Evil 2 (Steam)
  • Resident Evil 3 (Steam)
  • Resident Evil 7 Biohazard (Steam)

And check out the question of the week. Let us know your answer in the comments below, or on the GeForce NOW Facebook and Twitter channels.


Viral NVIDIA Broadcast Demo Drops Hammer on Imperfect Audio This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows.

Content creators in all fields can benefit from free, AI-powered technology available from NVIDIA Studio.

The Studio platform delivers RTX acceleration in over 110 popular creative apps plus an exclusive suite of AI-powered Studio software. NVIDIA Omniverse interconnects 3D workflows, Canvas turns simple brushstrokes into realistic landscape images and RTX Remix helps modders create stunning RTX remasters of classic PC games.

Spotlighted by this week’s In the NVIDIA Studio featured artist Unmesh Dinda, NVIDIA Broadcast transforms the homes, apartments and dorm rooms of content creators, livestreamers and people working from home through the power of AI — all without the need for specialized equipment.

Host of the widely watched YouTube channel PiXimperfect, Dinda takes the noise-canceling and echo-removal AI features in Broadcast to extremes. He turned the perfect demo into a viral hit, editing it faster with RTX acceleration in his go-to video-editing software, Adobe Premiere Pro.

It’s Hammer Time

NVIDIA Broadcast has several popular features, including virtual background, auto frame, video noise removal, eye contact and vignette effects.

Two of the most frequently used features, noise and echo removal, caught the attention of Dinda, who saw Broadcast’s potential and wanted to show creators how to instantly improve their content.

The foundation of Dinda’s tutorial style came from his childhood. “My father would sit with me every day to help me with schoolwork,” he said. “He always used to explain with examples which were crystal clear to me, so now I do the same with my channel.”

Dinda contemplated how to demonstrate this incredible technology in a quick, relatable way.

“Think of a crazy idea that grabs attention instantly,” said Dinda. “Concepts like holding a drill in both hands or having a friend play drums right next to me.”

Dinda took the advice of famed British novelist William Golding, who once said, “The greatest ideas are the simplest.” Dinda’s final concept was a scene of a hammer hitting a helmet on his own head.

It turns out that seeing — and hearing — is believing.

Even with an electric fan whirring directly into his microphone and intense hammering on his helmet, Dinda can be heard crystal clear with Broadcast’s noise-removal feature turned on. To help emphasize the sorcery, Dinda briefly turns the feature off in the demo to reveal the painful sound his viewers would hear without it.

The demo launched on Instagram a few months ago and went viral overnight. Across social media platforms, the video now has over 12 million views and counting.

Dinda wasn’t harmed in the making of this video.

Views are fantastic, but the real gratification of Dinda’s work comes from a genuine desire to improve his followers’ skillsets, he said.

“The biggest inspiration comes from viewers,” said Dinda. “When they comment, message or meet me at an event to say how much the content has helped their career, it inspires me to create more and reach more creatives.”

 

Learn more and download Broadcast, free for all GeForce RTX GPU owners.

Hammer Out the Details

Dinda uses Adobe Premiere Pro to edit his videos, and his GeForce RTX 3080 Ti plays a major part in accelerating his creative workflow.

“I work with and render high-resolution videos on a daily basis, especially with Adobe Premiere Pro. Having a GPU like the GeForce RTX 3080 Ti helps me render and publish in time.” — Unmesh Dinda

He uses the GPU-accelerated decoder, called NVDEC, to unlock smooth playback and scrubbing of the high-resolution video footage he often works with.

As his hammer-filled Broadcast demo launched on several social media platforms, Dinda had the option to deploy the AI-powered, RTX-accelerated auto reframe feature. It automatically and intelligently tracks objects, and crops landscape video to social-media-friendly aspect ratios, saving even more time.

Dinda also used Adobe Photoshop to add graphical overlays to the video. With more than 30 GPU-accelerated features at his disposal — such as super resolution, blur gallery, object selection, smart sharpen and perspective warp — he can improve and adjust footage, quickly and easily.

 

Dinda used the GPU-accelerated NVIDIA encoder, aka NVENC, to export video up to 5x faster with his RTX GPU, saving more time on the project.

Though he’s a full-time, successful video creator, Dinda stressed, “I have a normal life outside Adobe Photoshop, I promise!”


Check out Dinda’s PiXimperfect channel, a free resource for learning Adobe Photoshop — another RTX-accelerated Studio app.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.


The Future of Intelligent Vehicle Interiors: Building Trust with HMI & AI

Imagine a future where your vehicle’s interior offers personalized experiences and builds trust through human-machine interfaces (HMI) and AI. In this episode of the NVIDIA AI Podcast, Andreas Binner, chief technology officer at Rightware, delves into this fascinating topic with host Katie Burke Washabaugh.

Rightware is a Helsinki-based company at the forefront of developing in-vehicle HMI. Its platform, Kanzi, works in tandem with NVIDIA DRIVE IX to provide a complete toolchain for designing personalized vehicle interiors for the next generation of transportation, including detailed visualizations of the car’s AI.

Binner touches on his journey into automotive technology and HMI, the evolution of infotainment in the automotive industry over the past decade, and surprising trends in HMI. They explore the influence of AI on HMI, novel AI-enabled features and the importance of trust in new technologies.

Other topics include the role of HMI in fostering trust between vehicle occupants and the vehicle, the implications of autonomous vehicle visualization, balancing larger in-vehicle screens with driver distraction risks, additional features for trust-building between autonomous vehicles and passengers, and predictions for intelligent cockpits in the next decade.

Tune in to learn about the innovations that Rightware’s Kanzi platform and NVIDIA DRIVE IX bring to the automotive industry and how they contribute to developing intelligent vehicle interiors.

Read more on the NVIDIA Blog: NVIDIA DRIVE Ecosystem Creates Pioneering In-Cabin Features With NVIDIA DRIVE IX

You Might Also Like

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.


Right on Track: NVIDIA Open-Source Software Helps Developers Add Guardrails to AI Chatbots

Newly released open-source software can help developers guide generative AI applications to create impressive text responses that stay on track.

NeMo Guardrails will help ensure smart applications powered by large language models (LLMs) are accurate, appropriate, on topic and secure. The software includes all the code, examples and documentation businesses need to add safety to AI apps that generate text.

Today’s release comes as many industries are adopting LLMs, the powerful engines behind these AI apps. They’re answering customers’ questions, summarizing lengthy documents, even writing software and accelerating drug design.

NeMo Guardrails is designed to help users keep this new class of AI-powered applications safe.

Powerful Models, Strong Rails

Safety in generative AI is an industry-wide concern. NVIDIA designed NeMo Guardrails to work with all LLMs, such as OpenAI’s ChatGPT.

The software lets developers align LLM-powered apps so they’re safe and stay within the domains of a company’s expertise.

NeMo Guardrails enables developers to set up three kinds of boundaries:

  • Topical guardrails prevent apps from veering off into undesired areas. For example, they keep customer service assistants from answering questions about the weather.
  • Safety guardrails ensure apps respond with accurate, appropriate information. They can filter out unwanted language and enforce that references are made only to credible sources.
  • Security guardrails restrict apps to making connections only to external third-party applications known to be safe.

Virtually every software developer can use NeMo Guardrails — no need to be a machine learning expert or data scientist. They can create new rules quickly with a few lines of code.
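
As a sketch of what those few lines can look like, the snippet below follows the patterns in the project’s published examples; the model settings and the Colang rule are illustrative, not a recommended production configuration:

```python
# Sketch of a topical guardrail with NeMo Guardrails, modeled on the
# project's published examples; model settings and the rule are illustrative.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

# Colang rule: keep a customer-service bot from answering weather questions.
colang_content = """
define user ask about weather
  "What's the weather like?"
  "Is it going to rain tomorrow?"

define bot deflect weather
  "I can help with questions about your account, but not the weather."

define flow weather guardrail
  user ask about weather
  bot deflect weather
"""

config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)
print(rails.generate(messages=[{"role": "user",
                                "content": "Will it rain tomorrow?"}]))
```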

Riding Familiar Tools

Since NeMo Guardrails is open source, it can work with all the tools that enterprise app developers use.

For example, it can run on top of LangChain, an open-source toolkit that developers are rapidly adopting to plug third-party applications into the power of LLMs.

“Users can easily add NeMo Guardrails to LangChain workflows to quickly put safe boundaries around their AI-powered apps,” said Harrison Chase, who created the LangChain toolkit and a startup that bears its name.

In addition, NeMo Guardrails is designed to work with a broad range of LLM-enabled applications, such as Zapier, an automation platform used by over 2 million businesses that has seen first-hand how users are integrating AI into their work.

“Safety, security, and trust are the cornerstones of responsible AI development, and we’re excited about NVIDIA’s proactive approach to embed these guardrails into AI systems,” said Reid Robinson, lead product manager of AI at Zapier.

“We look forward to the good that will come from making AI a dependable and trusted part of the future.”

Available as Open Source and From NVIDIA

NVIDIA is incorporating NeMo Guardrails into the NVIDIA NeMo framework, which includes everything users need to train and tune language models using a company’s proprietary data.

Much of the NeMo framework is already available as open source code on GitHub. Enterprises also can get it as a complete and supported package, part of the NVIDIA AI Enterprise software platform.

NeMo is also available as a service. It’s part of NVIDIA AI Foundations, a family of cloud services for businesses that want to create and run custom generative AI models based on their own datasets and domain knowledge.

Using NeMo, South Korea’s leading mobile operator built an intelligent assistant that’s had 8 million conversations with its customers. A research team in Sweden employed NeMo to create LLMs that can automate text functions for the country’s hospitals, government and business offices.

An Ongoing Community Effort

Building good guardrails for generative AI is a hard problem that will require lots of ongoing research as AI evolves.

NVIDIA made NeMo Guardrails — the product of several years’ research — open source to contribute to the developer community’s tremendous energy and work on AI safety.

Together, our efforts on guardrails will help companies keep their smart services aligned with safety, privacy and security requirements so these engines of innovation stay on track.

For more details on NeMo Guardrails and to get started, see our technical blog.


On Earth Day, 5 Ways AI, Accelerated Computing Are Protecting the Planet

From climate modeling to endangered species conservation, developers, researchers and companies are keeping an AI on the environment with the help of NVIDIA technology.

They’re using NVIDIA GPUs and software to track endangered African black rhinos, forecast the availability of solar energy in the U.K., build detailed climate models and monitor environmental disasters from satellite imagery.

This Earth Day, discover five key ways AI and accelerated computing are advancing sustainability, climate science and energy efficiency.

1. Applying AI to Biodiversity Conservation, Sustainable Agriculture

To protect endangered species, camera-enabled edge AI devices embedded in the environment or on drones can help scientists observe animals in the wild, monitoring their populations and detecting threats from predators and poachers.

Conservation AI, a U.K.-based nonprofit, has deployed 70+ cameras around the world powered by NVIDIA Jetson modules for edge AI. Together with the NVIDIA Triton Inference Server, the Conservation AI platform can identify species of interest from footage in just four seconds — and help conservationists detect poachers and rapidly intervene. Another research team developed an NVIDIA Jetson-based solution to monitor endangered black rhinos in Namibia using drone-based AI.

An aerial view of a rhino, observed via drone. Image credit: WildTrack.

And artist Sofia Crespo raised awareness for critically endangered plants and animals through a generative AI art display at Times Square, using generative adversarial networks trained on NVIDIA GPUs to create high-resolution visuals representing relatively unknown species.

In the field of agriculture, Bay Area startup Verdant and smart tractor company Monarch Tractor are developing AI to support sustainable farming practices, including precision spraying to reduce the use of herbicides.

2. Powering Renewable Energy Research

NVIDIA AI and high performance computing are advancing nearly every field of renewable energy research.

Open Climate Fix, a nonprofit product lab and member of the NVIDIA Inception program for startups, is developing AI models that can help predict cloud cover over solar panels — helping electric grid operators determine how much solar energy can be generated that day to help meet customers’ power needs. Startups Utilidata and Anuranet are developing AI-enabled electric meters using NVIDIA Jetson to enable a more energy efficient, resilient grid.

Siemens Gamesa Renewable Energy is working with NVIDIA to create physics-informed digital twins of wind farms using NVIDIA Omniverse and NVIDIA Modulus. U.K. company Zenotech used cloud-based GPUs to accurately simulate the likely energy output of a wind farm’s 140 turbines. And Gigastack, a consortium-led project, is using Omniverse to build a proof of concept for a wind farm that will turn water into hydrogen fuel.

Researchers at Lawrence Livermore National Laboratory achieved a breakthrough in fusion energy using HPC simulations running on Sierra, the world’s sixth-fastest HPC system, which has 17,280 NVIDIA GPUs. And the U.K.’s Atomic Energy Authority is testing the NVIDIA Omniverse simulation platform to design a fusion energy power plant.

3. Accelerating Climate Models, Weather Visualizations

Accurately modeling the atmosphere is critical to predicting climate change in the coming decades.

To better predict extreme weather events, NVIDIA created FourCastNet, a physics-ML model that can forecast the precise path of catastrophic atmospheric rivers a full week in advance.

Using Omniverse, NVIDIA and Lockheed Martin are building an AI-powered digital twin for the U.S. National Oceanic and Atmospheric Administration that could significantly reduce the amount of time necessary to generate complex weather visualizations.

An initiative from Northwestern University and Argonne National Laboratory researchers is instead taking a hyper-local approach, using NVIDIA Jetson-powered devices to better understand wildfires, urban heat islands and the effect of climate on crops.

4. Managing Environmental Disasters With Satellite Data

When it’s difficult to gauge a situation from the ground, satellite data provides a powerful vantage point to monitor and manage climate disasters.

NVIDIA is working with the United Nations Satellite Centre to apply AI to the organization’s satellite imagery technology infrastructure, an initiative that will provide humanitarian teams with near-real-time insights about floods, wildfires and other climate-related disasters.

A methane leak detected by Orbital Sidekick technology.

NVIDIA Inception member Masterful AI has developed machine learning tools that can detect climate risks from satellite and drone feeds. The model has been used to identify rusted transformers that could spark a wildfire and improve damage assessments after hurricanes.

San Francisco-based Inception startup Orbital Sidekick operates satellites that collect hyperspectral intelligence — information from across the electromagnetic spectrum. Its NVIDIA Jetson-powered AI solution can detect hydrocarbon or gas leaks from this data, helping reduce the risk of leaks becoming serious crises.

5. Advancing Energy-Efficient Computing 

On its own, adopting NVIDIA tech is already a green choice: If every CPU-only server running AI and HPC worldwide switched to a GPU-accelerated system, the world could save around 20 trillion watt-hours of energy a year, equivalent to the electricity requirements of nearly 2 million U.S. homes.


Semiconductor leaders are integrating the NVIDIA cuLitho software library to accelerate the time to market and boost the energy efficiency of computational lithography, the process of designing and manufacturing next-generation chips. And the NVIDIA Grace CPU Superchip — which scored 2x performance gains over comparable x86 processors in tests — can help data centers slash their power bills by up to half.

In the most recent MLPerf inference benchmark for AI performance, the NVIDIA Jetson AGX Orin system-on-module achieved gains of up to 63% in energy efficiency, supplying AI inference at low power levels, including on battery-powered systems.

NVIDIA last year introduced a liquid-cooled NVIDIA A100 Tensor Core GPU, which Equinix evaluated for use in its data centers. Both companies found that a data center using liquid cooling could run the same workloads as an air-cooled facility while using around 30% less energy.

Bonus: Robot-Assisted Recycling on the AI Podcast

Startup EverestLabs developed RecycleOS, an AI software and robotics solution that helps recycling facilities around the world recover an average of 25-40% more waste, ensuring fewer recyclable materials end up in landfills. The company’s founder and CEO talked about its tech on the NVIDIA AI Podcast:

Learn more about green computing, and about NVIDIA-accelerated applications in climate and energy.


Epic Benefits: Omniverse Connector for Unreal Engine Saves Content Creators Time and Effort

Content creators using Epic Games’ open, advanced real-time 3D creation tool, Unreal Engine, are now equipped with more features to bring their work to life with NVIDIA Omniverse, a platform for creating and operating metaverse applications.

The Omniverse Connector for Unreal Engine’s 201.0 update brings significant enhancements to creative workflows using both open platforms.

Streamlining Import, Export and Live Workflows

The Unreal Engine Omniverse Connector 201.0 release delivers improvements in import, export and live workflows, as well as updated software development kits.

New features include:

  • Alignment with Epic’s USD libraries and USDImporter plug-in: Improved compatibility between Omniverse and Epic’s Universal Scene Description (USD) libraries and USDImporter plug-in make it easier to transfer assets between the two platforms.
  • Python 3.9 scripts with Omniverse URLs: Unreal Engine developers and technical artists can access Epic’s built-in Python libraries by running Python 3.9 scripts with Omniverse URLs, which link to files on Omniverse Nucleus servers, helping automate tasks (a sketch follows this list).
  • Skeletal mesh blendshape import to morph targets: The Unreal Engine Connector 201.0 now allows users to import skeletal mesh blendshapes into morph targets, or stored geometry shapes that can be used for animation. This eases development and material work on characters that use NVIDIA Material Definition Language (MDL), reducing the time it takes to share character assets with other artists.
  • UsdLuxLight schema compatibility: Improved compatibility of Unreal Engine with the UsdLuxLight schema — the blueprint used to define data that describes lighting in USD — makes it easier for content creators to work with lighting in Omniverse.
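
As a sketch of the Python scripting mentioned above, the snippet below uses USD’s own Python bindings and assumes the Omniverse Nucleus client is installed so that omniverse:// URLs resolve; the server name and project path are illustrative, not real endpoints:

```python
# Sketch of an automation script over a USD stage referenced by an
# Omniverse URL. With the Connector and Nucleus client installed,
# omniverse:// paths resolve like any other USD asset path.
from pxr import Usd, UsdGeom

# Illustrative server and path, not a real endpoint.
stage = Usd.Stage.Open("omniverse://localhost/Projects/demo/scene.usd")

# List every mesh prim on the stage, e.g. to audit assets before import.
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        print(prim.GetPath())
```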

Transforming Workflows One Update at a Time

Artists and game content creators are seeing notable improvements to their workflows thanks to this connector update.

Developer and creator Abdelrazik Maghata, aka MR GFX on YouTube, recently joined an Omniverse livestream to demonstrate his workflow using Unreal Engine and Omniverse. Maghata explained how to animate a character in real time by connecting the Omniverse Audio2Face generative AI-powered application to Epic’s MetaHuman framework in Unreal Engine.

Maghata, who’s been a content creator on YouTube for 15 years, uses his platform to teach others about the benefits of Unreal Engine for their 3D workflows. He’s recently added Omniverse into his repertoire to build connections between his favorite content creation tools.

“Omniverse will transform the world of 3D,” he said.

Omniverse ambassador and short-film phenom Jae Solina often uses the Unreal Engine Connector in his creative process, as well. The connector has greatly improved his workflow efficiency and increased productivity by providing interoperability between his favorite tools, Solina said.

Getting connected is simple. Learn how to accelerate creative workflows with the Unreal Engine Omniverse Connector by watching this video:

Get Plugged Into the Omniverse 

At the recent NVIDIA GTC conference, the Omniverse team hosted many sessions spotlighting how creators can enhance their workflows with generative AI, 3D SimReady assets and more. Watch for free on demand.

Plus, join the latest Omniverse community challenge, running through the end of the month. Use the Unreal Engine Omniverse Connector and share your creation — whether it’s fan art, a video-game character or even an original game — on social media using the hashtag #GameArtChallenge for a chance to be featured on channels for NVIDIA Omniverse (Twitter, LinkedIn, Instagram) and NVIDIA Studio (Twitter, Facebook, Instagram).

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect teams. Developers can get started with these Omniverse resources.

To stay up to date on the platform, subscribe to the newsletter and follow NVIDIA Omniverse on Instagram, Medium and Twitter. Check out the Omniverse forums, Discord server, Twitch and YouTube channels.


GeForce RTX 30 Series vs. RTX 40 Series GPUs: Key Differences for Gamers

What’s the difference between NVIDIA GeForce RTX 30 and 40 Series GPUs for gamers?

To briefly set aside the technical specifications, the difference lies in the level of performance and capability each series offers.

Both deliver great graphics. Both offer advanced features driven by the AI revolution NVIDIA helped launch a decade ago. Either can power glorious high-def gaming experiences.

But the RTX 40 Series takes everything RTX GPUs deliver and turns it up to 11.

“Think of any current PC gaming workload that includes ‘future-proofed’ overkill settings, then imagine the RTX 4090 making like Grave Digger and crushing those tests like abandoned cars at a monster truck rally,” writes Ars Technica.


Common Ground: RTX 30 and 40 Series Features

That said, the RTX 30 Series and 40 Series GPUs have a lot in common.

Both offer hardware-accelerated ray tracing thanks to specialized RT Cores. They also have AI-enabling Tensor Cores that supercharge graphics. And both come loaded with support for next-generation AI and rendering technologies.

But NVIDIA’s GeForce RTX 40 Series delivers all this in a simply unmatched way.

Unveiling the GeForce RTX 40 Series

Unveiled in September 2022, the RTX 40 Series GPUs consist of four variations: the RTX 4090, RTX 4080, RTX 4070 Ti and RTX 4070.

All four are built on NVIDIA’s Ada Lovelace architecture, a significant upgrade over the NVIDIA Ampere architecture used in the RTX 30 Series GPUs.

Tensor and RT Cores Evolution

While both 30 Series and 40 Series GPUs utilize Tensor Cores, Ada’s new fourth-generation Tensor Cores are unbelievably fast, increasing throughput by up to 5x, to 1.4 Tensor-petaflops using the new FP8 Transformer Engine, first introduced in NVIDIA’s Hopper architecture H100 data center GPU.

NVIDIA made real-time ray tracing a reality with the invention of RT Cores, dedicated processing cores on the GPU designed to tackle performance-intensive ray-tracing workloads.


Advanced ray tracing requires computing the impact of many rays striking numerous different material types throughout a scene, creating a sequence of divergent, inefficient workloads for the shaders to calculate the appropriate levels of light, darkness and color while rendering a 3D scene.

Ada’s third-generation RT Cores have up to twice the ray-triangle intersection throughput, increasing RT-TFLOP performance by over 2x vs. Ampere’s best.

Shader Execution Reordering and In-Game Performance

And Ada’s new Shader Execution Reordering technology dynamically reorganizes these previously inefficient workloads into considerably more efficient ones. SER can improve shader performance for ray-tracing operations by up to 3x and in-game frame rates by up to 25%.

As a result, 40 Series GPUs excel at real-time ray tracing, delivering unmatched gameplay on the most demanding titles that support the technology, such as Cyberpunk 2077.

DLSS 3 and Optical Flow Accelerator

Ada also advances NVIDIA DLSS, which brings advanced deep learning techniques to graphics, massively boosting performance.

Powered by the new fourth-gen Tensor Cores and Optical Flow Accelerator on GeForce RTX 40 Series GPUs, DLSS 3 uses AI to create additional high-quality frames.

As a result, RTX 40 Series GPUs deliver buttery-smooth gameplay in the latest and greatest PC games.

Eighth-Generation NVIDIA Encoders

NVIDIA GeForce RTX 40 Series graphics cards also feature new eighth-generation NVENC (NVIDIA Encoders) with AV1 encoding, enabling new possibilities for streamers, broadcasters, video callers and creators.

AV1 is 40% more efficient than H.264. This allows users streaming at 1080p to increase their stream resolution to 1440p while running at the same bitrate and quality.

Remote workers will be able to communicate more smoothly with colleagues and clients. For creators, the ability to stream high-quality video with reduced bandwidth requirements can enable smoother collaboration and content delivery, allowing for a more efficient creative process.

Cutting-Edge Manufacturing and Efficiency

RTX 40 Series GPUs are also built at the absolute cutting edge, with a custom TSMC 4N process. The process and Ada architecture are ultra-efficient.

And RTX 40 Series GPUs come loaded with the memory needed to keep their Ada GPUs running at full tilt.

RTX 30 Series GPUs: Still a Solid Choice

All that said, RTX 30 Series GPUs remain powerful and popular.

Launched in September 2020, the RTX 30 Series GPUs include a range of different models, from the RTX 3050 to the RTX 3090 Ti.

All deliver the grunt to run the latest games in high definition and at smooth frame rates.

The GeForce RTX 30 Series

But while the RTX 30 Series GPUs have remained a popular choice for gamers and professionals since their release, the RTX 40 Series GPUs offer significant improvements for gamers and creators alike, particularly those who want to crank up settings with high frame rates, drive big 4K displays, or deliver buttery-smooth streaming to global audiences.

With higher performance, enhanced ray-tracing capabilities, support for DLSS 3 and better power efficiency, the RTX 40 Series GPUs are an attractive option for those who want the latest and greatest technology.


Driving Toward a Safer Future: NVIDIA Achieves Safety Milestones With DRIVE Hyperion Autonomous Vehicle Platform

More than 50 automotive companies around the world have deployed over 800 autonomous test vehicles powered by the NVIDIA DRIVE Hyperion automotive compute architecture, which has recently achieved new safety milestones.

The latest NVIDIA DRIVE Hyperion architecture is based on the DRIVE Orin system-on-a-chip (SoC). Many NVIDIA DRIVE processes, as well as hardware and software components, have been assessed and/or certified compliant to ISO 26262 by TÜV SÜD, an independent, accredited assessor that ensures compliance with the International Organization for Standardization (ISO) 26262:2018 Functional Safety Standard for Road Vehicles.

Specifically, NVIDIA DRIVE core development processes are now certified as ISO 26262 Automotive Safety Integrity Level (ASIL) D compliant. ISO 26262 is based on the concept of a safety lifecycle, which includes planning, analysis, design and implementation, verification and validation.

Additionally:

  • The NVIDIA DRIVE Orin SoC completed concept and product assessments and is deemed to meet ISO 26262 ASIL D systematic requirements and ASIL B random fault management requirements.
  • The NVIDIA DRIVE AGX Orin board completed concept assessment and is deemed to meet ISO 26262 ASIL D requirements.
  • The NVIDIA DRIVE Orin-based platform, which unifies the Orin SoC and DRIVE AGX Orin board, completed concept assessment and is deemed to meet ISO 26262 ASIL D requirements.
  • Development of NVIDIA DRIVE OS 6.x is in progress and will be assessed by TÜV SÜD. This follows the recent certification of DRIVE OS 5.2, which includes NVIDIA CUDA libraries and the NVIDIA TensorRT software development kit for real-time AI inferencing.

Building safe autonomous vehicle technology is one of NVIDIA’s largest and most important endeavors, and functional safety is the focus at every step, from design to testing to deployment.

Functional safety is paramount in the deployment of AVs — ensuring they operate safely and reliably without endangering occupants, pedestrians or other road users.

AV Functional Safety Leadership

The initial ISO 26262 ASIL D functional safety certification — and recertification — of NVIDIA’s hardware development processes, along with the assessment of two generations of SoCs that include NVIDIA GPU and Tensor Core technology, demonstrate NVIDIA’s commitment to AV functional safety.

NVIDIA’s leadership in AV safety is further exhibited in its contributions to published standards, such as ISO 26262 and ISO 21448, and to ongoing initiatives, such as ISO/TS 5083 on AV safety, ISO/PAS 8800, ISO/IEC TR 5469 on AI safety and ISO/TR 9839.

Unified Hardware and Software Architecture

NVIDIA offers a unified hardware and software architecture throughout its AV research, design and deployment infrastructure. NVIDIA DRIVE Hyperion is an end-to-end, modular development platform and reference architecture for designing autonomous vehicles. The latest generation includes the NVIDIA DRIVE AGX Orin developer kit, plus a diverse and redundant sensor suite.

Self-Driving Safety Report

Learn more about NVIDIA’s AV safety practices and technologies in the NVIDIA Self-Driving Safety Report.
