Startup Uses Speech AI to Coach Contact-Center Agents Into Boosting Customer Satisfaction

Minerva CQ, a startup based in the San Francisco Bay Area, is making customer service calls quicker and more efficient for both agents and customers, with a focus on those in the energy sector.

The NVIDIA Inception member’s name is a mashup of Minerva, the Roman goddess of wisdom and knowledge, and collaborative intelligence (CQ), the combination of human and artificial intelligence.

The Minerva CQ platform coaches contact-center agents to drive customer conversations — whether in voice or web-based chat — toward the most effective resolutions by offering real-time dialogue suggestions, sentiment analysis and optimal journey flows based on the customer’s intent. It also surfaces relevant context, articles, forms and more.

Powered by the NVIDIA Riva software development kit, Minerva CQ has best-in-class automatic speech recognition (ASR) capabilities in English, Spanish and Italian.

“Many contact-center solutions focus on automation through a chatbot, but our solution lets the AI augment humans to do a better job, because when humans and machines work together, they can accomplish more than what the human or machine alone could,” said Cosimo Spera, founder and CEO of Minerva CQ.

The platform first transcribes a conversation into text in real time. That text is then fed into Minerva CQ’s AI models that analyze customer sentiment, intent, propensity and more.

Minerva CQ then offers agents the best path to help their customers, along with other optional resolution paths.
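To make that flow concrete, here's a minimal sketch of the transcribe-analyze-suggest loop in Python. The function names, keyword rules and resolution paths are illustrative stand-ins (not Minerva CQ's actual models or APIs), but the structure mirrors the pipeline described above:

```python
from dataclasses import dataclass, field

@dataclass
class AgentGuidance:
    transcript: str
    sentiment: str                          # e.g. "negative" or "neutral"
    intent: str                             # e.g. "billing_dispute"
    suggested_paths: list[str] = field(default_factory=list)

def classify_sentiment(text: str) -> str:
    # Stand-in for a trained sentiment model.
    negative = {"frustrated", "angry", "wrong", "overcharged"}
    return "negative" if any(w in text.lower() for w in negative) else "neutral"

def classify_intent(text: str) -> str:
    # Stand-in for a trained intent model.
    lowered = text.lower()
    if "bill" in lowered or "charge" in lowered:
        return "billing_dispute"
    if "plan" in lowered or "tariff" in lowered:
        return "plan_change"
    return "general_inquiry"

def guide_agent(transcript: str) -> AgentGuidance:
    """Analyze a real-time transcript and rank resolution paths."""
    sentiment = classify_sentiment(transcript)
    intent = classify_intent(transcript)
    paths = {
        "billing_dispute": ["Review last invoice", "Offer payment plan"],
        "plan_change": ["Compare tariff plans", "Explain switching steps"],
        "general_inquiry": ["Surface relevant help articles"],
    }[intent]
    if sentiment == "negative":
        paths.insert(0, "Acknowledge frustration and summarize the issue")
    return AgentGuidance(transcript, sentiment, intent, paths)

print(guide_agent("I'm frustrated -- my bill is wrong this month."))
```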

The speech AI platform can understand voice- and text-based conversations within both the context of a specific exchange and the customer’s broader relationship with the business, according to Jack Garrett, vision architect at Minerva CQ.

Watch a demo of Minerva CQ at work:

Speech AI Powered by NVIDIA Riva

Minerva CQ last month announced that it built what it says is the first and most accurate Italian ASR model for enterprises, adding to the platform’s existing English and Spanish capabilities. The Italian ASR model has a word error rate of under 7% and is expected to be deployed early next year at a global energy company and telecoms provider.
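For context on that metric: word error rate counts the word substitutions, deletions and insertions needed to turn a model's transcript into the reference transcript, divided by the number of reference words. Here's a self-contained sketch of the standard calculation (an illustration, not Minerva CQ's evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in five -> 20% WER; under 7% means fewer than 7 word
# errors per 100 words of reference transcript.
print(word_error_rate("il gatto è sul tavolo", "il gatto e sul tavolo"))  # 0.2
```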

“When we were looking for the best combination of accuracy, speed and cost to help us build the ASR model, NVIDIA Riva was at the top of our list,” Spera said.

Riva enables Minerva CQ to offer real-time responses. This means the AI platform can stream, process and transcribe conversations — all in less than 300 milliseconds, or in the blink of an eye.

“Riva is also fully customizable to solve our customers’ unique problems and comes with industry-leading out-of-the-box accuracy,” said Daniel Hong, chief marketing officer at Minerva CQ. “We were able to quickly and efficiently fine-tune the pretrained language models with help from experts on the NVIDIA Riva team.”

Access to technical experts is one benefit of being part of NVIDIA Inception, a free, global program that nurtures cutting-edge startups. Spera listed AWS credits, support on experimental projects, and collaboration on go-to-market strategy among the ways Inception has bolstered Minerva CQ.

In addition to Riva, Minerva CQ uses the NVIDIA NeMo framework to build and train its conversational AI models, as well as the NVIDIA Triton Inference Server to deliver fast, scalable AI model deployment.

Complementing its focus on the customer, Minerva CQ is also dedicated to agent wellness and building capabilities to track agent satisfaction and experience. The platform enables employees to be experts at their jobs from day one — which greatly reduces stress on agents, instills confidence, and lowers attrition rates and operational costs.

Plus, Minerva CQ automatically provides summary reports of conversations, giving agents and supervisors helpful feedback, and analytics teams powerful business insights.

“All in all, Minerva CQ empowers agents with knowledge and allows them to be confident in the information they share with customers,” Hong said. “Easy customer inquiries can be tackled by automated self-service or AI chatbots, so when the agents are hit with complex questions, Minerva can help.”

Focus on Retail Energy, Electrification

Minerva CQ’s initial deployments are focused on retail energy and electrification.

For retail energy providers, the platform offers agents simple, consistent explanations of energy sources, tariff plans, billing changes and optimal spending choices.

It also assists agents in resolving complex problems for electric vehicle customers and helps EV technicians troubleshoot infrastructure and logistics issues.

“Retail energy and electrification are inherently intertwined in the movement toward decarbonization, but they can still be relatively siloed in the market space,” Garrett said. “Minerva helps bring them together.”

Minerva CQ is deployed by a leading electric mobility company as well as one of the largest utilities in the world, according to Spera.

These clients’ contact centers across the U.S. and Mexico have seen a 40% decrease in average handle time for a customer service call thanks to Minerva CQ, Spera said. Deployment is planned to expand further into the Spanish-speaking market — as well as into countries where Italian is spoken.

“We all want to save the planet, but it’s important that change come from the bottom up by empowering end users to make steps toward decarbonization,” Spera said. “Our focus is on providing customers with information so they can best transition to clean-energy-source subscriptions.”

He added, “In the coming years, we’d like to see the brand Minerva CQ become synonymous with electrification and decarbonization.”

Learn more about NVIDIA’s work with utilities and apply to join NVIDIA Inception.


See a Sea Change: 3D Researchers Bring Naval History to Life

Museumgoers will be able to explore two sunken WWII ships as if they were scuba divers on the ocean floor, thanks to work at Curtin University in Perth, Australia.

Exhibits in development, for display in Australia and potentially further afield, will use exquisitely detailed 3D models the researchers are creating to tell the story of one of the nation’s greatest naval battles.

On Nov. 19, 1941, Australia’s HMAS Sydney (II) and Germany’s HSK Kormoran lobbed hundreds of shells in a duel that lasted less than an hour. More than 700 died, including every sailor on the Sydney. Both ships sank in 8,000 feet of water, 130 miles off the coast of Western Australia, and weren’t discovered for decades.

HMAS Sydney (II) in 1940. (Photo: Allan C. Green from the State Library of Victoria)

Andrew Woods, an expert in stereoscopic 3D visualization and associate professor at Curtin, built an underwater rig with more than a dozen video and still cameras to capture details of the wrecks in 2015.

Ash Doshi, a computer vision specialist and senior research officer at Curtin, is developing and running software on NVIDIA GPUs that stitches the half-million pictures and 300 hours of video they took into virtual and printed 3D models.

3D at Battleship Scale

It’s hard, pioneering work in a process called photogrammetry. Commercially available software maxes out at around 10,000 images.

“It’s highly computationally intensive — when you double the number of images, you quadruple the compute requirements,” said Woods, who manages the Curtin HIVE, a lab with four advanced visualization systems.

“It would’ve taken a thousand years to process with our existing systems, even though they are fairly fast,” he said.

When completed next year, the work will have taken less than three years, thanks to systems at the nearby Pawsey Supercomputing Centre using NVIDIA V100 and prior-generation GPUs.

Speed Enables Iteration

Accelerated computing is critical because the work is iterative. Images must be processed, manipulated and then reprocessed.

For example, Woods said a first pass on a batch of 400 images would take 10 hours on his laptop. By contrast, he could run a first pass in 10 minutes on his system with two NVIDIA RTX A6000 GPUs awarded through NVIDIA’s Applied Research Accelerator Program.

It would take a month to process 8,000 images on the lab’s fast PCs, work the supercomputer could handle in a day. “Rarely would anyone in industry wait a month to process a dataset,” said Woods.
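The quadratic blowup Woods describes comes from pairwise matching: photogrammetry software compares each image against the others, so the work grows with the square of the image count. Here's a back-of-the-envelope sketch using the laptop figures quoted above as the baseline (actual throughput varies widely with hardware and software):

```python
def match_pairs(n_images: int) -> int:
    # Exhaustive matching compares every pair of images once.
    return n_images * (n_images - 1) // 2

def predict_hours(n_images: int, baseline_n: int, baseline_hours: float) -> float:
    # Doubling the images quadruples the work: time scales with n^2.
    return baseline_hours * (n_images / baseline_n) ** 2

# First pass on Woods' laptop: 400 images took about 10 hours.
for n in (400, 800, 8_000, 500_000):
    print(f"{n:>7} images ~ {match_pairs(n):>15,} pairs, "
          f"~{predict_hours(n, 400, 10.0):,.0f} laptop-hours")
```

At the survey's half-million images, the naive estimate lands in the thousands of years, the same order of magnitude Woods cites, which is why the project needed both smarter GPU-accelerated software and supercomputer time.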

From Films to VR

Local curators can’t wait to get the Sydney and Kormoran models on display. Half the comments on their Tripadvisor page already celebrate the 3D films the team shot of the wrecks.

The digital models will more deeply engage museumgoers with interactive virtual and augmented reality exhibits and large-scale 3D prints.

“These 3D models really help us unravel the story, so people can appreciate the history,” Woods said.

In a video call, Woods and Doshi show how forces embedded an anchor in the Kormoran’s hull as it sank.

The exhibits are expected to tour museums in Perth and Sydney, and potentially cities in Germany and the U.K., where the ships were built.

When the project is complete, the researchers aim to make their code available so others can turn historic artifacts on the seabed into rare museum pieces. Woods expects the software could also find commercial uses monitoring undersea pipelines, oil and gas rigs and more.

A Real-Time Tool

On the horizon, the researchers want to try Instant NeRF, an inverse rendering tool NVIDIA researchers developed to turn 2D images into 3D models in real time.

Woods imagines using it on future shipwreck surveys, possibly running on an NVIDIA DGX System on the survey vessel. It could provide previews in near real time based on images gathered by remotely operated underwater vehicles on the ocean floor, letting the team know when it has enough data to take back for processing on a supercomputer.

“We really don’t want to return to base to find we’ve missed a spot,” said Woods.

Woods’ passion for 3D has its roots in the sea.

“I saw the movie Jaws 3D when I was a teenager, and the images of sharks exploding out of the screen are in part responsible for taking me down this path,” he said.

The researchers released the video below to commemorate the 81st anniversary of the sinking of the WWII ships.

https://hive.curtin.edu.au/SK81st


A Force to Be Reckoned With: Lucid Group Reveals Gravity SUV, Built on NVIDIA DRIVE

Meet the electric SUV with magnetic appeal.

Lucid Group unveiled its next act, the Gravity SUV, during the AutoMobility Los Angeles auto show. The automaker also launched additional versions of the hit Lucid Air sedan — Air Pure and Air Touring.

Both models offer the future-ready DreamDrive Pro driver-assistance system, powered by the NVIDIA DRIVE platform.

Lucid launched the Air late last year to widespread acclaim. The luxury sedan won MotorTrend’s Car of the Year for 2022, with a chart-topping battery range of up to 516 miles and fast charging.

The newly introduced variants provide updated features for a wider audience. Air Pure is designed for agility, with a lightweight, compact battery and industry-leading aerodynamics.

Air Touring is the heart of the lineup, featuring more horsepower and battery range than the Pure and greater flexibility in customer options.

Lucid Air Pure

Gravity builds on this stellar reputation with an aerodynamic, spacious and intelligent design, all backed by the high-performance, centralized compute of NVIDIA DRIVE.

“Just as Lucid Air redefined the sedan category, so too will Gravity impact the world of luxury SUVs, setting new benchmarks across the board,” said Lucid Group CEO and CTO Peter Rawlinson.

Capable and Enjoyable

DreamDrive Pro is software-defined, continuously improving via over-the-air software updates.

It uses a rich suite of 14 cameras, one lidar, five radars and 12 ultrasonics running on NVIDIA DRIVE for robust automated driving and intelligent cockpit features, including surround-view monitoring, blind-spot display and highway assist.

In addition to a diversity of sensors, Lucid’s dual-rail power system and proprietary Ethernet Ring offer a high degree of redundancy for key systems, such as braking and steering.

The DreamDrive Pro system uses an array of sensors and NVIDIA DRIVE high-performance compute for intelligent driving features.

“The Lucid Air is at its core a software-defined vehicle, meaning a large part of the experience is delivered by the software,” Rawlinson said. “This makes the Lucid Air more capable and enjoyable with every passing update.”

Prepare to Launch

These new Lucid vehicles are nearly ready for liftoff.

The Lucid Air Touring has already begun production, and Air Pure will start in December, with customer deliveries soon to follow.

The automaker will open reservations for the Lucid Gravity in the spring, slating deliveries to begin in 2024.


MoMA Installation Marks Breakthrough for AI Art

AI-generated art has arrived.

With a presentation making its debut this week at The Museum of Modern Art in New York City — perhaps the world’s premier institution devoted to modern and contemporary art — the AI technologies that have upended trillion-dollar industries worldwide over the past decade will get a formal introduction.

Created by pioneering artist Refik Anadol, the installation in the museum’s soaring Gund Lobby uses a sophisticated machine-learning model to interpret the publicly available visual and informational data of MoMA’s collection.

“Right now, we are in a renaissance,” Anadol said of the presentation “Refik Anadol: Unsupervised.” “Having AI in the medium is completely and profoundly changing the profession.”

Anadol is a digital media pioneer. Throughout his career, he’s been intrigued by the intersection between art and AI. His first encounter with AI as an artistic tool was at Google, where he used deep learning — and an NVIDIA GeForce GTX 1080 Ti — to create dynamic digital artworks.

In 2017, he started working with one of the first generative AI tools, StyleGAN, created at NVIDIA Research, which generates strikingly realistic synthetic images of faces.

Anadol was more intrigued by the tool’s ability to explore more abstract imagery, training StyleGAN not on images of faces but on images of modern art, and guiding the AI’s synthesis with data streaming in from optical, temperature and acoustic sensors.

Digging Deep With MoMA

Those ideas led him to an online collaboration with The Museum of Modern Art in 2021, which was exhibited by Feral File, using more than 138,000 records from the museum’s publicly available archive. The Feral File exhibit caused an online sensation, reimagining art in real time and inspiring the wave of AI-generated art that’s spread quickly through social media communities on Instagram, Twitter, Discord and Reddit this year.

This year, he returned to MoMA to dig even deeper, collaborating again with MoMA curators Michelle Kuo and Paola Antonelli on a new major installation. On view from Nov. 19 through March 5, 2023, “Refik Anadol: Unsupervised” will use AI to interpret and transform more than 200 years of art from MoMA’s collection.

“It’s an exploration not just of the world’s foremost collection of modern art — pretty much every single pioneering sculptor, painter and even game designer of the past two centuries — but also a look inside the mind of AI, allowing us to see results of the algorithm processing data from MoMA’s collection, as well as ambient sound, temperature and light, and ‘dreaming,’” Anadol said.

Powering the system is a full suite of NVIDIA technologies. He relies on an NVIDIA DGX server equipped with NVIDIA A100 Tensor Core GPUs to train the model in real time. Another machine equipped with an NVIDIA RTX 4090 GPU translates the model into computer graphics, driving the exhibit’s display.

‘Bending Data’

“Refik is bending data — which we normally associate with rational systems — into a realm of surrealism and irrationality,” Michelle Kuo, the exhibit’s curator at the museum, told the New York Times. “His interpretation of MoMA’s dataset is essentially a transformation of the history of modern art.”

The installation comes amid a wave of excitement around generative AI, a technology that’s been put at the fingertips of amateur and professional artists alike with new tools such as Midjourney, OpenAI’s Dall·E, and DreamStudio.

And while Anadol’s work intersects with the surge of interest in NFT art that had the world buzzing in 2021, like AI-generated art as a whole, it goes far beyond that moment.

Inspired by Cutting-Edge Research

Anadol’s work digs deep into MoMA’s archives and cutting-edge AI, relying on a technology developed at NVIDIA Research called StyleGAN. David Luebke, vice president of graphics research at NVIDIA, said he first got excited about generative AI’s artistic and creative possibilities when he saw NVIDIA researcher Janne Hellsten’s demo of StyleGAN2 trained on stylized artistic portraits.

“Suddenly, one could fluidly explore the content and style of a generated image or have it react to ambient effects like sound or even weather,” Luebke said.

NVIDIA Research has been pushing forward the state of the art in generative AI since at least 2017, when NVIDIA developed “Progressive GANs,” which used AI to synthesize highly realistic, high-resolution images of human faces for the first time. This was followed by StyleGAN, which achieved even higher-quality results.

Each year after that, NVIDIA released a paper that advanced the state of the art. StyleGAN has proved to be a versatile platform, Luebke explained, enabling countless other researchers and artists like Anadol to bring their ideas to life.

Democratizing Content Creation

Much more is coming. Modern generative AI models have shown the capability to generalize beyond particular subjects, such as images of human faces or cats or cars, and encompass language models that let users specify the image they want in natural language or in other intuitive ways, such as inpainting, Luebke explained.

“This is exciting because it democratizes content creation,” Luebke said. “Ultimately, generative AI has the potential to unlock the creativity of everybody from professional artists, like Refik, to hobbyists and casual artists, to school kids.”

Anadol’s work at MoMA offers a taste of what’s possible. “Refik Anadol: Unsupervised,” the artist’s first U.S. solo museum presentation, features three new digital artworks by the Los Angeles-based artist that use AI to dynamically explore MoMA’s collection on a vast 24-by-24-foot digital display. It’s as much a work of architecture as it is one of art.

“Often, AI is used to classify, process and generate realistic representations of the world,” the exhibition’s organizer, Michelle Kuo, told Archinect, a leading publication covering contemporary art and architecture. “Anadol’s work, by contrast, is visionary: it explores dreams, hallucination and irrationality, posing an alternate understanding of modern art — and of artmaking itself.”

“Refik Anadol: Unsupervised” also hints at how AI will transform our future, and Anadol thinks it will be for the better. “This will just enhance our imagination,” Anadol said. “I’m seeing this as an extension of our minds.”

For more, see our exploration of Refik Anadol’s work in NVIDIA’s AI Art Gallery.


Get the Big Picture: Stream GeForce NOW in 4K Resolution on Samsung Smart TVs

Gaming in the living room is getting an upgrade with GeForce NOW.

This GFN Thursday, kick off the weekend streaming GeForce NOW on Samsung TVs, with upcoming support for 4K resolution.

Get started with the 10 new titles streaming this week.

Plus, Yes by YTL Communications, a leading 5G provider in Malaysia, today announced it will soon bring GeForce NOW powered by Yes to gamers across the country. Stay tuned for more updates.

Go Big, Go Bold With 4K on Samsung Smart TVs

GeForce NOW is making its way to 2021 Samsung Smart TV models and is already available through the Samsung Gaming Hub on 2022 Samsung TVs, so more players than ever can stream games from the cloud — no downloads, storage limits or console required.

Get tuned in to the cloud just in time for these TV streaming updates.

Even better, gaming on Samsung Smart TVs will look pixel perfect in 4K resolution. 2022 Samsung TVs and select 2021 Samsung TVs will be capable of streaming in 4K, as Samsung’s leadership in game-streaming technology and AI upscaling optimizes picture quality and the entire gaming experience.

The new TV firmware will start rolling out at the end of the month, enabling 4K resolution for Samsung Smart TV streamers with an RTX 3080 membership. RTX 3080 members will be able to stream up to 4K natively on Samsung Smart TVs for the first time, as well as get maximized eight-hour gaming sessions and dedicated RTX 3080 servers.

Here to Play Today

GFN Thursday delivers new games to the cloud every week. Jump into 10 new additions streaming today.

Delve deep into the industrial city of Tertium to combat the forces of Chaos that lurk.

Gamers who’ve preordered Warhammer 40,000: Darktide can leap thousands of years into the future a little early. Take back the city of Tertium from hordes of bloodthirsty foes in this intense, brutal action shooter streaming the Pre-Order Beta on Steam.

Members can also look for the following titles:

  • Ballads of Hongye (New release on Steam)
  • Bravery and Greed (New release on Steam)
  • TERRACOTTA (New release on Steam and Epic Games)
  • Warhammer 40,000: Darktide (New release pre-order beta access on Steam)
  • Frozen Flame (New release on Steam, Nov. 17)
  • Goat Simulator 3 (New release on Epic Games, Nov. 17)
  • Nobody — The Turnaround (New release on Steam, Nov. 17)
  • Caveblazers (Steam)
  • The Darkest Tales (Epic Games)
  • The Tenants (Epic Games)

Then jump into the new season of Rumbleverse, the free-to-play, 40-person Brawler Royale where anyone can be a champion. Take a trip on the expanded map to Low Key Key Island, master new power moves like “Jagged Edge” and earn new gear to show off your style.

And from now until Sunday, Nov. 20, snag a special upgrade to a six-month Priority Membership for just $29.99 — 40% off the standard price of $49.99. Bring a buddy to battle with you by getting them a GeForce NOW gift card.

Before you power up to play this weekend, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.


Lockheed Martin, NVIDIA to Help US Speed Climate Data to Researchers

The U.S. National Oceanic and Atmospheric Administration has selected Lockheed Martin and NVIDIA to build a prototype system to accelerate the outputs of Earth environment monitoring and their corresponding visualizations.

Using AI techniques, such a system has the potential to cut the time needed to generate complex weather visualizations by an order of magnitude.

The first-of-its-kind project for a U.S. federal agency, the Global Earth Observation Digital Twin, or EODT, will provide a prototype to visualize terabytes of geophysical data from the land, ocean, cryosphere, atmosphere and space.

By fusing data from a broad variety of sensor sources, the system will be able to deliver information that’s not just up to date, but that decision-makers have confidence in, explained Lockheed Martin Space Senior Research Scientist Lynn Montgomery.

“We’re providing a one-stop shop for researchers, and for next-generation systems, not only for current, but for recent past environmental data,” Montgomery said. “Our collaboration with NVIDIA will provide NOAA a timely, global visualization of their massive datasets.”

Building on NVIDIA Omniverse

Building on NVIDIA Omniverse, the system has the potential to serve as a clearinghouse for scientists and researchers from a broad range of government agencies, one that can be extended over time to support a wide range of applications.

Support for the EODT pilot project is one of several initiatives at NVIDIA to develop tools and technologies for large-scale, even planetary, simulations.

Last November, NVIDIA announced it will build a supercomputer, called Earth-2, devoted to predicting climate change by creating a digital twin of the planet.

NVIDIA and Lockheed Martin announced last year that they are working with the U.S. Department of Agriculture Forest Service and Colorado Division of Fire Prevention & Control to use AI and digital-twin simulation to better understand wildfires and stop their spread.

And in March, NVIDIA announced an accelerated digital twins platform for scientific computing consisting of the NVIDIA Modulus AI framework for developing physics-ML neural network models and the NVIDIA Omniverse 3D virtual-world simulation platform.

The EODT project builds on these initiatives, relying on NVIDIA Omniverse Nucleus to allow different applications to quickly import and export custom, visualizable assets to and from the effort’s central data store.

“This is a blueprint for a complex system using Omniverse, where we will have a fusion of sensor data, architectural data and AI inferred data all combined with various visualization capacities deployed to the cloud and various workstations,” said Peter Messmer, senior manager in the HPC Developer Technology group at NVIDIA. “It’s a fantastic opportunity to highlight all these components with a real-world example.”

A Fast-Moving Effort

The effort will move fast, with a demonstration of the system’s ability to visualize sea surface temperature data slated for next September. The system will take advantage of GPU computing instances from Amazon Web Services and NVIDIA DGX and OVX servers on premises.

The fast, flexible system will provide a prototype to visualize geophysical variables from a broad range of NOAA satellite and ground-based data sources.

These include temperature and moisture profiles, sea surface temperatures, sea ice concentrations and solar wind data, among others.

That data will be collected by Lockheed Martin’s OpenRosetta3D software, which is widely used for sophisticated large-scale image analysis, workflow orchestration and sensor fusion by government agencies, such as NASA, and private industry.

NVIDIA will support the development of one-way connectors to import “snapshots” of processed geospatial datasets from Lockheed’s OpenRosetta3D technology into NVIDIA Omniverse Nucleus as Universal Scene Description inputs.

USD is an open-source, extensible ecosystem for describing, composing, simulating and collaborating within 3D worlds, originally invented by Pixar Animation Studios.
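As a rough illustration of what such a “snapshot” might look like, the sketch below uses USD's open-source Python API (the pxr module) to write a few sea surface temperature samples into a stage of the kind Nucleus could serve. The file name, prim paths and attribute name are hypothetical, not the project's actual schema:

```python
from pxr import Usd, UsdGeom, Sdf

# Create a new stage; in the EODT pipeline this would be pushed to Nucleus.
stage = Usd.Stage.CreateNew("sst_snapshot.usda")
UsdGeom.Xform.Define(stage, "/Earth")

# Store sample locations as a point cloud (positions are placeholders).
samples = UsdGeom.Points.Define(stage, "/Earth/SeaSurfaceTemperature")
samples.CreatePointsAttr([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)])

# Attach the geophysical variable as a custom per-point attribute.
temps = samples.GetPrim().CreateAttribute(
    "temperatureCelsius", Sdf.ValueTypeNames.FloatArray
)
temps.Set([18.2, 18.4, 17.9])

stage.GetRootLayer().Save()
```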

Omniverse Nucleus will be vital to making the data available fast, in part because of Nucleus’s ability to relay just what’s changed in a dataset, Montgomery explained.

Nucleus will, in turn, deliver those USD datasets to Lockheed’s Agatha 3D viewer, based on Unity, allowing users to quickly see data from multiple sensors on an interactive 3D earth and space platform.

The result is a system that will help researchers at NOAA, and, eventually, elsewhere, make decisions faster based on the latest available data.


GeForce RTX 4080 GPU Launches, Unlocking 1.6x Performance for Creators This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Content creators can now pick up the GeForce RTX 4080 GPU, available from top add-in card providers including ASUS, Colorful, Gainward, Galaxy, GIGABYTE, INNO3D, MSI, Palit, PNY and ZOTAC, as well as from system integrators and builders worldwide.

Talented filmmaker Casey Faris and his team at Release the Hounds! Studio step In the NVIDIA Studio this week to share their short, sci-fi-inspired film, Tuesday on Earth.

In addition, the November Studio Driver is ready for download to enhance existing creative app features, reduce repetitive tasks and speed up creative ones.

Plus, the NVIDIA Studio #WinterArtChallenge is underway — check out some featured artists at the end of this post.

Beyond Fast — GeForce RTX 4080 GPU Now Available

The new GeForce RTX 4080 GPU brings a massive boost in performance of up to 1.6x compared to the GeForce RTX 3080 Ti GPU, thanks to third-generation RT Cores, fourth-generation Tensor Cores, eighth-generation dual AV1 encoders and 16GB memory — plenty to edit up to 12K RAW video files or large 3D scenes.

The new GeForce RTX 4080 GPU.

3D artists can now work with accurate and realistic lighting, physics and materials while creating 3D scenes — all in real time, without proxies. DLSS 3, now available in the NVIDIA Omniverse beta, uses RTX Tensor Cores and the new Optical Flow Accelerator to generate additional frames and dramatically increase frames per second (FPS). This improves smoothness in the viewport. Unity and Unreal Engine 5 will soon release updated versions with DLSS 3.

Video and livestreaming creative workflows are also accelerated by the new AV1 encoder, with 40% increased encoding efficiency, unlocking higher resolutions and crisper image quality. AV1 is integrated in OBS Studio, DaVinci Resolve and Adobe Premiere Pro, the latter through the Voukoder plug-in.

The new dual encoders capture up to 8K resolution at 60 FPS in real time via GeForce Experience and OBS Studio, and cut video export times nearly in half. Popular video-editing apps have released updates to enable this setting, including Adobe Premiere Pro (via the popular Voukoder plug-in) and Jianying Pro — China’s top video-editing app. Blackmagic Design’s DaVinci Resolve and MAGIX Vegas Pro also added dual-encoder support this week.

State-of-the-art AI technology — including AI image generators and new editing tools in DaVinci Resolve and Adobe apps like Photoshop and Premiere Pro — is taking creators to the next level. It allows them to brainstorm concepts quickly, helps them easily apply advanced effects, and removes their tedious, repetitive tasks. Fourth-gen Tensor Cores found on GeForce RTX 40 Series GPUs help speed all of these AI tools, delivering up to a 2x increase in performance over the previous generation.

Expand creative possibilities and pick up the GeForce RTX 4080 GPU today. Check out this product finder for retail availability and visit GeForce.com for further information.

Another Tuesday on Earth

Filmmaker Casey Faris and the team at Release the Hounds! Studio love science fiction. Their short film Tuesday on Earth is an homage to their favorite childhood sci-fi flicks, including E.T. the Extra-Terrestrial, Men in Black and Critters.

It was challenging to “create something that felt epic, but wasn’t way too big of a story to fit in a couple of minutes,” Faris said.

Preproduction was mostly done with rough sketches on an iPad using the popular digital-illustration app Procreate. Next, the team filmed all the sequences. “We spent many hours out in the forest getting eaten by mosquitos, lots of time locked in a tiny bathroom and way too many lunch breaks at the secondhand store buying spaceship parts,” joked Faris.

Are you seeing what we’re seeing? Motion blur effects applied faster with RTX GPU acceleration.

All 4K footage was copied into Blackmagic Design’s DaVinci Resolve 18 through the Hedge app, which runs checksums to ensure the video files transfer intact and quickly creates backups of the footage.

“NVIDIA is the obvious choice if you talk to any creative professional. It’s never a question whether we get an NVIDIA GPU — just which one we get.” — filmmaker Casey Faris

Faris specializes in DaVinci Resolve because of its versatility. “We can do just about anything in one app, on one timeline,” he said. “This makes it really easy to iterate on our comps, re-edits and sound-mixing adjustments — all of it’s no big deal as it’s all living together.”

DaVinci Resolve is powerful, professional-grade software that relies heavily on GPU acceleration to get the job done. Faris’ GeForce RTX 3070-powered system was up to the task.

His RTX GPU afforded NVIDIA Studio benefits within DaVinci Resolve. The RTX-accelerated hardware encoder and decoder sped up video transcoding, enabling Faris to edit faster.

Footage adjustments and movement within the timeline were seamless, with virtually no slowdown, resulting in more efficient video-bay sessions.

Even color grading was sped up due to his RTX GPU, he said.

Color grade faster with NVIDIA and GeForce RTX GPUs in DaVinci Resolve.

AI-powered features accelerated by Faris’ GeForce RTX GPU played a critical role.

The Detect Scene Cuts feature, optimized by RTX GPUs, quickly detected and tagged cuts in video files, eliminating painstakingly long scrubbing sessions for manual edits — a boon for Faris’ efficiency.

To add special effects, Faris worked within the RTX GPU-accelerated Fusion page in DaVinci Resolve, a node-based workflow with hundreds of 2D and 3D tools for creating true Hollywood-caliber effects. Blockbusters like The Hunger Games and Marvel’s The Avengers were made in Fusion.

Faris used Object Mask Tracking, powered by the DaVinci Neural Engine, to intuitively isolate subjects, all with simple paint strokes. This made it much easier to mask the male hero and apply that vibrant purple hue in the background. With the new GeForce RTX 40 Series GPUs, this feature is 70% faster than with the previous generation.

“Automatic Depth Map” powered by AI in DaVinci Resolve.

In addition, Faris used the Automatic Depth Map AI feature to instantly generate a 3D depth matte of a scene, making it quick to grade the foreground separately from the background. Then, he changed the mood of the home-arrival scene by adding environmental fog effects. Various scenes mimicked the characteristics of different high-quality lenses through added blur or depth of field to further enhance shots.

3D animations in Blender.

When Faris moved to Blender Cycles for the computer-generated imagery, RTX-accelerated OptiX ray tracing in the viewport enabled him to craft 3D assets with smooth, interactive movement in photorealistic detail.

Faris is thankful to be able to share these adventures with the world. “It’s cool to teach people to be creative and make their own awesome stuff,” he added. “That’s what I like the most. We can make something cool, but it’s even better if it inspires others.”

Filmmaker Casey Faris.

Faris recently acquired the new GeForce RTX 4080 GPU to further accelerate his video editing workflows.

Get his thoughts in the video above and check out Faris’ YouTube channel.

Join the #WinterArtChallenge

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Like @RippaSats, whose celestial rendering Mystic Arctic stirs the hearts and spirits of many.

Or @CrocodilePower and her animation Reflection, which delivers more than meets the eye.

And be sure to tag #WinterArtChallenge to join.

Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.


Attention, Sports Fans! WSC Sports’ Amos Berkovich on How AI Keeps the Highlights Coming

Attention, Sports Fans! WSC Sports’ Amos Berkovich on How AI Keeps the Highlights Coming

It doesn’t matter if you love hockey, basketball or soccer. Thanks to the internet, there’s never been a better time to be a sports fan. 

But editing together so many social media clips, long-form YouTube highlights and other videos from global sporting events is no easy feat. So how are all of these craveable video packages made? 

Auto-magical video solutions help. And by auto-magical, of course, we mean powered by AI.

On this episode of the AI Podcast, host Noah Kravitz spoke with Amos Berkovich, algorithm group leader at WSC Sports, maker of an AI cloud platform that enables over 200 sports organizations worldwide to generate personalized and customized sports videos automatically and in real time.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species With NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.


Going Green: New Generation of NVIDIA-Powered Systems Show Way Forward

With the end of Moore’s law, traditional approaches to meet the insatiable demand for increased computing performance will require disproportionate increases in costs and power.

At the same time, the need to slow the effects of climate change will require more efficient data centers, which already consume more than 200 terawatt-hours of energy each year, or around 2% of the world’s energy usage.

Released today, the new Green500 list of the world’s most-efficient supercomputers demonstrates the energy efficiency of accelerated computing, which is already used in all of the top 30 systems on the list. Its impact on energy efficiency is staggering.

We estimate the TOP500 systems require more than 5 terawatt-hours of energy per year, or $750 million worth of energy, to operate.

But that could be slashed by more than 80% to just $150 million — saving 4 terawatt-hours of energy — if these systems were as efficient as the 30 greenest systems on the TOP500 list.

Conversely, with the same power budget as today’s TOP500 systems and the efficiency of the top 30 systems, these supercomputers could deliver 5x today’s performance.
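The arithmetic behind those estimates is straightforward. Here's a quick sketch using only the figures in this post (the electricity price is implied by them, not quoted):

```python
top500_energy_twh = 5.0        # estimated annual energy use, TWh
top500_cost_usd = 750e6        # estimated annual energy cost, dollars

# Implied electricity price: $750M / 5 billion kWh = $0.15 per kWh.
price_per_kwh = top500_cost_usd / (top500_energy_twh * 1e9)

efficient_cost = 150e6         # cost after the cited 80% reduction
energy_saved_twh = top500_energy_twh * (1 - efficient_cost / top500_cost_usd)

print(f"implied price: ${price_per_kwh:.2f}/kWh")
print(f"energy saved:  {energy_saved_twh:.0f} TWh/year")   # 4 TWh
```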

And the efficiency gains highlighted by the latest Green500 systems are just the start. NVIDIA is racing to deliver continuous energy improvements across its CPUs, GPUs, software and systems portfolio.

Hopper’s Green500 Debut

NVIDIA technologies already power 23 of the top 30 systems on the latest Green500 list.

Among the highlights: the Flatiron Institute in New York City topped the Green500 list of most efficient supercomputers with an air-cooled ThinkSystem built by Lenovo featuring NVIDIA Hopper H100 GPUs.

The supercomputer, dubbed Henri, produces 65 billion double-precision floating-point operations per second per watt, according to the Green500, and will be used to tackle problems in computational astrophysics, biology, mathematics, neuroscience and quantum physics.

The NVIDIA H100 Tensor Core GPU, based on the NVIDIA Hopper GPU architecture, has up to 6x more AI performance and up to 3x more HPC performance compared to the prior-generation A100 GPU. It’s designed to perform with incredible efficiency. Its second-generation Multi-Instance GPU technology can partition the GPU into smaller compute units, dramatically boosting the number of GPU clients available to data center users.

And the show floor at this year’s SC22 conference is packed with new systems featuring NVIDIA’s latest technologies from ASUS, Atos, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, Lenovo, QCT and Supermicro.

The fastest new computer on the TOP500 list, Leonardo, hosted and managed by the nonprofit consortium Cineca and powered by nearly 14,000 NVIDIA A100 GPUs, took the No. 4 spot while also ranking as the 12th most energy-efficient system.

The latest TOP500 list boasts the highest number of NVIDIA technologies so far.

In total, NVIDIA technologies power 361 of the systems on the TOP500 list, including 90% of the new systems.

The Next-Generation Accelerated Data Center

NVIDIA is also developing new computing architectures to deliver even greater energy efficiency and performance to the accelerated data center.

The Grace CPU and Grace Hopper Superchips, announced earlier this year, will provide the next big boost in the energy efficiency of the NVIDIA accelerated computing platform. The Grace CPU Superchip delivers up to twice the performance per watt of a traditional CPU, thanks to the incredible efficiency of the Grace CPU and low-power LPDDR5X memory.

Assuming a 1-megawatt HPC data center with 20% of its power allocated to the CPU partition and 80% to the accelerated portion using Grace and Grace Hopper, data centers can get 1.8x more work done for the same power budget compared to a similarly partitioned x86-based data center.
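Reading that 1.8x as a power-weighted average of per-partition gains, a few lines of arithmetic show what it implies for the accelerated side (the solved-for value is an inference from the stated numbers, not a published spec):

```python
cpu_share, acc_share = 0.20, 0.80   # power split stated above
overall_gain = 1.8                  # work per watt vs. x86-based data center
cpu_gain = 2.0                      # Grace CPU Superchip perf/W vs. traditional CPU

# overall = cpu_share * cpu_gain + acc_share * acc_gain -> solve for acc_gain
acc_gain = (overall_gain - cpu_share * cpu_gain) / acc_share
print(f"implied accelerated-partition gain: {acc_gain:.2f}x")   # 1.75x
```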

DPUs Driving Additional Efficiency Gains

Along with Grace and Grace Hopper, NVIDIA networking technology is supercharging cloud-native supercomputing just as the increased usage of simulations is accelerating demand for supercomputing services.

Based on NVIDIA’s BlueField-3 DPU, the NVIDIA Quantum-2 InfiniBand platform delivers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers.

A recent whitepaper describes how DPUs can be used to offload and accelerate networking, security, storage and other infrastructure functions and control-plane applications, reducing server power consumption by up to 30%.

These power savings increase as server load grows, and can easily add up to $5 million in electricity costs for a large data center with 10,000 servers over the servers’ three-year lifespan, plus additional savings in cooling, power delivery, rack space and server capital costs.
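Those savings are easy to sanity-check. Assuming a 650-watt average server draw and electricity at $0.10 per kilowatt-hour (both assumptions for illustration, not figures from the whitepaper), a 30% reduction across 10,000 servers lands right around $5 million over three years:

```python
servers = 10_000
avg_server_watts = 650.0        # assumption: average per-server draw
savings_fraction = 0.30         # up to 30%, per the whitepaper
price_per_kwh = 0.10            # assumption: electricity price
hours = 3 * 365 * 24            # three-year lifespan

saved_kwh = servers * avg_server_watts * savings_fraction * hours / 1_000
print(f"saved: {saved_kwh:,.0f} kWh -> ${saved_kwh * price_per_kwh:,.0f}")
```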

Accelerated computing with DPUs for networking, security and storage jobs is one of the next big steps for making data centers more power efficient.

More With Less

Breakthroughs like these come as the scientific method is rapidly transforming into an approach driven by data analytics, AI and physics-based simulation, making more efficient computers key to the next generation of scientific breakthroughs.

By providing researchers with a multi-discipline, high-performance computing platform optimized for this new approach — and able to deliver both performance and efficiency — NVIDIA gives scientists an instrument to make critical discoveries that will benefit us all.


Speaking the Language of the Genome: Gordon Bell Finalist Applies Large Language Models to Predict New COVID Variants

A finalist for the Gordon Bell special prize for high performance computing-based COVID-19 research has taught large language models (LLMs) a new lingo — gene sequences — that can unlock insights in genomics, epidemiology and protein engineering.

Published in October, the groundbreaking work is a collaboration by more than two dozen academic and commercial researchers from Argonne National Laboratory, NVIDIA, the University of Chicago and others.

The research team trained an LLM to track genetic mutations and predict variants of concern in SARS-CoV-2, the virus behind COVID-19. While most LLMs applied to biology to date have been trained on datasets of small molecules or proteins, this project is one of the first to train a model on raw nucleotide sequences — the smallest units of DNA and RNA.

“We hypothesized that moving from protein-level to gene-level data might help us build better models to understand COVID variants,” said Arvind Ramanathan, computational biologist at Argonne, who led the project. “By training our model to track the entire genome and all the changes that appear in its evolution, we can make better predictions about not just COVID, but any disease with enough genomic data.”

The Gordon Bell awards, regarded as the Nobel Prize of high performance computing, will be presented at this week’s SC22 conference by the Association for Computing Machinery, which represents around 100,000 computing experts worldwide. Since 2020, the group has awarded a special prize for outstanding research that advances the understanding of COVID with HPC.

Training LLMs on a Four-Letter Language

LLMs have long been trained on human languages, which usually comprise a couple dozen letters that can be arranged into tens of thousands of words, and joined together into longer sentences and paragraphs. The language of biology, on the other hand, has only four letters representing nucleotides — A, T, G and C in DNA, or A, U, G and C in RNA — arranged into different sequences as genes.

While fewer letters may seem like a simpler challenge for AI, language models for biology are actually far more complicated. That’s because the genome — made up of over 3 billion nucleotides in humans, and about 30,000 nucleotides in coronaviruses — is difficult to break down into distinct, meaningful units.

“When it comes to understanding the code of life, a major challenge is that the sequencing information in the genome is quite vast,” Ramanathan said. “The meaning of a nucleotide sequence can be affected by another sequence that’s much further away than the next sentence or paragraph would be in human text. It could reach over the equivalent of chapters in a book.”

NVIDIA collaborators on the project designed a hierarchical diffusion method that enabled the LLM to treat long strings of around 1,500 nucleotides as if they were sentences.

“Standard language models have trouble generating coherent long sequences and learning the underlying distribution of different variants,” said paper co-author Anima Anandkumar, senior director of AI research at NVIDIA and Bren professor in the computing + mathematical sciences department at Caltech. “We developed a diffusion model that operates at a higher level of detail that allows us to generate realistic variants and capture better statistics.”
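To see what that preprocessing step might look like, here's a toy sketch that slices a raw nucleotide string into fixed-length "sentences" and tokenizes one into overlapping k-mers. The window size matches the roughly 1,500-nucleotide figure above; everything else is a simplified stand-in for the team's actual tokenization:

```python
def to_sentences(genome: str, window: int = 1500) -> list[str]:
    """Slice a raw nucleotide string into fixed-length 'sentences'."""
    return [genome[i : i + window] for i in range(0, len(genome), window)]

def tokenize(sentence: str, k: int = 3) -> list[str]:
    """Toy k-mer tokenizer: overlapping 3-letter 'words', like codons."""
    return [sentence[i : i + k] for i in range(len(sentence) - k + 1)]

# A SARS-CoV-2 genome is ~30,000 nucleotides -> about 20 'sentences'.
genome = "ATGC" * 7_500   # placeholder sequence, not real viral data
sentences = to_sentences(genome)
print(len(sentences), "sentences;", len(tokenize(sentences[0])), "tokens in the first")
```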

Predicting COVID Variants of Concern

Using open-source data from the Bacterial and Viral Bioinformatics Resource Center, the team first pretrained its LLM on more than 110 million gene sequences from prokaryotes, which are single-celled organisms like bacteria. It then fine-tuned the model using 1.5 million high-quality genome sequences for the COVID virus.

By pretraining on a broader dataset, the researchers also ensured their model could generalize to other prediction tasks in future projects — making it one of the first whole-genome-scale models with this capability.

Once fine-tuned on COVID data, the LLM was able to distinguish between genome sequences of the virus’ variants. It was also able to generate its own nucleotide sequences, predicting potential mutations of the COVID genome that could help scientists anticipate future variants of concern.

Trained on a year’s worth of SARS-CoV-2 genome data, the model can infer the distinction between various viral strains. Each dot on the left corresponds to a sequenced SARS-CoV-2 viral strain, color-coded by variant. The figure on the right zooms into one particular strain of the virus, which captures evolutionary couplings across the viral proteins specific to this strain. Image courtesy of Argonne National Laboratory’s Bharat Kale, Max Zvyagin and Michael E. Papka.

“Most researchers have been tracking mutations in the spike protein of the COVID virus, specifically the domain that binds with human cells,” Ramanathan said. “But there are other proteins in the viral genome that go through frequent mutations and are important to understand.”

The model could also integrate with popular protein-structure-prediction models like AlphaFold and OpenFold, the paper stated, helping researchers simulate viral structure and study how genetic mutations impact a virus’ ability to infect its host. OpenFold is one of the pretrained language models included in the NVIDIA BioNeMo LLM service for developers applying LLMs to digital biology and chemistry applications.

Supercharging AI Training With GPU-Accelerated Supercomputers

The team developed its AI models on supercomputers powered by NVIDIA A100 Tensor Core GPUs — including Argonne’s Polaris, the U.S. Department of Energy’s Perlmutter, and NVIDIA’s in-house Selene system. By scaling up to these powerful systems, they achieved performance of more than 1,500 exaflops in training runs, creating the largest biological language models to date.

“We’re working with models today that have up to 25 billion parameters, and we expect this to significantly increase in the future,” said Ramanathan. “The model size, the genetic sequence lengths and the amount of training data needed means we really need the computational complexity provided by supercomputers with thousands of GPUs.”

The researchers estimate that training a version of their model with 2.5 billion parameters took over a month on around 4,000 GPUs. The team, which was already investigating LLMs for biology, spent about four months on the project before publicly releasing the paper and code. The GitHub page includes instructions for other researchers to run the model on Polaris and Perlmutter.

The NVIDIA BioNeMo framework, available in early access on the NVIDIA NGC hub for GPU-optimized software, supports researchers scaling large biomolecular language models across multiple GPUs. Part of the NVIDIA Clara Discovery collection of drug discovery tools, the framework will support chemistry, protein, DNA and RNA data formats.

Find NVIDIA at SC22.

Image at top represents COVID strains sequenced by the researchers’ LLM. Each dot is color-coded by COVID variant. Image courtesy of Argonne National Laboratory’s Bharat Kale, Max Zvyagin and Michael E. Papka.
