A Force to Be Reckoned With: Lucid Group Reveals Gravity SUV, Built on NVIDIA DRIVE

Meet the electric SUV with magnetic appeal.

Lucid Group unveiled its next act, the Gravity SUV, during the AutoMobility Los Angeles auto show. The automaker also launched additional versions of the hit Lucid Air sedan — Air Pure and Air Touring.

Both models offer the future-ready DreamDrive Pro driver-assistance system, powered by the NVIDIA DRIVE platform.

Lucid launched the Air late last year to widespread acclaim. The luxury sedan won MotorTrend’s Car of the Year for 2022, with a chart-topping battery range of up to 516 miles and fast charging.

The newly introduced variants provide updated features for a wider audience. Air Pure is designed for agility, with a lightweight, compact battery and industry-leading aerodynamics.

Air Touring is the heart of the lineup, featuring more horsepower and battery range than the Pure and greater flexibility in customer options.

Lucid Air Pure

Gravity builds on this stellar reputation with an aerodynamic, spacious and intelligent design, all backed by the high-performance, centralized compute of NVIDIA DRIVE.

“Just as Lucid Air redefined the sedan category, so too will Gravity impact the world of luxury SUVs, setting new benchmarks across the board,” said Lucid Group CEO and CTO Peter Rawlinson.

Capable and Enjoyable

DreamDrive Pro is software-defined, continuously improving via over-the-air software updates.

It uses a rich suite of 14 cameras, one lidar, five radars and 12 ultrasonics running on NVIDIA DRIVE for robust automated driving and intelligent cockpit features, including surround-view monitoring, blind-spot display and highway assist.

In addition to a diversity of sensors, Lucid’s dual-rail power system and proprietary Ethernet Ring offer a high degree of redundancy for key systems, such as braking and steering.

The DreamDrive Pro system uses an array of sensors and NVIDIA DRIVE high-performance compute for intelligent driving features.

“The Lucid Air is at its core a software-defined vehicle, meaning a large part of the experience is delivered by the software,” Rawlinson said. “This makes the Lucid Air more capable and enjoyable with every passing update.”

Prepare to Launch

These new Lucid vehicles are nearly ready for liftoff.

The Lucid Air Touring has already begun production, and Air Pure will start in December, with customer deliveries soon to follow.

The automaker will open reservations for the Lucid Gravity in the spring, slating deliveries to begin in 2024.

The post A Force to Be Reckoned With: Lucid Group Reveals Gravity SUV, Built on NVIDIA DRIVE appeared first on NVIDIA Blog.

MoMA Installation Marks Breakthrough for AI Art

AI-generated art has arrived.

With a presentation making its debut this week at The Museum of Modern Art in New York City — perhaps the world’s premier institution devoted to modern and contemporary art — the AI technologies that have upended trillion-dollar industries worldwide over the past decade will get a formal introduction.

Created by pioneering artist Refik Anadol, the installation in the museum’s soaring Gund Lobby uses a sophisticated machine-learning model to interpret the publicly available visual and informational data of MoMA’s collection.

“Right now, we are in a renaissance,” Anadol said of the presentation “Refik Anadol: Unsupervised.” “Having AI in the medium is completely and profoundly changing the profession.”

Anadol is a digital media pioneer. Throughout his career, he’s been intrigued by the intersection between art and AI. His first encounter with AI as an artistic tool was at Google, where he used deep learning — and an NVIDIA GeForce GTX 1080 Ti — to create dynamic digital artworks.

In 2017, he started working with one of the first generative AI tools, StyleGAN, created at NVIDIA Research, which was able to generate synthetic images of faces that are incredibly realistic.

Anadol was more intrigued by the ability to use the tool to explore more abstract images, training StyleGAN not on images of faces, but of modern art, and guiding the AI’s synthesis using data streaming in from optical, temperature and acoustic sensors.

Digging Deep With MoMA

Those ideas led him to an online collaboration with The Museum of Modern Art in 2021, exhibited on the Feral File platform and drawing on more than 138,000 records from the museum’s publicly available archive. The exhibit caused an online sensation, reimagining art in real time and inspiring the wave of AI-generated art that has spread quickly this year through social media communities on Instagram, Twitter, Discord and Reddit.

This year, he returned to MoMA to dig even deeper, collaborating again with MoMA curators Michelle Kuo and Paola Antonelli on a new major installation. On view from Nov. 19 through March 5, 2023, “Refik Anadol: Unsupervised” will use AI to interpret and transform more than 200 years of art from MoMA’s collection.

“It’s an exploration not just of the world’s foremost collection of modern art — pretty much every single pioneering sculptor, painter and even game designer of the past two centuries — but also a look inside the mind of AI, allowing us to see the results of the algorithm processing data from MoMA’s collection, as well as ambient sound, temperature and light, and ‘dreaming,’” Anadol said.

Powering the system is a full suite of NVIDIA technologies. He relies on an NVIDIA DGX server equipped with NVIDIA A100 Tensor Core GPUs to train the model in real time. Another machine equipped with an NVIDIA RTX 4090 GPU translates the model into computer graphics, driving the exhibit’s display.

‘Bending Data’

“Refik is bending data — which we normally associate with rational systems — into a realm of surrealism and irrationality,” Michelle Kuo, the exhibit’s curator at the museum, told the New York Times. “His interpretation of MoMA’s dataset is essentially a transformation of the history of modern art.”

The installation comes amid a wave of excitement around generative AI, a technology that’s been put at the fingertips of amateur and professional artists alike with new tools such as Midjourney, OpenAI’s DALL·E and DreamStudio.

And while Anadol’s work intersects with the surge of interest in NFT art that had the world buzzing in 2021, it, like AI-generated art as a whole, goes far beyond that trend.

Inspired by Cutting-Edge Research

Anadol’s work digs deep into MoMA’s archives and cutting-edge AI, relying on a technology developed at NVIDIA Research called StyleGAN. David Luebke, vice president of graphics research at NVIDIA, said he first got excited about generative AI’s artistic and creative possibilities when he saw NVIDIA researcher Janne Hellsten’s demo of StyleGAN2 trained on stylized artistic portraits.

“Suddenly, one could fluidly explore the content and style of a generated image or have it react to ambient effects like sound or even weather,” Luebke said.

NVIDIA Research has been pushing forward the state of the art in generative AI since at least 2017, when NVIDIA developed “Progressive GANs,” which used AI to synthesize highly realistic, high-resolution images of human faces for the first time. This was followed by StyleGAN, which achieved even higher quality results.

Each year after that, NVIDIA released a paper that advanced the state of the art. StyleGAN has proved to be a versatile platform, Luebke explained, enabling countless other researchers and artists like Anadol to bring their ideas to life.

Democratizing Content Creation

Much more is coming. Modern generative AI models can generalize beyond particular subjects, such as images of human faces, cats or cars, and incorporate language models that let users specify the image they want in natural language or through other intuitive means, such as inpainting, Luebke explained.

“This is exciting because it democratizes content creation,” Luebke said. “Ultimately, generative AI has the potential to unlock the creativity of everybody from professional artists, like Refik, to hobbyists and casual artists, to school kids.”

Anadol’s work at MoMA offers a taste of what’s possible. “Refik Anadol: Unsupervised,” the artist’s first U.S. solo museum presentation, features three new digital artworks by the Los Angeles-based artist that use AI to dynamically explore MoMA’s collection on a vast 24-by-24-foot digital display. It’s as much a work of architecture as it is one of art.

“Often, AI is used to classify, process and generate realistic representations of the world,” the exhibition’s organizer, Michelle Kuo, told Archinect, a leading publication covering contemporary art and architecture. “Anadol’s work, by contrast, is visionary: it explores dreams, hallucination and irrationality, posing an alternate understanding of modern art — and of artmaking itself.”

“Refik Anadol: Unsupervised” also hints at how AI will transform our future, and Anadol thinks it will be for the better. “This will just enhance our imagination,” Anadol said. “I’m seeing this as an extension of our minds.”

For more, see our exploration of Refik Anadol’s work in NVIDIA’s AI Art Gallery.

Get the Big Picture: Stream GeForce NOW in 4K Resolution on Samsung Smart TVs

Gaming in the living room is getting an upgrade with GeForce NOW.

This GFN Thursday, kick off the weekend streaming GeForce NOW on Samsung TVs, with upcoming support for 4K resolution.

Get started with the 10 new titles streaming this week.

Plus, Yes by YTL Communications, a leading 5G provider in Malaysia, today announced it will soon bring GeForce NOW powered by Yes to gamers across the country. Stay tuned for more updates.

Go Big, Go Bold With 4K on Samsung Smart TVs

GeForce NOW is making its way to 2021 Samsung Smart TV models, and is already available through the Samsung Gaming Hub on 2022 Samsung TVs, so more players than ever can stream from GeForce NOW — no downloads, storage limits or console required.

Samsung Gaming Hub
Get tuned in to the cloud just in time for these TV streaming updates.

Even better, gaming on Samsung Smart TVs will look pixel perfect in 4K resolution. 2022 Samsung TVs and select 2021 Samsung TVs will be capable of streaming in 4K, as Samsung’s leadership in game-streaming technology and AI upscaling optimizes picture quality and the entire gaming experience.

The new TV firmware will start rolling out at the end of the month, enabling 4K resolution for Samsung Smart TV streamers with an RTX 3080 membership. RTX 3080 members will be able to stream up to 4K natively on Samsung Smart TVs for the first time, as well as get maximized eight-hour gaming sessions and dedicated RTX 3080 servers.

Here to Play Today

GFN Thursday delivers new games to the cloud every week. Jump into 10 new additions streaming today.

Warhammer 40,000: Darktide
Delve deep into the industrial city of Tertium to combat the forces of Chaos that lurk.

Gamers who’ve preordered Warhammer 40,000: Darktide can leap thousands of years into the future a little early. Take back the city of Tertium from hordes of bloodthirsty foes in this intense, brutal action shooter streaming the Pre-Order Beta on Steam.

Members can also look for the following titles:

  • Ballads of Hongye (New release on Steam)
  • Bravery and Greed (New release on Steam)
  • TERRACOTTA (New release on Steam and Epic Games)
  • Warhammer 40,000: Darktide (New release pre-order beta access on Steam)
  • Frozen Flame (New release on Steam, Nov. 17)
  • Goat Simulator 3 (New release on Epic Games, Nov. 17)
  • Nobody — The Turnaround (New release on Steam, Nov. 17)
  • Caveblazers (Steam)
  • The Darkest Tales (Epic Games)
  • The Tenants (Epic Games)

Then jump into the new season of Rumbleverse, the free-to-play, 40-person Brawler Royale where anyone can be a champion. Take a trip on the expanded map to Low Key Key Island, master new power moves like “Jagged Edge” and earn new gear to show off your style.

And from now until Sunday, Nov. 20, snag a special upgrade to a six-month Priority Membership for just $29.99 — 40% off the standard price of $49.99. Bring a buddy to battle with you by getting them a GeForce NOW gift card.

Before you power up to play this weekend, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.

Lockheed Martin, NVIDIA to Help US Speed Climate Data to Researchers

The U.S. National Oceanic and Atmospheric Administration has selected Lockheed Martin and NVIDIA to build a prototype system to accelerate outputs of Earth Environment Monitoring and their corresponding visualizations.

Using AI techniques, such a system has the potential to cut the time needed to generate complex weather visualizations by an order of magnitude.

The first-of-its-kind project for a U.S. federal agency, the Global Earth Observation Digital Twin, or EODT, will provide a prototype to visualize terabytes of geophysical data from the land, ocean, cryosphere, atmosphere and space.

By fusing data from a broad variety of sensor sources, the system will be able to deliver information that’s not just up to date, but that decision-makers have confidence in, explained Lockheed Martin Space Senior Research Scientist Lynn Montgomery.

“We’re providing a one-stop shop for researchers, and for next-generation systems, not only for current, but for recent past environmental data,” Montgomery said. “Our collaboration with NVIDIA will provide NOAA a timely, global visualization of their massive datasets.”

Building on NVIDIA Omniverse

Building on NVIDIA Omniverse, the system has the potential to serve as a clearinghouse for scientists and researchers from a broad range of government agencies, one that can be extended over time to support a wide range of applications.

The support for the EODT pilot project is one of several initiatives at NVIDIA to develop tools and technologies for large-scale, even planetary simulations.

Last November, NVIDIA announced it will build a supercomputer, called Earth-2, devoted to predicting climate change by creating a digital twin of the planet.

NVIDIA and Lockheed Martin announced last year that they are working with the U.S. Department of Agriculture Forest Service and Colorado Division of Fire Prevention & Control to use AI and digital-twin simulation to better understand wildfires and stop their spread.

And in March, NVIDIA announced an accelerated digital twins platform for scientific computing consisting of the NVIDIA Modulus AI framework for developing physics-ML neural network models and the NVIDIA Omniverse 3D virtual-world simulation platform.

The EODT project builds on these initiatives, relying on NVIDIA Omniverse Nucleus to allow different applications to quickly import and export custom, visualizable assets to and from the effort’s central data store.

“This is a blueprint for a complex system using Omniverse, where we will have a fusion of sensor data, architectural data and AI inferred data all combined with various visualization capacities deployed to the cloud and various workstations,” said Peter Messmer, senior manager in the HPC Developer Technology group at NVIDIA. “It’s a fantastic opportunity to highlight all these components with a real-world example.”

A Fast-Moving Effort

The effort will move fast, with a demonstration of the system’s ability to visualize sea surface temperature data slated for next September. The system will take advantage of GPU computing instances from Amazon Web Services and NVIDIA DGX and OVX servers on premises.

The fast, flexible system will provide a prototype for visualizing geophysical variables drawn from a broad range of NOAA satellite and ground-based data sources.

These include temperature and moisture profiles, sea surface temperatures, sea ice concentrations and solar wind data, among others.

That data will be collected by Lockheed Martin’s OpenRosetta3D software, which is widely used for sophisticated large-scale image analysis, workflow orchestration and sensor fusion by government agencies, such as NASA, and private industry.

NVIDIA will support the development of one-way connectors to import “snapshots” of processed geospatial datasets from Lockheed’s OpenRosetta3D technology into NVIDIA Omniverse Nucleus as Universal Scene Description inputs.

USD is an open-source, extensible ecosystem for describing, composing, simulating and collaborating within 3D worlds, originally developed by Pixar Animation Studios.

Omniverse Nucleus will be vital to making the data available fast, in part because of Nucleus’s ability to relay just what’s changed in a dataset, Montgomery explained.

Nucleus will, in turn, deliver those USD datasets to Lockheed’s Agatha 3D viewer, based on Unity, allowing users to quickly see data from multiple sensors on an interactive 3D earth and space platform.

The result is a system that will help researchers at NOAA, and, eventually, elsewhere, make decisions faster based on the latest available data.

GeForce RTX 4080 GPU Launches, Unlocking 1.6x Performance for Creators This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Content creators can now pick up the GeForce RTX 4080 GPU, available from top add-in card providers including ASUS, Colorful, Gainward, Galaxy, GIGABYTE, INNO3D, MSI, Palit, PNY and ZOTAC, as well as from system integrators and builders worldwide.

Talented filmmaker Casey Faris and his team at Release the Hounds! Studio step In the NVIDIA Studio this week to share their short, sci-fi-inspired film, Tuesday on Earth.

In addition, the November Studio Driver is ready for download to enhance existing creative app features, reduce repetitive tasks and speed up creative ones.

Plus, the NVIDIA Studio #WinterArtChallenge is underway — check out some featured artists at the end of this post.

Beyond Fast — GeForce RTX 4080 GPU Now Available

The new GeForce RTX 4080 GPU brings a massive boost in performance of up to 1.6x compared to the GeForce RTX 3080 Ti GPU, thanks to third-generation RT Cores, fourth-generation Tensor Cores, eighth-generation dual AV1 encoders and 16GB memory — plenty to edit up to 12K RAW video files or large 3D scenes.

The new GeForce RTX 4080 GPU.

3D artists can now work with accurate and realistic lighting, physics and materials while creating 3D scenes — all in real time, without proxies. DLSS 3, now available in the NVIDIA Omniverse beta, uses RTX Tensor Cores and the new Optical Flow Accelerator to generate additional frames and dramatically increase frames per second (FPS). This improves smoothness in the viewport. Unity and Unreal Engine 5 will soon release updated versions with DLSS 3.

Video and livestreaming creative workflows are also accelerated by the new AV1 encoder, with 40% increased encoding efficiency, unlocking higher resolutions and crisper image quality. AV1 is integrated in OBS Studio, DaVinci Resolve and Adobe Premiere Pro, the latter through the Voukoder plug-in.

The new dual encoders capture up to 8K resolution at 60 FPS in real time via GeForce Experience and OBS Studio, and cut video export times nearly in half. Popular video-editing apps have released updates to enable this setting, including Adobe Premiere Pro (via the popular Voukoder plug-in) and Jianying Pro — China’s top video-editing app. Blackmagic Design’s DaVinci Resolve and MAGIX Vegas Pro also added dual-encoder support this week.

State-of-the-art AI technology — including AI image generators and new editing tools in DaVinci Resolve and Adobe apps like Photoshop and Premiere Pro — is taking creators to the next level. It allows them to brainstorm concepts quickly, helps them easily apply advanced effects, and removes their tedious, repetitive tasks. Fourth-gen Tensor Cores found on GeForce RTX 40 Series GPUs help speed all of these AI tools, delivering up to a 2x increase in performance over the previous generation.

Expand creative possibilities and pick up the GeForce RTX 4080 GPU today. Check out this product finder for retail availability and visit GeForce.com for further information.

Another Tuesday on Earth

Filmmaker Casey Faris and the team at Release the Hounds! Studio love science fiction. Their short film Tuesday on Earth is an homage to their favorite childhood sci-fi flicks, including E.T. the Extra-Terrestrial, Men in Black and Critters.

It was challenging to “create something that felt epic, but wasn’t way too big of a story to fit in a couple of minutes,” Faris said.

Preproduction was mostly done with rough sketches on an iPad using the popular digital-illustration app Procreate. Next, the team filmed all the sequences. “We spent many hours out in the forest getting eaten by mosquitos, lots of time locked in a tiny bathroom and way too many lunch breaks at the secondhand store buying spaceship parts,” joked Faris.

Are you seeing what we’re seeing? Motion blur effects applied faster with RTX GPU acceleration.

All 4K footage was copied to Blackmagic Design’s DaVinci Resolve 18 through the Hedge app that runs checksums, ensuring the video files are properly transferred and quickly generating backup footage.

“NVIDIA is the obvious choice if you talk to any creative professional. It’s never a question whether we get an NVIDIA GPU — just which one we get.” — filmmaker Casey Faris

Faris specializes in DaVinci Resolve because of its versatility. “We can do just about anything in one app, on one timeline,” he said. “This makes it really easy to iterate on our comps, re-edits and sound-mixing adjustments — all of it’s no big deal as it’s all living together.”

DaVinci Resolve is powerful, professional-grade software that relies heavily on GPU acceleration to get the job done. Faris’ GeForce RTX 3070-powered system was up to the task.

His RTX GPU afforded NVIDIA Studio benefits within DaVinci Resolve software. The RTX-accelerated hardware encoder and decoder sped up video transcoding, enabling Faris to edit faster.

Footage adjustments and movement within the timeline was seamless, with virtually no slowdown, resulting in more efficient video-bay sessions.

Even color grading was sped up due to his RTX GPU, he said.

Color grade faster with NVIDIA and GeForce RTX GPUs in DaVinci Resolve.

AI-powered features accelerated by Faris’ GeForce RTX GPU played a critical role.

The Detect Scene Cuts feature, optimized by RTX GPUs, quickly detected and tagged cuts in video files, eliminating the painstakingly long scrubbing sessions otherwise needed for manual edits, a boon for Faris’ efficiency.

To add special effects, Faris worked within the RTX GPU-accelerated Fusion page in DaVinci Resolve, a node-based workflow with hundreds of 2D and 3D tools for creating true Hollywood-caliber effects. Visual effects for blockbusters like The Hunger Games and Marvel’s The Avengers were created in Fusion.

Faris used Object Mask Tracking, powered by the DaVinci Neural Engine, to intuitively isolate subjects, all with simple paint strokes. This made it much easier to mask the male hero and apply that vibrant purple hue in the background. With the new GeForce RTX 40 Series GPUs, this feature is 70% faster than with the previous generation.

“Automatic Depth Map” powered by AI in DaVinci Resolve.

In addition, Faris used the Automatic Depth Map AI feature to instantly generate a 3D depth matte of a scene, quickly grading the foreground from the background. Then, he changed the mood of the home-arrival scene by adding environmental fog effects. Various scenes mimicked the characteristics of different high-quality lenses by adding blur or depth of field to further enhance shots.

3D animations in Blender.

Even when moving to Blender Cycles for the computer-generated imagery, RTX-accelerated OptiX ray tracing in the viewport enabled Faris to craft 3D assets with smooth, interactive movement in photorealistic detail.

Faris is thankful to be able to share these adventures with the world. “It’s cool to teach people to be creative and make their own awesome stuff,” he added. “That’s what I like the most. We can make something cool, but it’s even better if it inspires others.”

Filmmaker Casey Faris.

Faris recently acquired the new GeForce RTX 4080 GPU to further accelerate his video editing workflows.

Get his thoughts in the video above and check out Faris’ YouTube channel.

Join the #WinterArtChallenge

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Like @RippaSats, whose celestial rendering Mystic Arctic stirs the hearts and spirits of many.

Or @CrocodilePower and her animation Reflection, which delivers more than meets the eye.

And be sure to tag #WinterArtChallenge to join.

Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.

Attention, Sports Fans! WSC Sports’ Amos Berkovich on How AI Keeps the Highlights Coming

It doesn’t matter if you love hockey, basketball or soccer. Thanks to the internet, there’s never been a better time to be a sports fan. 

But editing together so many social media clips, long-form YouTube highlights and other videos from global sporting events is no easy feat. So how are all of these craveable video packages made? 

Auto-magical video solutions help. And by auto-magical, of course, we mean powered by AI.

On this episode of the AI Podcast, host Noah Kravitz spoke with Amos Berkovich, algorithm group leader at WSC Sports, maker of an AI cloud platform that enables over 200 sports organizations worldwide to generate personalized and customized sports videos automatically and in real time.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species With NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

Going Green: New Generation of NVIDIA-Powered Systems Show Way Forward

With the end of Moore’s law, traditional approaches to meet the insatiable demand for increased computing performance will require disproportionate increases in costs and power.

At the same time, the need to slow the effects of climate change will require more efficient data centers, which already consume more than 200 terawatt-hours of energy each year, or around 2% of the world’s energy usage.

Released today, the new Green500 list of the world’s most-efficient supercomputers demonstrates the energy efficiency of accelerated computing, which is already used in all of the top 30 systems on the list. Its impact on energy efficiency is staggering.

We estimate the TOP500 systems require more than 5 terawatt-hours of energy per year, or $750 million worth of energy, to operate.

But that could be slashed by more than 80% to just $150 million — saving 4 terawatt-hours of energy — if these systems were as efficient as the 30 greenest systems on the TOP500 list.

Alternatively, with the same power budget as today’s TOP500 systems and the efficiency of the top 30 systems, these supercomputers could deliver 5x today’s performance.
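The figures above fit together as a simple back-of-envelope calculation. The sketch below combines only the numbers stated in this post; the implied electricity price of roughly $0.15/kWh is derived from those figures, not stated explicitly:

```python
# Sanity-check the Green500 energy and cost figures quoted above.
top500_energy_twh = 5.0   # estimated annual energy use of TOP500 systems
top500_cost_usd = 750e6   # stated annual energy cost ($750 million)

# Implied electricity price (1 TWh = 1e9 kWh).
price_per_kwh = top500_cost_usd / (top500_energy_twh * 1e9)

# If every system matched the 30 greenest, cost drops by more than 80%.
savings_fraction = 0.80
efficient_cost_usd = top500_cost_usd * (1 - savings_fraction)   # ~$150 million
energy_saved_twh = top500_energy_twh * savings_fraction         # ~4 TWh

print(round(price_per_kwh, 2))        # implied ~$0.15 per kWh
print(round(efficient_cost_usd / 1e6))  # ~150 (million dollars)
print(energy_saved_twh)               # ~4.0 TWh saved
```

The same 80% efficiency gap, held at a fixed power budget instead of a fixed workload, is what yields the 5x performance headroom mentioned above.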

And the efficiency gains highlighted by the latest Green500 systems are just the start. NVIDIA is racing to deliver continuous energy improvements across its CPUs, GPUs, software and systems portfolio.

Hopper’s Green500 Debut

NVIDIA technologies already power 23 of the top 30 systems on the latest Green500 list.

Among the highlights: the Flatiron Institute in New York City topped the Green500 list of most efficient supercomputers with an air-cooled ThinkSystem built by Lenovo featuring NVIDIA Hopper H100 GPUs.

The supercomputer, dubbed Henri, delivers 65 billion double-precision floating-point operations per second per watt (65 gigaflops/watt), according to the Green500, and will be used to tackle problems in computational astrophysics, biology, mathematics, neuroscience and quantum physics.

The NVIDIA H100 Tensor Core GPU, based on the NVIDIA Hopper GPU architecture, has up to 6x more AI performance and up to 3x more HPC performance compared to the prior-generation A100 GPU. It’s designed to perform with incredible efficiency. Its second-generation Multi-Instance GPU technology can partition the GPU into smaller compute units, dramatically boosting the number of GPU clients available to data center users.

And the show floor at this year’s SC22 conference is packed with new systems featuring NVIDIA’s latest technologies from ASUS, Atos, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, Lenovo, QCT and Supermicro.

The fastest new computer on the TOP500 list, Leonardo, hosted and managed by the Cineca nonprofit consortium, and powered by nearly 14,000 NVIDIA A100 GPUs, took the No. 4 spot, while also being the 12th most energy-efficient system.

The latest TOP500 list boasts the highest number of NVIDIA technologies so far.

In total, NVIDIA technologies power 361 of the systems on the TOP500 list, including 90% of the new systems (see chart).

The Next-Generation Accelerated Data Center

NVIDIA is also developing new computing architectures to deliver even greater energy efficiency and performance to the accelerated data center.

The Grace CPU and Grace Hopper Superchips, announced earlier this year, will provide the next big boost in the energy efficiency of the NVIDIA accelerated computing platform. The Grace CPU Superchip delivers up to twice the performance per watt of a traditional CPU, thanks to the incredible efficiency of the Grace CPU and low-power LPDDR5X memory.

Assume a 1-megawatt HPC data center that allocates 20% of its power to the CPU partition and 80% to the accelerated portion. Using Grace and Grace Hopper, such a data center can get 1.8x more work done for the same power budget than a similarly partitioned x86-based data center.
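The 1.8x figure can be reproduced with back-of-the-envelope arithmetic. The 20/80 power split and the up-to-2x CPU perf-per-watt come from the text above; the 1.75x gain assumed for the accelerated partition is an illustrative value chosen so the numbers work out, not a published specification.

```python
# Sketch of the 1.8x more-work-per-watt claim. Values marked "assumed"
# are illustrative, not official NVIDIA figures.
TOTAL_POWER_MW = 1.0
CPU_SHARE, ACCEL_SHARE = 0.20, 0.80   # power split from the text

# Normalized work per watt, with the x86 baseline set to 1.0 everywhere.
baseline = CPU_SHARE * 1.0 + ACCEL_SHARE * 1.0

CPU_SPEEDUP = 2.0     # Grace CPU Superchip: up to 2x perf/watt (from the text)
ACCEL_SPEEDUP = 1.75  # assumed perf/watt gain for the Grace Hopper partition

grace = CPU_SHARE * CPU_SPEEDUP + ACCEL_SHARE * ACCEL_SPEEDUP

print(f"Relative work at the same power budget: {grace / baseline:.2f}x")
# 0.2 * 2.0 + 0.8 * 1.75 = 1.8
```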

DPUs Driving Additional Efficiency Gains

Along with Grace and Grace Hopper, NVIDIA networking technology is supercharging cloud-native supercomputing just as the increased usage of simulations is accelerating demand for supercomputing services.

Based on NVIDIA’s BlueField-3 DPU, the NVIDIA Quantum-2 InfiniBand platform delivers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers.

A recent whitepaper demonstrated how DPUs can be used to offload and accelerate networking, security, storage and other infrastructure functions and control-plane applications, reducing server power consumption by up to 30%.

These power savings grow as server load increases. For a large data center with 10,000 servers, they can easily add up to $5 million in electricity costs over the servers’ three-year lifespan, with additional savings in cooling, power delivery, rack space and server capital costs.
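A rough calculation shows how the $5 million figure can pencil out. The 10,000-server fleet, three-year lifespan and up-to-30% offload saving come from the text; the per-server power draw and the electricity price are assumptions made for illustration.

```python
# Back-of-the-envelope estimate of fleet-wide electricity savings from
# DPU offload. SERVER_WATTS and PRICE_PER_KWH are assumed values.
SERVERS = 10_000
YEARS = 3
HOURS = YEARS * 365 * 24            # 26,280 hours of continuous operation

SERVER_WATTS = 650                  # assumed average draw per loaded server
SAVINGS_FRACTION = 0.30             # up to 30% of server power offloaded
PRICE_PER_KWH = 0.10                # assumed electricity price, $/kWh

watts_saved = SERVER_WATTS * SAVINGS_FRACTION          # 195 W per server
kwh_saved = SERVERS * watts_saved * HOURS / 1000       # fleet-wide kWh
dollars = kwh_saved * PRICE_PER_KWH

print(f"Estimated 3-year savings: ${dollars:,.0f}")    # ~ $5.1 million
```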

Accelerated computing with DPUs for networking, security and storage jobs is one of the next big steps for making data centers more power efficient.

More With Less

Breakthroughs like these come as the scientific method is rapidly transforming into an approach driven by data analytics, AI and physics-based simulation, making more efficient computers key to the next generation of scientific breakthroughs.

By providing researchers with a multi-discipline, high-performance computing platform optimized for this new approach — and able to deliver both performance and efficiency — NVIDIA gives scientists an instrument to make critical discoveries that will benefit us all.

The post Going Green: New Generation of NVIDIA-Powered Systems Show Way Forward appeared first on NVIDIA Blog.

Speaking the Language of the Genome: Gordon Bell Finalist Applies Large Language Models to Predict New COVID Variants

A finalist for the Gordon Bell special prize for high performance computing-based COVID-19 research has taught large language models (LLMs) a new lingo — gene sequences — that can unlock insights in genomics, epidemiology and protein engineering.

Published in October, the groundbreaking work is a collaboration by more than two dozen academic and commercial researchers from Argonne National Laboratory, NVIDIA, the University of Chicago and others.

The research team trained an LLM to track genetic mutations and predict variants of concern in SARS-CoV-2, the virus behind COVID-19. While most LLMs applied to biology to date have been trained on datasets of small molecules or proteins, this project is one of the first models trained on raw nucleotide sequences — the smallest units of DNA and RNA.

“We hypothesized that moving from protein-level to gene-level data might help us build better models to understand COVID variants,” said Arvind Ramanathan, computational biologist at Argonne, who led the project. “By training our model to track the entire genome and all the changes that appear in its evolution, we can make better predictions about not just COVID, but any disease with enough genomic data.”

The Gordon Bell awards, regarded as the Nobel Prize of high performance computing, will be presented at this week’s SC22 conference by the Association for Computing Machinery, which represents around 100,000 computing experts worldwide. Since 2020, the group has awarded a special prize for outstanding research that advances the understanding of COVID with HPC.

Training LLMs on a Four-Letter Language

LLMs have long been trained on human languages, which usually comprise a couple dozen letters that can be arranged into tens of thousands of words, and joined together into longer sentences and paragraphs. The language of biology, on the other hand, has only four letters representing nucleotides — A, T, G and C in DNA, or A, U, G and C in RNA — arranged into different sequences as genes.

While fewer letters may seem like a simpler challenge for AI, language models for biology are actually far more complicated. That’s because the genome — made up of over 3 billion nucleotides in humans, and about 30,000 nucleotides in coronaviruses — is difficult to break down into distinct, meaningful units.

“When it comes to understanding the code of life, a major challenge is that the sequencing information in the genome is quite vast,” Ramanathan said. “The meaning of a nucleotide sequence can be affected by another sequence that’s much further away than the next sentence or paragraph would be in human text. It could reach over the equivalent of chapters in a book.”
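Because the genome lacks natural word boundaries, genomic language models commonly tokenize sequences into overlapping k-mers, fixed-length substrings that play the role of words. The sketch below illustrates that general idea; the choice of k and the input fragment are for illustration only and are not the paper’s exact tokenization scheme.

```python
def kmer_tokenize(sequence: str, k: int = 3) -> list[str]:
    """Split a nucleotide sequence into overlapping k-mers ("words")."""
    sequence = sequence.upper()
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# A short RNA-like fragment using the four letters A, U, G and C.
tokens = kmer_tokenize("AUGGCCAUU", k=3)
print(tokens)  # ['AUG', 'UGG', 'GGC', 'GCC', 'CCA', 'CAU', 'AUU']
```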

NVIDIA collaborators on the project designed a hierarchical diffusion method that enabled the LLM to treat long strings of around 1,500 nucleotides as if they were sentences.
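The windowing idea can be sketched in a few lines. The fixed 1,500-nucleotide window comes from the text; the stand-in genome below is synthetic, and real pipelines would handle ragged final windows and overlaps that this sketch omits.

```python
def to_sentences(genome: str, window: int = 1500) -> list[str]:
    """Split a long nucleotide string into fixed-length windows ("sentences")."""
    return [genome[i:i + window] for i in range(0, len(genome), window)]

# A coronavirus-scale genome is roughly 30,000 nucleotides long.
genome = "ATGC" * 7_500               # synthetic 30,000-nt stand-in sequence
sentences = to_sentences(genome)

print(len(sentences), len(sentences[0]))  # 20 "sentences" of 1,500 nucleotides
```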

“Standard language models have trouble generating coherent long sequences and learning the underlying distribution of different variants,” said paper co-author Anima Anandkumar, senior director of AI research at NVIDIA and Bren professor in the computing + mathematical sciences department at Caltech. “We developed a diffusion model that operates at a higher level of detail that allows us to generate realistic variants and capture better statistics.”

Predicting COVID Variants of Concern

Using open-source data from the Bacterial and Viral Bioinformatics Resource Center, the team first pretrained its LLM on more than 110 million gene sequences from prokaryotes, which are single-celled organisms like bacteria. It then fine-tuned the model using 1.5 million high-quality genome sequences for the COVID virus.

By pretraining on a broader dataset, the researchers also ensured their model could generalize to other prediction tasks in future projects — making it one of the first whole-genome-scale models with this capability.
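The pretrain-then-fine-tune strategy can be illustrated with a toy stand-in for the LLM: "pretrain" a bigram counter on a broad corpus, then keep updating the same statistics on domain data. The sequences below are made up, and counting bigrams stands in for gradient updates purely for illustration.

```python
from collections import Counter

def count_bigrams(sequences, counts=None):
    """Accumulate bigram counts -- a toy stand-in for training an LLM."""
    counts = counts if counts is not None else Counter()
    for seq in sequences:
        counts.update(seq[i:i + 2] for i in range(len(seq) - 1))
    return counts

# Stage 1: "pretrain" on a broad corpus (stand-ins for prokaryote genes).
model = count_bigrams(["ATGGCGTAA", "ATGCCCTAG"])

# Stage 2: "fine-tune" the same statistics on domain data
# (made-up COVID-like sequences).
model = count_bigrams(["ATGTTTGTT", "ATGTTCGTT"], counts=model)

print(model.most_common(3))
```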

Once fine-tuned on COVID data, the LLM was able to distinguish between genome sequences of the virus’ variants. It was also able to generate its own nucleotide sequences, predicting potential mutations of the COVID genome that could help scientists anticipate future variants of concern.

Trained on a year’s worth of SARS-CoV-2 genome data, the model can infer the distinction between various viral strains. Each dot on the left corresponds to a sequenced SARS-CoV-2 viral strain, color-coded by variant. The figure on the right zooms into one particular strain of the virus, which captures evolutionary couplings across the viral proteins specific to this strain. Image courtesy of Argonne National Laboratory’s Bharat Kale, Max Zvyagin and Michael E. Papka. 

“Most researchers have been tracking mutations in the spike protein of the COVID virus, specifically the domain that binds with human cells,” Ramanathan said. “But there are other proteins in the viral genome that go through frequent mutations and are important to understand.”

The model could also integrate with popular protein-structure-prediction models like AlphaFold and OpenFold, the paper stated, helping researchers simulate viral structure and study how genetic mutations impact a virus’ ability to infect its host. OpenFold is one of the pretrained language models included in the NVIDIA BioNeMo LLM service for developers applying LLMs to digital biology and chemistry applications.

Supercharging AI Training With GPU-Accelerated Supercomputers

The team developed its AI models on supercomputers powered by NVIDIA A100 Tensor Core GPUs — including Argonne’s Polaris, the U.S. Department of Energy’s Perlmutter, and NVIDIA’s in-house Selene system. By scaling up to these powerful systems, they achieved performance of more than 1,500 exaflops in training runs, creating the largest biological language models to date.

“We’re working with models today that have up to 25 billion parameters, and we expect this to significantly increase in the future,” said Ramanathan. “The model size, the genetic sequence lengths and the amount of training data needed means we really need the computational complexity provided by supercomputers with thousands of GPUs.”

The researchers estimate that training a version of their model with 2.5 billion parameters took over a month on around 4,000 GPUs. The team, which was already investigating LLMs for biology, spent about four months on the project before publicly releasing the paper and code. The GitHub page includes instructions for other researchers to run the model on Polaris and Perlmutter.

The NVIDIA BioNeMo framework, available in early access on the NVIDIA NGC hub for GPU-optimized software, supports researchers scaling large biomolecular language models across multiple GPUs. Part of the NVIDIA Clara Discovery collection of drug discovery tools, the framework will support chemistry, protein, DNA and RNA data formats.

Find NVIDIA at SC22.

Image at top represents COVID strains sequenced by the researchers’ LLM. Each dot is color-coded by COVID variant. Image courtesy of Argonne National Laboratory’s Bharat Kale, Max Zvyagin and Michael E. Papka.

Going the Distance: NVIDIA Platform Solves HPC Problems at the Edge


Collaboration among researchers, like the scientific community itself, spans the globe.

Universities and enterprises sharing work over long distances require a common language and secure pipeline to get every device — from microscopes and sensors to servers and campus networks — to see and understand the data each is transmitting. The increasing amount of data that needs to be stored, transmitted and analyzed only compounds the challenge.

To overcome this problem, NVIDIA has introduced a high performance computing platform that combines edge computing and AI to capture and consolidate streaming data from scientific edge instruments, and then allow the devices to talk to each other over long distances.

The platform consists of three major components. NVIDIA Holoscan is a software development kit that data scientists and domain experts can use to build GPU-accelerated pipelines for sensors that stream data. MetroX-3 is a new long-haul system that extends the connectivity of the NVIDIA Quantum-2 InfiniBand platform. And NVIDIA BlueField-3 DPUs provide secure and intelligent data migration.

Researchers can use the new NVIDIA platform for HPC edge computing to securely communicate and collaborate on solving problems, bringing their disparate devices and algorithms together to operate as one large supercomputer.

Holoscan for HPC at the Edge

Accelerated by GPU computing platforms — including NVIDIA IGX, HGX and DGX systems — NVIDIA Holoscan delivers the extreme performance required to process massive streams of data generated by the world’s scientific instruments.

NVIDIA Holoscan for HPC includes new APIs for C++ and Python that HPC researchers can use to build sensor data processing workflows that are flexible enough for non-image formats and scalable enough to translate raw data into real-time insights.

Holoscan also manages memory allocation to ensure zero-copy data exchanges, so developers can focus on the workflow logic and not worry about managing file and memory I/O.
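The zero-copy pattern described above can be sketched in plain Python: each stage mutates a shared buffer and hands the reference along rather than allocating a new one. This is an illustration of the idea only; it is not the Holoscan SDK API, and the stage functions are hypothetical.

```python
from typing import Callable, List

Frame = List[float]

class Pipeline:
    """Toy streaming pipeline: stages update the buffer in place and pass
    the reference along -- the zero-copy idea, illustrated without the
    actual Holoscan SDK."""

    def __init__(self) -> None:
        self.stages: List[Callable[[Frame], None]] = []

    def add(self, stage: Callable[[Frame], None]) -> "Pipeline":
        self.stages.append(stage)
        return self

    def run(self, frame: Frame) -> Frame:
        for stage in self.stages:
            stage(frame)          # in-place update: no buffer is copied
        return frame

def normalize(frame: Frame) -> None:
    """Scale readings so the peak value is 1.0, writing into the same list."""
    peak = max(frame)
    for i, v in enumerate(frame):
        frame[i] = v / peak

def threshold(frame: Frame) -> None:
    """Zero out readings below a noise floor of 0.2, in place."""
    for i, v in enumerate(frame):
        frame[i] = v if v >= 0.2 else 0.0

pipe = Pipeline().add(normalize).add(threshold)
raw = [3.0, 12.0, 1.0, 6.0]       # simulated sensor readings
out = pipe.run(raw)
print(out)                        # [0.25, 1.0, 0.0, 0.5]
assert out is raw                 # same buffer throughout: zero copies made
```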

The new features in Holoscan will be available to all HPC developers next month. Sign up to be notified of early access to the Holoscan 0.4 SDK.

MetroX-3 Goes the Distance

The NVIDIA MetroX-3 long-haul system, available next month, extends the latest cloud-native capabilities of the NVIDIA Quantum-2 InfiniBand platform from the edge to the HPC data center core. It enables GPUs between sites to securely share data over the InfiniBand network up to 25 miles (40 km) away.

Taking advantage of native remote direct memory access, users can easily migrate data and compute jobs from one InfiniBand-connected mini-cluster to the main data center, or combine geographically dispersed compute clusters for higher overall performance and scalability.

Data center operators can efficiently provision, monitor and operate across all the InfiniBand-connected data center networks by using the NVIDIA Unified Fabric Manager to manage their MetroX-3 systems.

BlueField for Secure, Efficient HPC

NVIDIA BlueField data processing units offload, accelerate and isolate advanced networking, storage and security services to boost performance and efficiency for modern HPC.

During SC22, system software company Zettar is demonstrating its data migration and storage offload solution based on BlueField-3. Zettar software can consolidate data migration tasks to a data center footprint of 4U rack space, which today requires 13U with x86-based solutions.

Learn more about the new NVIDIA platform for HPC computing at the edge.

Supercomputing Superpowers: NVIDIA Brings Digital Twin Simulation to HPC Data Center Operators


The technologies powering the world’s 7 million data centers are changing rapidly. The latest have allowed IT organizations to reduce costs even while dealing with exponential data growth.

Simulation and digital twins can help data center designers, builders and operators create highly efficient and performant facilities. But building a digital twin that can accurately represent all components of an AI supercomputing facility is a massive, complex undertaking.

The NVIDIA Omniverse simulation platform helps address this challenge by streamlining the process for collaborative virtual design. An Omniverse demo at SC22 showcased how the people behind data centers can use this open development platform to enhance the design and development of complex supercomputing facilities.

Omniverse, for the first time, lets data center operators aggregate real-time data inputs from their core third-party computer-aided design, simulation and monitoring applications so they can see and work with their complete datasets in real time.

The demo shows how Omniverse allows users to tap into the power of accelerated computing, simulation and operational digital twins connected to real-time monitoring and AI. This enables teams to streamline facility design, accelerate construction and deployment, and optimize ongoing operations.

The demo also highlighted NVIDIA Air, a data center simulation platform designed to work in conjunction with Omniverse to simulate the network — the central nervous system of the data center. With NVIDIA Air, teams can model the entire network stack, allowing them to automate and validate network hardware and software prior to bring-up.

Creating Digital Twins to Elevate Design and Simulation

In planning and constructing one of NVIDIA’s latest AI supercomputers, multiple engineering CAD datasets were collected from third-party industry tools such as Autodesk Revit, PTC Creo and Trimble SketchUp. This allowed designers and engineers to view the Universal Scene Description-based model in full fidelity, and they could collaboratively iterate on the design in real time.

PATCH MANAGER is an enterprise software application for planning cabling, assets and physical layer point-to-point connectivity in network domains. With PATCH MANAGER connected to Omniverse, the complex topology of port-to-port connections, rack and node layouts, and cabling can be integrated directly into the live model. This enables data center engineers to see the full view of the model and its dependencies.

To predict airflow and heat transfers, engineers used Cadence 6SigmaDCX, a software for computational fluid dynamics. Engineers can also use AI surrogates trained with NVIDIA Modulus for “what-if” analysis in near-real time. This lets teams simulate changes in complex thermals and cooling, and they can see the results instantly.

And with NVIDIA Air, the exact network topology — including protocols, monitoring and automation — can be simulated and prevalidated.

Once construction of a data center is complete, its sensors, control system and telemetry can be connected to the digital twin inside Omniverse, enabling real-time monitoring of operations.

With a perfectly synchronized digital twin, engineers can simulate common dangers such as power peaking or cooling system failures. Operators can benefit from AI-recommended changes that optimize for key priorities like boosting energy efficiency and reducing carbon footprint. The digital twin also allows them to test and validate software and component upgrades before deploying to the physical data center.

Catch up on the latest announcements by watching NVIDIA’s SC22 special address, and learn more about NVIDIA Omniverse.
