GeForce RTX 4080 GPU Launches, Unlocking 1.6x Performance for Creators This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Content creators can now pick up the GeForce RTX 4080 GPU, available from top add-in card providers including ASUS, Colorful, Gainward, Galaxy, GIGABYTE, INNO3D, MSI, Palit, PNY and ZOTAC, as well as from system integrators and builders worldwide.

Talented filmmaker Casey Faris and his team at Release the Hounds! Studio step In the NVIDIA Studio this week to share their short, sci-fi-inspired film, Tuesday on Earth.

In addition, the November Studio Driver is ready for download to enhance existing creative app features, reduce repetitive tasks and speed up creative ones.

Plus, the NVIDIA Studio #WinterArtChallenge is underway — check out some featured artists at the end of this post.

Beyond Fast — GeForce RTX 4080 GPU Now Available

The new GeForce RTX 4080 GPU brings a massive boost in performance of up to 1.6x compared to the GeForce RTX 3080 Ti GPU, thanks to third-generation RT Cores, fourth-generation Tensor Cores, eighth-generation dual AV1 encoders and 16GB of memory — plenty to edit up to 12K RAW video files or large 3D scenes.

The new GeForce RTX 4080 GPU.

3D artists can now work with accurate and realistic lighting, physics and materials while creating 3D scenes — all in real time, without proxies. DLSS 3, now available in the NVIDIA Omniverse beta, uses RTX Tensor Cores and the new Optical Flow Accelerator to generate additional frames and dramatically increase frames per second (FPS). This improves smoothness in the viewport. Unity and Unreal Engine 5 will soon release updated versions with DLSS 3.

Video and livestreaming creative workflows are also accelerated by the new AV1 encoder, with 40% increased encoding efficiency, unlocking higher resolutions and crisper image quality. AV1 is integrated in OBS Studio, DaVinci Resolve and Adobe Premiere Pro, the latter through the Voukoder plug-in.

The new dual encoders capture up to 8K resolution at 60 FPS in real time via GeForce Experience and OBS Studio, and cut video export times nearly in half. Popular video-editing apps have released updates to enable this setting, including Adobe Premiere Pro (via the popular Voukoder plug-in) and Jianying Pro — China’s top video-editing app. Blackmagic Design’s DaVinci Resolve and MAGIX Vegas Pro also added dual-encoder support this week.
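Outside of those applications, the same hardware encoder is also exposed through FFmpeg builds that include NVENC support. Below is a minimal sketch of invoking it from Python; the file names are placeholders, and it assumes an FFmpeg binary with the av1_nvenc encoder is on the PATH (creative apps like Premiere Pro and DaVinci Resolve wrap this same hardware behind their own export dialogs).

```python
import shutil
import subprocess

# Hypothetical input/output filenames for illustration.
SRC = "timeline_export.mov"
DST = "timeline_export_av1.mp4"

assert shutil.which("ffmpeg"), "FFmpeg not found on PATH"

# Encode the source with the hardware AV1 encoder (av1_nvenc) and pass audio through.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", SRC,
        "-c:v", "av1_nvenc",   # NVENC AV1 encoder available on RTX 40 Series GPUs
        "-preset", "p5",       # balanced speed/quality NVENC preset
        "-b:v", "20M",         # target bitrate; adjust for your content
        "-c:a", "copy",        # keep the original audio untouched
        DST,
    ],
    check=True,
)
```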

State-of-the-art AI technology — including AI image generators and new editing tools in DaVinci Resolve and Adobe apps like Photoshop and Premiere Pro — is taking creators to the next level. It allows them to brainstorm concepts quickly, helps them easily apply advanced effects, and removes their tedious, repetitive tasks. Fourth-gen Tensor Cores found on GeForce RTX 40 Series GPUs help speed all of these AI tools, delivering up to a 2x increase in performance over the previous generation.

Expand creative possibilities and pick up the GeForce RTX 4080 GPU today. Check out this product finder for retail availability and visit GeForce.com for further information.

Another Tuesday on Earth

Filmmaker Casey Faris and the team at Release the Hounds! Studio love science fiction. Their short film Tuesday on Earth is an homage to their favorite childhood sci-fi flicks, including E.T. the Extra-Terrestrial, Men in Black and Critters.

It was challenging to “create something that felt epic, but wasn’t way too big of a story to fit in a couple of minutes,” Faris said.

Preproduction was mostly done with rough sketches on an iPad using the popular digital-illustration app Procreate. Next, the team filmed all the sequences. “We spent many hours out in the forest getting eaten by mosquitos, lots of time locked in a tiny bathroom and way too many lunch breaks at the secondhand store buying spaceship parts,” joked Faris.

Are you seeing what we’re seeing? Motion blur effects applied faster with RTX GPU acceleration.

All 4K footage was imported into Blackmagic Design’s DaVinci Resolve 18 through the Hedge app, which runs checksums to ensure the video files are transferred properly and quickly generates backup footage.

“NVIDIA is the obvious choice if you talk to any creative professional. It’s never a question whether we get an NVIDIA GPU — just which one we get.” — filmmaker Casey Faris

Faris specializes in DaVinci Resolve because of its versatility. “We can do just about anything in one app, on one timeline,” he said. “This makes it really easy to iterate on our comps, re-edits and sound-mixing adjustments — all of it’s no big deal as it’s all living together.”

DaVinci Resolve is powerful, professional-grade software that relies heavily on GPU acceleration to get the job done. Faris’ GeForce RTX 3070-powered system was up to the task.

His RTX GPU afforded NVIDIA Studio benefits within DaVinci Resolve software. The RTX-accelerated hardware encoder and decoder sped up video transcoding, enabling Faris to edit faster.

Footage adjustments and movement within the timeline were seamless, with virtually no slowdown, resulting in more efficient video-bay sessions.

Even color grading was sped up due to his RTX GPU, he said.

Color grade faster with NVIDIA and GeForce RTX GPUs in DaVinci Resolve.

AI-powered features accelerated by Faris’ GeForce RTX GPU played a critical role.

The Detect Scene Cuts feature, optimized by RTX GPUs, quickly detected and tagged cuts in video files, eliminating painstakingly long scrubbing sessions just to make manual edits, a boon for Faris’ efficiency.

To add special effects, Faris worked within the RTX GPU-accelerated Fusion page in DaVinci Resolve, a node-based workflow with hundreds of 2D and 3D tools for creating true Hollywood-caliber effects. Blockbusters like The Hunger Games and Marvel’s The Avengers were made in Fusion.

Faris used Object Mask Tracking, powered by the DaVinci Neural Engine, to intuitively isolate subjects, all with simple paint strokes. This made it much easier to mask the male hero and apply that vibrant purple hue in the background. With the new GeForce RTX 40 Series GPUs, this feature is 70% faster than with the previous generation.

“Automatic Depth Map” powered by AI in DaVinci Resolve.

In addition, Faris used the Automatic Depth Map AI feature to instantly generate a 3D depth matte of a scene, making it quick to grade the foreground separately from the background. Then, he changed the mood of the home-arrival scene by adding environmental fog effects. Various scenes mimicked the characteristics of different high-quality lenses by adding blur or depth of field to further enhance shots.

3D animations in Blender.

Even when moving to Blender Cycles for the computer-generated imagery, RTX-accelerated OptiX ray tracing in the viewport enabled Faris to craft 3D assets with smooth, interactive movement in photorealistic detail.

Faris is thankful to be able to share these adventures with the world. “It’s cool to teach people to be creative and make their own awesome stuff,” he added. “That’s what I like the most. We can make something cool, but it’s even better if it inspires others.”

Filmmaker Casey Faris.

Faris recently acquired the new GeForce RTX 4080 GPU to further accelerate his video editing workflows.

Get his thoughts in the video above and check out Faris’ YouTube channel.

Join the #WinterArtChallenge

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Like @RippaSats, whose celestial rendering Mystic Arctic stirs the hearts and spirits of many.

Or @CrocodilePower and her animation Reflection, which delivers more than meets the eye.

And be sure to tag #WinterArtChallenge to join.

Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.

Attention, Sports Fans! WSC Sports’ Amos Berkovich on How AI Keeps the Highlights Coming

It doesn’t matter if you love hockey, basketball or soccer. Thanks to the internet, there’s never been a better time to be a sports fan. 

But editing together so many social media clips, long-form YouTube highlights and other videos from global sporting events is no easy feat. So how are all of these craveable video packages made? 

Auto-magical video solutions help. And by auto-magical, of course, we mean powered by AI.

On this episode of the AI Podcast, host Noah Kravitz spoke with Amos Berkovich, algorithm group leader at WSC Sports, maker of an AI cloud platform that enables over 200 sports organizations worldwide to generate personalized and customized sports videos automatically and in real time.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species With NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

Going Green: New Generation of NVIDIA-Powered Systems Show Way Forward

With the end of Moore’s law, traditional approaches to meet the insatiable demand for increased computing performance will require disproportionate increases in costs and power.

At the same time, the need to slow the effects of climate change will require more efficient data centers, which already consume more than 200 terawatt-hours of energy each year, or around 2% of the world’s energy usage.

Released today, the new Green500 list of the world’s most-efficient supercomputers demonstrates the energy efficiency of accelerated computing, which is already used in all of the top 30 systems on the list. Its impact on energy efficiency is staggering.

We estimate the TOP500 systems require more than 5 terawatt-hours of energy per year, or $750 million worth of energy, to operate.

But that could be slashed by more than 80% to just $150 million — saving 4 terawatt-hours of energy — if these systems were as efficient as the 30 greenest systems on the TOP500 list.

Conversely, with the same power budget as today’s TOP500 systems and the efficiency of the top 30 systems, these supercomputers could deliver 5x today’s performance.
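As a quick back-of-the-envelope check of the figures above (the energy and cost numbers are the article’s estimates; the electricity price is simply what they imply):

```python
# Figures quoted above for the current TOP500 systems.
top500_energy_twh = 5.0      # estimated annual energy use, in terawatt-hours
top500_cost_usd = 750e6      # estimated annual energy cost, in dollars
efficiency_gain = 0.80       # reduction if all systems matched the 30 greenest

energy_saved_twh = top500_energy_twh * efficiency_gain        # ~4 TWh
remaining_cost_usd = top500_cost_usd * (1 - efficiency_gain)  # ~$150 million
implied_price_per_kwh = top500_cost_usd / (top500_energy_twh * 1e9)  # ~$0.15/kWh

print(f"Energy saved: {energy_saved_twh:.1f} TWh")
print(f"Remaining cost: ${remaining_cost_usd / 1e6:.0f}M")
print(f"Implied electricity price: ${implied_price_per_kwh:.2f}/kWh")
```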

And the efficiency gains highlighted by the latest Green500 systems are just the start. NVIDIA is racing to deliver continuous energy improvements across its CPUs, GPUs, software and systems portfolio.

Hopper’s Green500 Debut

NVIDIA technologies already power 23 of the top 30 systems on the latest Green500 list.

Among the highlights: the Flatiron Institute in New York City topped the Green500 list of most efficient supercomputers with an air-cooled ThinkSystem built by Lenovo featuring NVIDIA Hopper H100 GPUs.

The supercomputer, dubbed Henri, produces 65 billion double-precision, floating-point operations per watt, according to the Green500, and will be used to tackle problems in computational astrophysics, biology, mathematics, neuroscience and quantum physics.

The NVIDIA H100 Tensor Core GPU, based on the NVIDIA Hopper GPU architecture, has up to 6x more AI performance and up to 3x more HPC performance compared to the prior-generation A100 GPU. It’s designed to perform with incredible efficiency. Its second-generation Multi-Instance GPU technology can partition the GPU into smaller compute units, dramatically boosting the number of GPU clients available to data center users.

And the show floor at this year’s SC22 conference is packed with new systems featuring NVIDIA’s latest technologies from ASUS, Atos, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, Lenovo, QCT and Supermicro.

The fastest new computer on the TOP500 list, Leonardo, hosted and managed by the Cineca nonprofit consortium and powered by nearly 14,000 NVIDIA A100 GPUs, took the No. 4 spot while also ranking as the 12th most energy-efficient system.

The latest TOP500 list boasts the highest number of NVIDIA technologies so far.

In total, NVIDIA technologies power 361 of the systems on the TOP500 list, including 90% of the new systems (see chart).

The Next-Generation Accelerated Data Center

NVIDIA is also developing new computing architectures to deliver even greater energy efficiency and performance to the accelerated data center.

The Grace CPU and Grace Hopper Superchips, announced earlier this year, will provide the next big boost in the energy efficiency of the NVIDIA accelerated computing platform. The Grace CPU Superchip delivers up to twice the performance per watt of a traditional CPU, thanks to the incredible efficiency of the Grace CPU and low-power LPDDR5X memory.

Assuming a 1-megawatt HPC data center with 20% of the power allocated to the CPU partition and 80% to the accelerated portion using Grace and Grace Hopper, data centers can get 1.8x more work done for the same power budget compared to a similarly partitioned x86-based data center.

DPUs Driving Additional Efficiency Gains

Along with Grace and Grace Hopper, NVIDIA networking technology is supercharging cloud-native supercomputing just as the increased usage of simulations is accelerating demand for supercomputing services.

Based on NVIDIA’s BlueField-3 DPU, the NVIDIA Quantum-2 InfiniBand platform delivers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers.

A recent whitepaper demonstrated how DPUs can be used to offload and accelerate networking, security, storage or other infrastructure functions and control-plane applications, reducing server power consumption by up to 30%.

The amount of power savings increases as server load increases and can easily save $5 million in electricity costs for a large data center with 10,000 servers over the three-year lifespan of the servers, plus additional savings in cooling, power delivery, rack space and server capital costs.
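To see roughly how a figure like that arises, here is an illustrative calculation; the per-server power draw and electricity price below are assumptions for the sketch, not numbers from the whitepaper:

```python
servers = 10_000
server_power_w = 550        # assumed average draw per server, in watts
savings_fraction = 0.30     # up to 30% reduction from offloading work to DPUs
price_per_kwh = 0.10        # assumed electricity price, in dollars
years = 3
hours = years * 365 * 24

saved_kwh = servers * server_power_w * savings_fraction * hours / 1000
saved_usd = saved_kwh * price_per_kwh
print(f"Energy saved: {saved_kwh / 1e6:.1f} GWh, roughly ${saved_usd / 1e6:.1f}M")
```

With those assumptions, the electricity savings alone land in the $4-5 million range over three years, before counting the cooling, power delivery, rack space and server capital savings mentioned above.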

Accelerated computing with DPUs for networking, security and storage jobs is one of the next big steps for making data centers more power efficient.

More With Less

Breakthroughs like these come as the scientific method is rapidly transforming into an approach driven by data analytics, AI and physics-based simulation, making more efficient computers key to the next generation of scientific breakthroughs.

By providing researchers with a multi-discipline, high-performance computing platform optimized for this new approach — and able to deliver both performance and efficiency — NVIDIA gives scientists an instrument to make critical discoveries that will benefit us all.

More Resources

Speaking the Language of the Genome: Gordon Bell Finalist Applies Large Language Models to Predict New COVID Variants

A finalist for the Gordon Bell special prize for high performance computing-based COVID-19 research has taught large language models (LLMs) a new lingo — gene sequences — that can unlock insights in genomics, epidemiology and protein engineering.

Published in October, the groundbreaking work is a collaboration by more than two dozen academic and commercial researchers from Argonne National Laboratory, NVIDIA, the University of Chicago and others.

The research team trained an LLM to track genetic mutations and predict variants of concern in SARS-CoV-2, the virus behind COVID-19. While most LLMs applied to biology to date have been trained on datasets of small molecules or proteins, this project is one of the first models trained on raw nucleotide sequences — the smallest units of DNA and RNA.

“We hypothesized that moving from protein-level to gene-level data might help us build better models to understand COVID variants,” said Arvind Ramanathan, computational biologist at Argonne, who led the project. “By training our model to track the entire genome and all the changes that appear in its evolution, we can make better predictions about not just COVID, but any disease with enough genomic data.”

The Gordon Bell awards, regarded as the Nobel Prize of high performance computing, will be presented at this week’s SC22 conference by the Association for Computing Machinery, which represents around 100,000 computing experts worldwide. Since 2020, the group has awarded a special prize for outstanding research that advances the understanding of COVID with HPC.

Training LLMs on a Four-Letter Language

LLMs have long been trained on human languages, which usually comprise a couple dozen letters that can be arranged into tens of thousands of words, and joined together into longer sentences and paragraphs. The language of biology, on the other hand, has only four letters representing nucleotides — A, T, G and C in DNA, or A, U, G and C in RNA — arranged into different sequences as genes.

While fewer letters may seem like a simpler challenge for AI, language models for biology are actually far more complicated. That’s because the genome — made up of over 3 billion nucleotides in humans, and about 30,000 nucleotides in coronaviruses — is difficult to break down into distinct, meaningful units.

“When it comes to understanding the code of life, a major challenge is that the sequencing information in the genome is quite vast,” Ramanathan said. “The meaning of a nucleotide sequence can be affected by another sequence that’s much further away than the next sentence or paragraph would be in human text. It could reach over the equivalent of chapters in a book.”

NVIDIA collaborators on the project designed a hierarchical diffusion method that enabled the LLM to treat long strings of around 1,500 nucleotides as if they were sentences.

“Standard language models have trouble generating coherent long sequences and learning the underlying distribution of different variants,” said paper co-author Anima Anandkumar, senior director of AI research at NVIDIA and Bren professor in the computing + mathematical sciences department at Caltech. “We developed a diffusion model that operates at a higher level of detail that allows us to generate realistic variants and capture better statistics.”
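To make the data side concrete, here is a minimal sketch of the kind of preprocessing this implies: splitting a long nucleotide string into fixed-length “sentences” and then into k-mer tokens. The chunk length and k-mer size are illustrative; the published model’s exact tokenization scheme may differ.

```python
def chunk_genome(sequence: str, chunk_len: int = 1500) -> list[str]:
    """Split a nucleotide sequence into fixed-length 'sentences'."""
    sequence = sequence.upper().replace("\n", "")
    return [sequence[i:i + chunk_len] for i in range(0, len(sequence), chunk_len)]

def kmer_tokens(chunk: str, k: int = 3) -> list[str]:
    """Tokenize a chunk into overlapping k-mers (codon-sized by default)."""
    return [chunk[i:i + k] for i in range(len(chunk) - k + 1)]

genome = "ATGGCT" * 5000           # stand-in for a ~30,000-nucleotide viral genome
sentences = chunk_genome(genome)    # ~20 chunks of 1,500 nucleotides each
tokens = kmer_tokens(sentences[0])
print(len(sentences), len(tokens))
```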

Predicting COVID Variants of Concern

Using open-source data from the Bacterial and Viral Bioinformatics Resource Center, the team first pretrained its LLM on more than 110 million gene sequences from prokaryotes, which are single-celled organisms like bacteria. It then fine-tuned the model using 1.5 million high-quality genome sequences for the COVID virus.

By pretraining on a broader dataset, the researchers also ensured their model could generalize to other prediction tasks in future projects — making it one of the first whole-genome-scale models with this capability.

Once fine-tuned on COVID data, the LLM was able to distinguish between genome sequences of the virus’ variants. It was also able to generate its own nucleotide sequences, predicting potential mutations of the COVID genome that could help scientists anticipate future variants of concern.

Trained on a year’s worth of SARS-CoV-2 genome data, the model can infer the distinction between various viral strains. Each dot on the left corresponds to a sequenced SARS-CoV-2 viral strain, color-coded by variant. The figure on the right zooms into one particular strain of the virus, which captures evolutionary couplings across the viral proteins specific to this strain. Image courtesy of Argonne National Laboratory’s Bharat Kale, Max Zvyagin and Michael E. Papka. 

“Most researchers have been tracking mutations in the spike protein of the COVID virus, specifically the domain that binds with human cells,” Ramanathan said. “But there are other proteins in the viral genome that go through frequent mutations and are important to understand.”

The model could also integrate with popular protein-structure-prediction models like AlphaFold and OpenFold, the paper stated, helping researchers simulate viral structure and study how genetic mutations impact a virus’ ability to infect its host. OpenFold is one of the pretrained language models included in the NVIDIA BioNeMo LLM service for developers applying LLMs to digital biology and chemistry applications.

Supercharging AI Training With GPU-Accelerated Supercomputers

The team developed its AI models on supercomputers powered by NVIDIA A100 Tensor Core GPUs — including Argonne’s Polaris, the U.S. Department of Energy’s Perlmutter, and NVIDIA’s in-house Selene system. By scaling up to these powerful systems, they achieved performance of more than 1,500 exaflops in training runs, creating the largest biological language models to date.

“We’re working with models today that have up to 25 billion parameters, and we expect this to significantly increase in the future,” said Ramanathan. “The model size, the genetic sequence lengths and the amount of training data needed means we really need the computational complexity provided by supercomputers with thousands of GPUs.”

The researchers estimate that training a version of their model with 2.5 billion parameters took over a month on around 4,000 GPUs. The team, which was already investigating LLMs for biology, spent about four months on the project before publicly releasing the paper and code. The GitHub page includes instructions for other researchers to run the model on Polaris and Perlmutter.

The NVIDIA BioNeMo framework, available in early access on the NVIDIA NGC hub for GPU-optimized software, supports researchers scaling large biomolecular language models across multiple GPUs. Part of the NVIDIA Clara Discovery collection of drug discovery tools, the framework will support chemistry, protein, DNA and RNA data formats.

Find NVIDIA at SC22.

Image at top represents COVID strains sequenced by the researchers’ LLM. Each dot is color-coded by COVID variant. Image courtesy of Argonne National Laboratory’s Bharat Kale, Max Zvyagin and Michael E. Papka.

Going the Distance: NVIDIA Platform Solves HPC Problems at the Edge

Collaboration among researchers, like the scientific community itself, spans the globe.

Universities and enterprises sharing work over long distances require a common language and secure pipeline to get every device — from microscopes and sensors to servers and campus networks — to see and understand the data each is transmitting. The increasing amount of data that needs to be stored, transmitted and analyzed only compounds the challenge.

To overcome this problem, NVIDIA has introduced a high performance computing platform that combines edge computing and AI to capture and consolidate streaming data from scientific edge instruments, and then allow the devices to talk to each other over long distances.

The platform consists of three major components. NVIDIA Holoscan is a software development kit that data scientists and domain experts can use to build GPU-accelerated pipelines for sensors that stream data. MetroX-3 is a new long-haul system that extends the connectivity of the NVIDIA Quantum-2 InfiniBand platform. And NVIDIA BlueField-3 DPUs provide secure and intelligent data migration.

Researchers can use the new NVIDIA platform for HPC edge computing to securely communicate and collaborate on solving problems and bring their disparate devices and algorithms together to operate as one large supercomputer.

Holoscan for HPC at the Edge

Accelerated by GPU computing platforms — including NVIDIA IGX, HGX and DGX systems — NVIDIA Holoscan delivers the extreme performance required to process massive streams of data generated by the world’s scientific instruments.

NVIDIA Holoscan for HPC includes new APIs for C++ and Python that HPC researchers can use to build sensor data processing workflows that are flexible enough for non-image formats and scalable enough to translate raw data into real-time insights.

Holoscan also manages memory allocation to ensure zero-copy data exchanges, so developers can focus on the workflow logic and not worry about managing file and memory I/O.
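For a flavor of what such a workflow looks like in code, here is a minimal two-operator pipeline sketched against the Holoscan SDK’s Python classes. Treat it as an illustration rather than 0.4-specific API documentation: class and method names follow later public releases of the SDK and may differ slightly, and the “sensor” here just emits synthetic readings.

```python
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec

class SensorSource(Operator):
    """Emits a batch of (simulated) sensor readings on each tick."""
    def setup(self, spec: OperatorSpec):
        spec.output("readings")

    def compute(self, op_input, op_output, context):
        op_output.emit([0.1, 0.2, 0.3], "readings")  # stand-in for real sensor data

class RunningMean(Operator):
    """Consumes readings and prints their mean."""
    def setup(self, spec: OperatorSpec):
        spec.input("readings")

    def compute(self, op_input, op_output, context):
        values = op_input.receive("readings")
        print(sum(values) / len(values))

class EdgePipeline(Application):
    def compose(self):
        # Run the source for 10 ticks, then let the pipeline wind down.
        src = SensorSource(self, CountCondition(self, 10), name="source")
        sink = RunningMean(self, name="sink")
        self.add_flow(src, sink, {("readings", "readings")})

if __name__ == "__main__":
    EdgePipeline().run()
```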

The new features in Holoscan will be available to all HPC developers next month. Sign up to be notified of early access to the Holoscan 0.4 SDK.

MetroX-3 Goes the Distance

The NVIDIA MetroX-3 long-haul system, available next month, extends the latest cloud-native capabilities of the NVIDIA Quantum-2 InfiniBand platform from the edge to the HPC data center core. It enables GPUs between sites to securely share data over the InfiniBand network up to 25 miles (40km) away.

Taking advantage of native remote direct memory access, users can easily migrate data and compute jobs from one InfiniBand-connected mini-cluster to the main data center, or combine geographically dispersed compute clusters for higher overall performance and scalability.

Data center operators can efficiently provision, monitor and operate across all the InfiniBand-connected data center networks by using the NVIDIA Unified Fabric Manager to manage their MetroX-3 systems.

BlueField for Secure, Efficient HPC

NVIDIA BlueField data processing units offload, accelerate and isolate advanced networking, storage and security services to boost performance and efficiency for modern HPC.

During SC22, system software company Zettar is demonstrating its data migration and storage offload solution based on BlueField-3. Zettar software can consolidate data migration tasks to a data center footprint of 4U rack space, which today requires 13U with x86-based solutions.

Learn more about the new NVIDIA platform for HPC computing at the edge.

Supercomputing Superpowers: NVIDIA Brings Digital Twin Simulation to HPC Data Center Operators

The technologies powering the world’s 7 million data centers are changing rapidly. The latest have allowed IT organizations to reduce costs even while dealing with exponential data growth.

Simulation and digital twins can help data center designers, builders and operators create highly efficient and performant facilities. But building a digital twin that can accurately represent all components of an AI supercomputing facility is a massive, complex undertaking.

The NVIDIA Omniverse simulation platform helps address this challenge by streamlining the process for collaborative virtual design. An Omniverse demo at SC22 showcased how the people behind data centers can use this open development platform to enhance the design and development of complex supercomputing facilities.

Omniverse, for the first time, lets data center operators aggregate real-time data inputs from their core third-party computer-aided design, simulation and monitoring applications so they can see and work with their complete datasets in real time.

The demo shows how Omniverse allows users to tap into the power of accelerated computing, simulation and operational digital twins connected to real-time monitoring and AI. This enables teams to streamline facility design, accelerate construction and deployment, and optimize ongoing operations.

The demo also highlighted NVIDIA Air, a data center simulation platform designed to work in conjunction with Omniverse to simulate the network — the central nervous system of the data center. With NVIDIA Air, teams can model the entire network stack, allowing them to automate and validate network hardware and software prior to bring-up.

Creating Digital Twins to Elevate Design and Simulation

In planning and constructing one of NVIDIA’s latest AI supercomputers, multiple engineering CAD datasets were collected from third-party industry tools such as Autodesk Revit, PTC Creo and Trimble SketchUp. This allowed designers and engineers to view the Universal Scene Description-based model in full fidelity, and they could collaboratively iterate on the design in real time.
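Under the hood, this kind of aggregation amounts to composing multiple USD layers into a single stage. Below is a minimal sketch using the pxr Python API (available through Omniverse or the usd-core package), assuming the CAD datasets have already been exported or converted to USD; the file names are placeholders.

```python
from pxr import Usd, UsdGeom

# Placeholder paths for USD exports of the third-party CAD datasets.
CAD_LAYERS = {
    "Architecture": "revit_building.usd",   # e.g. exported from Autodesk Revit
    "Mechanical":   "creo_chillers.usd",    # e.g. exported from PTC Creo
    "SitePlan":     "sketchup_site.usd",    # e.g. exported from Trimble SketchUp
}

stage = Usd.Stage.CreateNew("datacenter_twin.usda")
UsdGeom.Xform.Define(stage, "/World")

# Reference each discipline's model under a common root so teams can keep
# iterating on their own files while everyone views one aggregated scene.
for name, path in CAD_LAYERS.items():
    prim = stage.DefinePrim(f"/World/{name}")
    prim.GetReferences().AddReference(path)

stage.SetDefaultPrim(stage.GetPrimAtPath("/World"))
stage.GetRootLayer().Save()
```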

PATCH MANAGER is an enterprise software application for planning cabling, assets and physical layer point-to-point connectivity in network domains. With PATCH MANAGER connected to Omniverse, the complex topology of port-to-port connections, rack and node layouts, and cabling can be integrated directly into the live model. This enables data center engineers to see the full view of the model and its dependencies.

To predict airflow and heat transfers, engineers used Cadence 6SigmaDCX, computational fluid dynamics software. Engineers can also use AI surrogates trained with NVIDIA Modulus for “what-if” analysis in near-real time. This lets teams simulate changes in complex thermals and cooling, and they can see the results instantly.

And with NVIDIA Air, the exact network topology — including protocols, monitoring and automation — can be simulated and prevalidated.

Once construction of a data center is complete, its sensors, control system and telemetry can be connected to the digital twin inside Omniverse, enabling real-time monitoring of operations.

With a perfectly synchronized digital twin, engineers can simulate common dangers such as power peaking or cooling system failures. Operators can benefit from AI-recommended changes that optimize for key priorities like boosting energy efficiency and reducing carbon footprint. The digital twin also allows them to test and validate software and component upgrades before deploying to the physical data center.

Catch up on the latest announcements by watching NVIDIA’s SC22 special address, and learn more about NVIDIA Omniverse.

NVIDIA and Dell Technologies Deliver AI and HPC Performance in Leaps and Bounds With Hopper, at SC22

Whether focused on tiny atoms or the immensity of outer space, supercomputing workloads benefit from the flexibility that the largest systems provide scientists and researchers.

To meet the needs of organizations with such large AI and high performance computing (HPC) workloads, Dell Technologies today unveiled the Dell PowerEdge XE9680 system — its first system with eight NVIDIA GPUs interconnected with NVIDIA NVLink — at SC22, an international supercomputing conference running through Friday.

The Dell PowerEdge XE9680 system is built on the NVIDIA HGX H100 architecture and packs eight NVIDIA H100 Tensor Core GPUs to serve the growing demand for large-scale AI and HPC workflows.

These include large language models for communications, chemistry and biology, as well as simulation and research in industries spanning aerospace, agriculture, climate, energy and manufacturing.

The XE9680 system is arriving alongside other new Dell servers announced today with NVIDIA Hopper architecture GPUs, including the Dell PowerEdge XE8640.

“Organizations working on advanced research and development need both speed and efficiency to accelerate discovery,” said Ian Buck, vice president of Hyperscale and High Performance Computing, NVIDIA. “Whether researchers are building more efficient rockets or investigating the behavior of molecules, Dell Technologies’ new PowerEdge systems provide the compute power and efficiency needed for massive AI and HPC workloads.”

“Dell Technologies and NVIDIA have been working together to serve customers for decades,” said Rajesh Pohani, vice president of portfolio and product management for PowerEdge, HPC and Core Compute at Dell Technologies. “As enterprise needs have grown, the forthcoming Dell PowerEdge servers with NVIDIA Hopper Tensor Core GPUs provide leaps in performance, scalability and security to accelerate the largest workloads.”

NVIDIA H100 to Turbocharge Dell Customer Data Centers

Fresh off setting world records in the MLPerf AI training benchmarks earlier this month, NVIDIA H100 is the world’s most advanced GPU. It’s packed with 80 billion transistors and features major advances to accelerate AI, HPC, memory bandwidth and interconnects at data center scale.

H100 is the engine of AI factories that organizations use to process and refine large datasets to produce intelligence and accelerate their AI-driven businesses. It features a dedicated Transformer Engine and fourth-generation NVIDIA NVLink interconnect to accelerate exascale workloads.

Each system built on the NVIDIA HGX H100 platform features four or eight Hopper GPUs to deliver the highest AI performance with 3.5x more energy efficiency compared with the prior generation, saving development costs while accelerating discoveries.

Powerful Performance and Customer Options for AI, HPC Workloads

Dell systems power the work of leading organizations, and the forthcoming Hopper-based systems will broaden Dell’s portfolio of solutions for its customers around the world.

With its enhanced, air-cooled design and support for eight NVIDIA H100 GPUs with built-in NVLink connectivity, the PowerEdge XE9680 is purpose-built for optimal performance to help modernize operations and infrastructure to drive AI initiatives.

The PowerEdge XE8640, Dell’s new HGX H100 system with four Hopper GPUs, enables businesses to develop, train and deploy AI and machine learning models. A 4U rack system, the XE8640 delivers faster AI training performance and increased core capabilities with up to four PCIe Gen5 slots, NVIDIA Multi-Instance GPU (MIG) technology and NVIDIA GPUDirect Storage support.

Availability

The Dell PowerEdge XE9680 and XE8640 will be available from Dell starting in the first half of 2023.

Customers can now try NVIDIA H100 GPUs on Dell PowerEdge servers on NVIDIA LaunchPad, which provides free hands-on experiences and gives companies access to the latest hardware and NVIDIA AI software.

To take a first look at Dell’s new servers with NVIDIA H100 GPUs at SC22, visit Dell in booth 2443.

Give the Gift of Gaming With GeForce NOW Gift Cards

The holiday season is approaching, and GeForce NOW has everyone covered. This GFN Thursday brings an easy way to give the gift of gaming with GeForce NOW gift cards, for yourself or for a gamer in your life.

Plus, stream 10 new games from the cloud this week, including the first story downloadable content (DLC) for Dying Light 2.

No Time Like the Present

For those seeking the best present to give any gamer, look no further than a GeForce NOW membership.

With digital gift cards, NVIDIA makes it easy for anyone to give an upgrade to GeForce PC performance in the cloud at any time of the year. And just in time for the holidays, physical gift cards will be available as well. For a limited time, these new $50 physical gift cards will ship with a special GeForce NOW holiday gift box at no additional cost, perfect to put in someone’s stocking.

Powerful PC gaming, perfectly packaged.

These new gift cards can be redeemed for the membership level of preference, whether for three months of an RTX 3080 membership or six months of a Priority membership. Both let PC gamers stream over 1,400 games from popular digital gaming stores like Steam, Epic Games Store, Ubisoft Connect, Origin and GOG.com, all from GeForce-powered PCs in the cloud.

That means high-performance streaming on nearly any device, including PCs, Macs, Android mobile devices, iOS devices, SHIELD TV and Samsung and LG TVs. GeForce NOW is the only way to play Genshin Impact on Macs. It’s one of the 100 free-to-play games in the GeForce NOW library.

Stream across nearly any device.

RTX 3080 members get extra gaming goodness with dedicated access to the highest-performance servers, eight-hour gaming sessions and the ability to stream up to 4K at 60 frames per second or 1440p at 120 FPS, all at ultra-low latency.

Gift cards can be redeemed with an active GFN membership. Gift one to yourself or a buddy for hours of fun cloud gaming.

Learn more about GeForce NOW gift cards and get started with gift giving today.

Stayin’ Alive

Dying Light 2’s “Bloody Ties” DLC is available now, and GeForce NOW members can stream it today.

Become a Parkour champion to survive in this horror survival game.

Embark on a new story adventure and gain access to “The Carnage Hall” — an old opera building full of challenges and quests — including surprising new weapon types, character interactions and more discoveries to uncover.

Priority and RTX 3080 members can explore Villedor with NVIDIA DLSS and RTX ON for cinematic, real-time ray tracing — all while keeping an eye on their meter to avoid becoming infected themselves.

Put a Bow on It

Be a fearsome Necromancer in the dark world of The Unliving.

There’s always a new adventure streaming from the cloud. Here are the 10 titles joining the GeForce NOW library this week:

  • The Unliving (New release on Steam)
  • A Little to the Left (New release on Steam)
  • Alba: A Wildlife Adventure (Free on Epic Games from Nov. 10-17)
  • Shadow Tactics: Blades of the Shogun (Free on Epic Games from Nov. 10-17)
  • Yum Yum Cookstar (New release on Steam, Nov. 11)
  • Guns, Gore and Cannoli 2 (Steam)
  • Heads Will Roll: Downfall (Steam)
  • Hidden Through Time (Steam)
  • The Legend of Tianding (Steam)
  • Railgrade (Epic Games)

Members can still upgrade to a six-month Priority membership for 40% off the normal price. Better hurry though, as this offer ends on Sunday, Nov. 20.

Before we wrap up this GFN Thursday, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.

What Is Denoising?

Anyone who’s taken a photo with a digital camera is likely familiar with a “noisy” image: discolored spots that make the photo lose clarity and sharpness.

Many photographers have tips and tricks to reduce noise in images, including fixing the settings on the camera lens or taking photos in different lighting. But it isn’t just photographs that can look discolored — noise is common in computer graphics, too.

Noise refers to the random variations of brightness and color that aren’t part of the original image. Removing noise from imagery — which is becoming more common in the field of image processing and computer vision — is known as denoising.

Image denoising uses advanced algorithms to remove noise from graphics and renders, making a huge difference to the quality of images. Photorealistic visuals and immersive renders would not be possible without denoising technology.

What Is Denoising?

In computer graphics, images can be made up of both useful information and noise. The latter reduces clarity. The ideal end product of denoising would be a crisp image that only preserves the useful information. When denoising an image, it’s also important to keep visual details and components such as edges, corners, textures and other sharp structures.

To reduce noise without affecting the visual details, three types of signals in an image must be targeted by denoising:

  • Diffuse — scattered lighting reflected in all directions;
  • Specular or reflections — lighting reflected in a particular direction; and
  • Infinite light-source shadows — sunlight, shadows and any other visible light source.

To create the clearest image, a user must cast thousands of rays in directions following the diffuse and specular signals. Often in real-time ray tracing, however, only one ray per pixel or even less is used.

Denoising is necessary in real-time ray tracing because ray counts must stay relatively low to maintain interactive performance.

Noisy image with one ray per pixel.
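A tiny numerical illustration of why low ray counts look noisy: if a pixel’s brightness is estimated by averaging random lighting samples, the error shrinks only with the square root of the sample count, so a single sample per pixel leaves a lot of variance behind.

```python
import random

def render_pixel(samples: int, true_value: float = 0.5) -> float:
    """Monte Carlo estimate of a pixel: average of noisy lighting samples."""
    return sum(random.uniform(0.0, 2.0 * true_value) for _ in range(samples)) / samples

random.seed(0)
for spp in (1, 4, 64, 1024):   # samples (rays) per pixel
    estimates = [render_pixel(spp) for _ in range(1000)]
    mean = sum(estimates) / len(estimates)
    var = sum((e - mean) ** 2 for e in estimates) / len(estimates)
    print(f"{spp:5d} spp -> std dev {var ** 0.5:.3f}")
```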

How Does Denoising Work?

Image denoising is commonly based on three techniques: spatial filtering, temporal accumulation, and machine learning and deep learning reconstruction.

Example of a spatially and temporally denoised final image.

Spatial filtering selectively alters parts of an image by reusing similar neighboring pixels. The advantage of spatial filtering is that it doesn’t produce temporal lag, which is the inability to immediately respond to changing flow conditions. However, spatial filtering introduces blurriness and muddiness, as well as temporal instability, which refers to flickering and visual imperfections in the image.
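A minimal sketch of the spatial idea using NumPy: replace each pixel with the average of its neighborhood, which suppresses noise but also blurs detail (production denoisers use edge-aware weights to limit exactly that).

```python
import numpy as np

def box_filter(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Replace each pixel with the mean of its (2*radius+1)^2 neighborhood."""
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + image.shape[0],
                          radius + dx : radius + dx + image.shape[1]]
    return out / (2 * radius + 1) ** 2

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.2, clean.shape)   # simulated 1-spp noise
print(np.abs(noisy - clean).mean(), np.abs(box_filter(noisy) - clean).mean())
```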

Temporal accumulation reuses data from the previous frame to determine if there are any artifacts — or visual anomalies — in the current frame that can be corrected. Although temporal accumulation introduces temporal lag, it doesn’t produce blurriness. Instead, it adds temporal stability to reduce flickering and artifacts over multiple frames.

Example of temporal accumulation at 20 frames.
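A minimal sketch of temporal accumulation: blend each new noisy frame into a running history with an exponential moving average. Noise averages out across frames, at the cost of some lag when the scene changes, which is why real pipelines reset or reweight the history on disocclusion.

```python
import numpy as np

def accumulate(history: np.ndarray, frame: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Exponential moving average: keep most of the history, blend in the new frame."""
    return (1.0 - alpha) * history + alpha * frame

rng = np.random.default_rng(1)
clean = np.full((64, 64), 0.5)
history = clean + rng.normal(0.0, 0.2, clean.shape)   # first noisy frame

for _ in range(20):                                    # 20 frames of accumulation
    noisy_frame = clean + rng.normal(0.0, 0.2, clean.shape)
    history = accumulate(history, noisy_frame)

print(np.abs(history - clean).mean())   # much closer to the clean signal
```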

Machine learning and deep learning reconstruction uses a neural network to reconstruct the signal. The neural network is trained using various noisy and reference signals. Though the reconstructed signal for a single frame can look complete, it can become temporally unstable over time, so a form of temporal stabilization is needed.
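As a toy version of the learned approach, the sketch below trains a small convolutional network in PyTorch to map noisy patches back to clean ones. It uses synthetic random patches purely for illustration; production denoisers train on rendered noisy/reference pairs and feed in auxiliary buffers such as normals, albedo and motion vectors.

```python
import torch
import torch.nn as nn

# Tiny convolutional denoiser: noisy 1-channel patch in, reconstructed patch out.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    clean = torch.rand(8, 1, 32, 32)                  # stand-in "reference" patches
    noisy = clean + 0.2 * torch.randn_like(clean)     # simulated render noise
    loss = loss_fn(model(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())   # reconstruction error drops as training proceeds
```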

Denoising in Images

Denoising provides users with immediate visual feedback, so they can see and interact with graphics and designs. This allows them to experiment with variables like light, materials, viewing angle and shadows.

Solutions like NVIDIA Real-Time Denoisers (NRD) make denoising techniques more accessible for developers to integrate into pipelines. NRD is a spatio-temporal denoising library that’s agnostic to application programming interfaces and designed to work with low rays per pixel.

NRD uses input signals and environmental conditions to deliver results comparable to ground-truth images. See NRD in action below:

With NRD, developers can achieve real-time results using a limited budget of rays per pixel. In the video above, viewers can see the heavy lifting that NRD does in real time to resolve image noise.

Popular games such as Dying Light 2 and Hitman III use NRD for denoising.

NRD highlighted in Techland’s Dying Light 2 Stay Human.

NRD supports the denoising of diffuse, specular or reflections, and shadow signals. The denoisers included in NRD are:

  • ReBLUR — based on the idea of self-stabilizing, recurrent blurring. It’s designed to work with diffuse and specular signals generated with low ray budgets.
  • SIGMA — a fast shadow denoiser. It supports shadows from any type of light source, like the sun and local lights.
  • ReLAX — preserves lighting details produced by NVIDIA RTX Direct Illumination, a framework that enables developers to render scenes with millions of dynamic area lights in real time. ReLAX also yields better temporal stability and remains responsive to changing lighting conditions.

See NRD in action with Hitman 3:

Learn about more technologies in game development.

NVIDIA AI Turbocharges Industrial Research, Scientific Discovery in the Cloud on Rescale HPC-as-a-Service Platform

Just like many businesses, the world of industrial scientific computing has a data problem.

Solving seemingly intractable challenges — from developing new energy sources and creating new modes of transportation, to addressing mission-critical issues such as driving operational efficiencies and improving customer support — requires massive amounts of high performance computing.

Instead of having to architect, engineer and build ever-more supercomputers, companies such as Electrolux, Denso, Samsung and Virgin Orbit are embracing benefits offered by Rescale’s cloud platform. This makes it possible to scale their accelerated computing in an energy-efficient way and to speed their innovation.

Addressing the industrial scientific community’s rising demand for AI in the cloud, NVIDIA founder and CEO Jensen Huang joined Rescale founder and CEO Joris Poort at the Rescale Big Compute virtual conference, where they announced that Rescale is adopting the NVIDIA AI software portfolio.

NVIDIA AI will bring new capabilities to Rescale’s HPC-as-a-service offerings, which include simulation and engineering software used by hundreds of customers across industries. NVIDIA is also accelerating the Rescale Compute Recommendation Engine announced today, which enables customers to identify the right infrastructure options to optimize cost and speed objectives.

“Fusing principled and data-driven methods, physics-ML AI models let us explore our design space at speeds and scales many orders of magnitude greater than ever before,” Huang said. “Rescale is at the intersection of these major trends. NVIDIA’s accelerated and AI computing platform perfectly complements Rescale to advance industrial scientific computing.”

“Engineers and scientists working on breakthrough innovations need integrated cloud platforms that put R&D software and accelerated computing at their fingertips,” said Poort. “We’ve helped customers speed discoveries and save costs with NVIDIA-accelerated HPC, and adding NVIDIA AI Enterprise to the Rescale platform will bring together the most advanced computing capabilities with the best of AI, and support an even broader range of AI-powered workflows R&D leaders can run on any cloud of their choice.”

Expanding HPC to New Horizons in the Cloud With NVIDIA AI

The companies announced that they are working to bring NVIDIA AI Enterprise to Rescale, broadening the cloud platform’s offerings to include NVIDIA-supported AI workflows and processing engines. Once it’s available, customers will be able to develop AI applications in any leading cloud, with support from NVIDIA.

NVIDIA AI Enterprise, the globally adopted software of the NVIDIA AI platform, includes essential processing engines for each step of the AI workflow, from data processing and AI model training to simulation and large-scale deployment.

NVIDIA AI enables organizations to develop predictive models to complement and expand industrial HPC research and development with applications such as computer vision, route and supply chain optimization, robotics simulations and more.

The Rescale software catalog provides access to hundreds of NVIDIA-accelerated containerized applications and pretrained AI models on NVIDIA NGC, and allows customers to run simulations on demand and scale up or down as needed.

NVIDIA Modulus to Speed Physics-Based Machine Learning

Rescale now offers the NVIDIA Modulus framework for developing physics-ML neural network models that support a broad range of engineering use cases.

Modulus blends the power of physics with data to build high-fidelity models that enable near-real-time simulations. With just a few clicks on the Rescale platform, Modulus will allow customers to run their entire AI-driven simulation workflow, from data pre-processing and model training to inference and model deployment.
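To make “blending physics with data” concrete, here is a toy physics-informed training loop written in plain PyTorch rather than the Modulus API: a small network is fitted to satisfy the ODE du/dt = -u with u(0) = 1, whose exact solution is e^(-t). Modulus packages this pattern, along with far richer geometries, PDEs and architectures, at scale.

```python
import torch
import torch.nn as nn

# Small network approximating u(t).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)        # collocation points in [0, 1]
    u = net(t)
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]

    physics_loss = ((du_dt + u) ** 2).mean()                         # residual of du/dt = -u
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()     # enforce u(0) = 1
    loss = physics_loss + boundary_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Compare the network's prediction at t = 1 against the exact solution e^(-1).
print(net(torch.ones(1, 1)).item(), torch.exp(torch.tensor(-1.0)).item())
```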

On-Prem to Cloud Workflow Orchestration Expands Flexibility

Rescale is additionally integrating the NVIDIA Base Command Platform AI developer workflow management software, which can orchestrate workloads across clouds to on-premises NVIDIA DGX systems.

Rescale’s HPC-as-a-service platform is accelerated by NVIDIA on leading cloud service provider platforms, including Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure. Rescale is a member of the NVIDIA Inception program.

To learn more, watch Huang and Poort discuss the news in the replay of the Big Compute keynote address.
