What Is NVLink?

Accelerated computing — a capability once confined to high-performance computers in government research labs — has gone mainstream.

Banks, car makers, factories, hospitals, retailers and others are adopting AI supercomputers to tackle the growing mountains of data they need to process and understand.

These powerful, efficient systems are superhighways of computing. They carry data and calculations over parallel paths on a lightning journey to actionable results.

GPU and CPU processors are the resources along the way, and their onramps are fast interconnects. The gold standard in interconnects for accelerated computing is NVLink.

So, What Is NVLink?

NVLink is a high-speed connection for GPUs and CPUs formed by a robust software protocol, typically riding on multiple pairs of wires printed on a computer board. It lets processors send and receive data from shared pools of memory at lightning speed.

A diagram showing two NVLink uses

Now in its fourth generation, NVLink connects host and accelerated processors at rates up to 900 gigabytes per second (GB/s).

That’s more than 7x the bandwidth of PCIe Gen 5, the interconnect used in conventional x86 servers. And NVLink sports 5x the energy efficiency of PCIe Gen 5, thanks to data transfers that consume just 1.3 picojoules per bit.
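
For a rough sense of what those figures mean in practice, here is a small back-of-the-envelope sketch. The PCIe Gen 5 x16 figure of roughly 128 GB/s bidirectional is an assumption added for illustration; only the NVLink numbers come from the text above.

```python
# Back-of-the-envelope comparison of NVLink 4 vs. PCIe Gen 5 (x16).
# The PCIe figure (~128 GB/s bidirectional) is an assumption for illustration;
# only the NVLink numbers come from the article above.

NVLINK4_BANDWIDTH_GBS = 900       # GB/s per GPU, fourth-generation NVLink
PCIE_GEN5_X16_GBS = 128           # GB/s, assumed bidirectional x16 link
NVLINK_ENERGY_PJ_PER_BIT = 1.3    # picojoules per bit transferred

bandwidth_ratio = NVLINK4_BANDWIDTH_GBS / PCIE_GEN5_X16_GBS
print(f"NVLink 4 vs. PCIe Gen 5 bandwidth: {bandwidth_ratio:.1f}x")   # ~7x

# Energy to move one gigabyte over NVLink at 1.3 pJ/bit.
bits_per_gb = 8 * 10**9
joules_per_gb = NVLINK_ENERGY_PJ_PER_BIT * 1e-12 * bits_per_gb
print(f"Energy per GB over NVLink: {joules_per_gb * 1000:.1f} millijoules")  # ~10.4 mJ
```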

The History of NVLink

First introduced as a GPU interconnect with the NVIDIA P100 GPU, NVLink has advanced in lockstep with each new NVIDIA GPU architecture.

A chart of the basic specifications for NVLink

In 2018, NVLink hit the spotlight in high performance computing when it debuted connecting GPUs and CPUs in two of the world’s most powerful supercomputers, Summit and Sierra.

The systems, installed at Oak Ridge and Lawrence Livermore National Laboratories, are pushing the boundaries of science in fields such as drug discovery, natural disaster prediction and more.

Bandwidth Doubles, Then Grows Again

In 2020, the third-generation NVLink doubled its max bandwidth per GPU to 600 GB/s, packing a dozen interconnects in every NVIDIA A100 Tensor Core GPU.

The A100 powers AI supercomputers in enterprise data centers, cloud computing services and HPC labs across the globe.

Today, 18 fourth-generation NVLink interconnects are embedded in a single NVIDIA H100 Tensor Core GPU. And the technology has taken on a new, strategic role that will enable the most advanced CPUs and accelerators on the planet.

A Chip-to-Chip Link

NVIDIA NVLink-C2C is a chip-to-chip version of the board-level interconnect, joining two processors inside a single package to create a superchip. For example, it connects two CPU chips to deliver 144 Arm Neoverse V2 cores in the NVIDIA Grace CPU Superchip, a processor built to deliver energy-efficient performance for cloud, enterprise and HPC users.

NVIDIA NVLink-C2C also joins a Grace CPU and a Hopper GPU to create the Grace Hopper Superchip. It packs accelerated computing for the world’s toughest HPC and AI jobs into a single chip.

Alps, an AI supercomputer planned for the Swiss National Supercomputing Centre, will be among the first to use Grace Hopper. When it comes online later this year, the high-performance system will work on big science problems in fields from astrophysics to quantum chemistry.

The Grace CPU uses NVLink-C2C
The Grace CPU packs 144 Arm Neoverse V2 cores across two dies connected by NVLink-C2C.

Grace and Grace Hopper are also great for bringing energy efficiency to demanding cloud computing workloads.

For example, Grace Hopper is an ideal processor for recommender systems. These economic engines of the internet need fast, efficient access to lots of data to serve trillions of results to billions of users daily.

A chart showing how Grace Hopper uses NVLink to deliver leading performance on recommendation systems
Recommenders get up to 4x more performance and greater efficiency using Grace Hopper than using Hopper with traditional CPUs.

In addition, NVLink is used in a powerful system-on-chip for automakers that includes NVIDIA Hopper, Grace and Ada Lovelace processors. NVIDIA DRIVE Thor is a car computer that unifies intelligent functions such as digital instrument cluster, infotainment, automated driving, parking and more into a single architecture.

LEGO Links of Computing

NVLink also acts like the socket stamped into a LEGO piece. It’s the basis for building supersystems to tackle the biggest HPC and AI jobs.

For example, NVLinks on all eight GPUs in an NVIDIA DGX system share fast, direct connections via NVSwitch chips. Together, they enable an NVLink network where every GPU in the server is part of a single system.

To get even more performance, DGX systems can themselves be stacked into modular units of 32 servers, creating a powerful, efficient computing cluster.

A picture of the DGX family of server products that use NVLink
NVLink is one of the key technologies that let users easily scale modular NVIDIA DGX systems to a SuperPOD with up to an exaflop of AI performance.

Users can connect a modular block of 32 DGX systems into a single AI supercomputer using a combination of the NVLink network inside each DGX and an NVIDIA Quantum-2 InfiniBand fabric between them. For example, an NVIDIA DGX H100 SuperPOD packs 256 H100 GPUs to deliver up to an exaflop of peak AI performance.
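
As a rough illustration of how those SuperPOD numbers compose, the sketch below multiplies systems by GPUs and applies an assumed per-GPU peak of about 3.96 petaflops of sparse FP8 throughput; that per-GPU figure is an assumption for illustration, not a quoted specification.

```python
# How the SuperPOD numbers above compose. The per-GPU FP8 figure (~3.96 PFLOPS
# with sparsity) is an assumption used for illustration, not a quoted spec.

DGX_SYSTEMS = 32
GPUS_PER_DGX = 8
H100_FP8_PFLOPS = 3.96            # assumed peak FP8 Tensor Core throughput, sparse

total_gpus = DGX_SYSTEMS * GPUS_PER_DGX                 # 256 GPUs
peak_ai_exaflops = total_gpus * H100_FP8_PFLOPS / 1000  # ~1 exaflop

print(f"{total_gpus} GPUs, ~{peak_ai_exaflops:.2f} exaflops of peak AI performance")
```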

To get even more performance, users can tap into the AI supercomputers in the cloud such as the one Microsoft Azure is building with tens of thousands of A100 and H100 GPUs. It’s a service used by groups like OpenAI to train some of the world’s largest generative AI models.

And it’s one more example of the power of accelerated computing.


GeForce NOW Springs Into March With 19 New Games in the Cloud, Including ‘Disney Dreamlight Valley’

March is already here and a new month always means new games, with a total of 19 joining the GeForce NOW library.

Set off on a magical journey to restore Disney magic when Disney Dreamlight Valley joins the cloud later this month. Plus, the hunt is on with Capcom’s Monster Hunter Rise now available for all members to stream, as is major new content for Battlefield 2042 and Destiny 2.

Stay tuned to GFN Thursday for future updates on the first Microsoft titles coming to GeForce NOW.

Once Upon a Time in the Cloud

Disney Dreamlight Valley on GeForce NOW
Live the Disney dream life in the cloud.

Embark on a dream adventure when Disney Dreamlight Valley from Gameloft releases in the cloud on Thursday, March 16. In this life-sim adventure game, Disney and Pixar characters live in harmony until the Forgetting threatens to destroy the wonderful memories created by its inhabitants. Help restore Disney magic to the Valley and go on an enchanting journey — full of quests, exploration and beloved Disney and Pixar friends.

Live the Disney dream life while collecting thousands of decorative items inspired by Disney and Pixar worlds to personalize gamers’ own unique homes in the Valley. The game’s latest free update, “A Festival of Friendship,” brings even more features, items and characters to interact with.

Disney fans of all ages will enjoy seeing their favorite characters, from Disney Encanto’s Mirabel to The Lion King’s Scar, throughout the game when it launches in the cloud later this month. Members can jump onto their PC, Mac and other devices to start the adventure without having to worry about download times, system requirements or storage space.

March Madness

Starting off the month is Capcom’s popular action role-playing game Monster Hunter Rise: Sunbreak, including Free Title Update 4, which brings the return of the Elder Dragon Velkhana, lord of the tundra that freezes all in its path. The game is now available for GeForce NOW members to stream, so new and returning Hunters can seamlessly bring their monster hunting careers to the cloud.  

Battlefield Season 4 on GeForce NOW
Dominate the battlefield.

New content is also available for members to stream this week for blockbuster titles. Eleventh Hour is the latest season release for Battlefield 2042, including a new map, specialist, weapon and vehicle to help players dominate the battle.

Destiny 2 Lightfall on GeForce NOW
Eyes up, Guardians.

Lightfall, Destiny 2’s latest expansion following last year’s The Witch Queen, brings Guardians one step closer to the conclusion of the “Light and Darkness saga.” Experience a brand new campaign, Exotic gear and weapons, a new six-player raid, and more as players prepare for the beginning of the end.

On top of all that, here are the three new games being added this week:

Here’s what the rest of March looks like:

  • Hotel Renovator (New release on Steam, Mar. 7)
  • Clash: Artifacts of Chaos (New release on Steam, Mar. 9)
  • Figment 2: Creed Valley (New release on Steam, Mar. 9)
  • Monster Energy Supercross – The Official Videogame 6 (New release on Steam, Mar. 9)
  • Big Ambitions (New release on Steam, Mar. 10)
  • The Legend of Heroes: Trails to Azure (New release on Steam, Mar. 14)
  • Smalland: Survive the Wilds (New release on Steam, Mar. 29)
  • Ravenbound (New release on Steam, Mar. 30)
  • DREDGE (New release on Steam, Mar. 30)
  • The Great War: Western Front (New release on Steam, Mar. 30)
  • System Shock (New release on Steam and Epic Games Store)
  • Amberial Dreams (Steam)
  • Disney Dreamlight Valley (Steam and Epic Games Store)
  • No One Survived (Steam)
  • Symphony of War: The Nephilim Saga (Steam)
  • Tower of Fantasy (Steam)

Extra, Extra!

While February is the shortest month, there was no shortage of games. Four extra games were added to the cloud for GeForce NOW members on top of the 25 games announced:

A few games announced didn’t make it into February due to shifts in their release dates, including Above Snakes and Heads Will Roll: Reforged. Command & Conquer Remastered Collection was removed from GeForce NOW on March 1 due to a technical issue. Additionally, PERISH and the Dark and Darker playtest didn’t make it to the cloud this month. Look for updates in a future GFN Thursday on some of these titles.

Finally, we’ve got a question to start your weekend gaming adventures. Let us know your answer in the comments below or on Twitter and Facebook.


What Is Confidential Computing?

Cloud and edge networks are setting up a new line of defense, called confidential computing, to protect the growing wealth of data users process in those environments.

Confidential Computing Defined

Confidential computing is a way of protecting data in use, for example while in memory or during computation, and preventing anyone from viewing or altering the work.

Using cryptographic keys linked to the processors, confidential computing creates a trusted execution environment, or secure enclave. That safe digital space supports a cryptographically signed proof, called attestation, that the hardware and firmware are correctly configured to prevent the viewing or alteration of a user’s data or application code.

In the language of security specialists, confidential computing provides assurances of data and code privacy as well as data and code integrity.

What Makes Confidential Computing Unique?

Confidential computing is a relatively new capability for protecting data in use.

For many years, computers have used encryption to protect data that’s in transit on a network and data at rest, stored in a drive or non-volatile memory chip. But with no practical way to run calculations on encrypted data, users faced a risk of having their data seen, scrambled or stolen while it was in use inside a processor or main memory.

With confidential computing, systems can now cover all three legs of the data-lifecycle stool, so data is never in the clear.

Confidential computing protects data in use
Confidential computing adds a new layer in computer security — protecting data in use while running on a processor.

In the past, computer security mainly focused on protecting data on systems users owned, like their enterprise servers. In this scenario, it’s okay that system software sees the user’s data and code.

With the advent of cloud and edge computing, users now routinely run their workloads on computers they don’t own.  So confidential computing flips the focus to protecting the users’ data from whoever owns the machine.

With confidential computing, software running on the cloud or edge computer, like an operating system or hypervisor, still manages work. For example, it allocates memory to the user program, but it can never read or alter the data in memory allocated by the user.

How Confidential Computing Got Its Name

A 2015 research paper was one of several using the then-new Software Guard Extensions (Intel SGX) in x86 CPUs to show what’s possible. It called its approach VC3, for Verifiable Confidential Cloud Computing, and the name — or at least part of it — stuck.

“We started calling it confidential cloud computing,” said Felix Schuster, lead author on the 2015 paper.

Four years later, Schuster co-founded Edgeless Systems, a company in Bochum, Germany, that develops tools so users can create their own confidential-computing apps to improve data protection.

Confidential computing is “like attaching a contract to your data that only allows certain things to be done with it,” he said.

How Does Confidential Computing Work?

Taking a deeper look, confidential computing sits on a foundation called a root of trust, which is based on a secured key unique to each processor.

The processor checks it has the right firmware to start operating with what’s called a secure, measured boot. That process spawns reference data, verifying the chip is in a known safe state to start work.

Next, the processor establishes a secure enclave or trusted execution environment (TEE) sealed off from the rest of the system where the user’s application runs. The app brings encrypted data into the TEE, decrypts it, runs the user’s program, encrypts the result and sends it off.
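
The sketch below illustrates that flow in Python using the cryptography package's Fernet symmetric cipher. It is a conceptual sketch only: the enclave function and shared key are hypothetical stand-ins, and real TEEs such as Intel SGX, AMD SEV-SNP or Intel TDX establish keys through hardware attestation rather than a constant in the program.

```python
# Conceptual sketch of the data flow described above: encrypted data enters the
# enclave, is decrypted and processed inside, and only an encrypted result leaves.
# Names like run_inside_enclave() are hypothetical; real TEEs (SGX, SEV-SNP, TDX)
# establish the session key through hardware attestation, not a shared constant.

from cryptography.fernet import Fernet

session_key = Fernet.generate_key()   # in practice: derived after attestation
channel = Fernet(session_key)

def run_inside_enclave(ciphertext: bytes) -> bytes:
    """Work that conceptually happens inside the trusted execution environment."""
    plaintext = channel.decrypt(ciphertext)      # data is in the clear only here
    result = plaintext.upper()                   # stand-in for the user's program
    return channel.encrypt(result)               # result leaves encrypted

# The machine owner only ever sees ciphertext on the way in and out.
encrypted_input = channel.encrypt(b"sensitive patient record")
encrypted_output = run_inside_enclave(encrypted_input)
print(channel.decrypt(encrypted_output))         # b'SENSITIVE PATIENT RECORD'
```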

At no time could the machine owner view the user’s code or data.

One other piece is crucial: It proves to the user no one could tamper with the data or software.

How attestation works in confidential computing
Attestation uses a private key to create security certificates stored in public logs. Users can access them with the web’s transport layer security (TLS) to verify confidentiality defenses are intact, protecting their workloads.

The proof is delivered through a multi-step process called attestation (see diagram above).
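
A minimal sketch of the verification step is shown below, assuming an invented report layout and an Ed25519 signing key; real attestation formats such as SGX DCAP quotes or SEV-SNP reports differ, so treat this purely as an illustration of the idea.

```python
# Minimal sketch of the verification step in attestation: check that a quoted
# measurement matches the expected value and that the quote is signed by a key
# rooted in the hardware vendor. The report layout here is invented for
# illustration; real schemes (SGX DCAP, SEV-SNP, TDX) define their own formats.

import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def verify_attestation(report: bytes, signature: bytes,
                       vendor_key: ed25519.Ed25519PublicKey,
                       expected_measurement: bytes) -> bool:
    try:
        vendor_key.verify(signature, report)     # proof the hardware produced it
    except InvalidSignature:
        return False
    quoted_measurement = report[:32]             # assumed layout: first 32 bytes
    return quoted_measurement == expected_measurement

# The expected measurement is a hash of the firmware and code the user intends to run.
expected = hashlib.sha256(b"firmware-blob" + b"application-code").digest()
```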

The good news is researchers and commercially available services have demonstrated confidential computing works, often providing data security without significantly impacting performance.

Diagram of how confidential computing works
A high-level look at how confidential computing works.

Shrinking the Security Perimeters

As a result, users no longer need to trust all the software and systems administrators in separate cloud and edge companies at remote locations.

Confidential computing closes many doors hackers like to use. It isolates programs and their data from attacks that could come from firmware, operating systems, hypervisors, virtual machines — even physical interfaces like a USB port or PCI Express connector on the computer.

The new level of security promises to reduce data breaches that rose from 662 in 2010 to more than 1,000 by 2021 in the U.S. alone, according to a report from the Identity Theft Resource Center.

That said, no security measure is a panacea, but confidential computing is a great security tool, placing control directly in the hands of “data owners”.

Use Cases for Confidential Computing

Users with sensitive datasets and regulated industries like banks, healthcare providers and governments are among the first to use confidential computing. But that’s just the start.

Because it protects sensitive data and intellectual property, confidential computing will let groups feel they can collaborate safely. They share an attested proof that their content and code were secured.

Example applications for confidential computing include:

  • Companies executing smart contracts with blockchains
  • Research hospitals collaborating to train AI models that analyze trends in patient data
  • Retailers, telecom providers and others at the network’s edge, protecting personal information in locations where physical access to the computer is possible
  • Software vendors distributing products that include AI models and proprietary algorithms while preserving their intellectual property

While confidential computing is getting its start in public cloud services, it will spread rapidly.

Users need confidential computing to protect edge servers in unattended or hard-to-reach locations. Enterprise data centers can use it to guard against insider attacks and protect one confidential workload from another.

growth forecast for confidential computing
Market researchers at Everest Group estimate the available market for confidential computing could grow 26x in five years.

So far, most users are in a proof-of-concept stage with hopes of putting workloads into production soon, said Schuster.

Looking forward, confidential computing will not be limited to special-purpose or sensitive workloads. It will be used broadly, like the cloud services hosting this new level of security.

Indeed, experts predict confidential computing will become as widely used as encryption.

The technology’s potential motivated vendors in 2019 to launch the Confidential Computing Consortium, part of the Linux Foundation. CCC’s members include processor and cloud leaders as well as dozens of software companies.

The group’s projects include the Open Enclave SDK, a framework for building trusted execution environments.

“Our biggest mandate is supporting all the open-source projects that are foundational parts of the ecosystem,” said Jethro Beekman, a member of the CCC’s technical advisory council and vice president of technology at Fortanix, one of the first startups founded to develop confidential computing software.

“It’s a compelling paradigm to put security at the data level, rather than worry about the details of the infrastructure — that should result in not needing to read about data breaches in the paper every day,” said Beekman, who wrote his 2016 Ph.D. dissertation on confidential computing.

Chart of companies active in confidential computing
A growing sector of security companies is working in confidential computing and adjacent areas. (Source: GradientFlow)

How Confidential Computing Is Evolving

Implementations of confidential computing are evolving rapidly.

At the CPU level, AMD has released Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP). It extends the process-level protection in Intel SGX to full virtual machines, so users can implement confidential computing without needing to rewrite their applications.

Top processor makers have aligned on supporting this approach. Intel’s support comes via its new Trust Domain Extensions (TDX). Arm has described its implementation, called Realms.

Proponents of the RISC-V processor architecture are implementing confidential computing in an open-source project called Keystone.

Accelerating Confidential Computing

NVIDIA is bringing GPU acceleration to VM-style confidential computing with its Hopper architecture GPUs.

The H100 Tensor Core GPUs enable confidential computing for a broad swath of AI and high performance computing use cases. This gives users of these security services access to accelerated computing.

How GPUs and CPUs collaborate in NVIDIA's implementation of confidential computing
An example of how GPUs and CPUs work together to deliver an accelerated confidential computing service.

Meanwhile, cloud service providers are offering services today based on one or more of the underlying technologies or their own unique hybrids.

What’s Next for Confidential Computing

Over time, industry guidelines and standards will emerge and evolve for aspects of confidential computing such as attestation and efficient, secure I/O, said Beekman of CCC.

While it’s a relatively new privacy tool, confidential computing’s ability to protect code and data and provide guarantees of confidentiality makes it a powerful one.

Looking ahead, experts expect confidential computing will be blended with other privacy methods like fully homomorphic encryption (FHE), federated learning, differential privacy, and other forms of multiparty computing.

Using all the elements of the modern privacy toolbox will be key to success as demand for AI and privacy grows.

So, there are many moves ahead in the great chess game of security to overcome the challenges and realize the benefits of confidential computing.

Take a Deeper Dive

To learn more, watch “Hopper Confidential Computing: How it Works Under the Hood,” session S51709 at GTC on March 22 or later (free with registration).

Check out “Confidential Computing: The Developer’s View to Secure an Application and Data on NVIDIA H100,” session S51684 on March 23 or later.

You also can attend a March 15 panel discussion at the Open Confidential Computing Conference moderated by Schuster and featuring Ian Buck, NVIDIA’s vice president of hyperscale and HPC. And watch the video below.


Glean Founders Talk AI-Powered Enterprise Search

The quest for knowledge at work can feel like searching for a needle in a haystack. But what if the haystack itself could reveal where the needle is?

That’s the promise of large language models, or LLMs, the subject of this week’s episode of the NVIDIA AI Podcast featuring DeeDee Das and Eddie Zhou, founding engineers at Silicon Valley-based startup Glean, in conversation with our host, Noah Kravitz.

With LLMs, the haystack can become a source of intelligence, helping guide knowledge workers on what they need to know.

Glean is focused on providing better tools for enterprise search by indexing everything employees have access to in the company, including Slack, Confluence, GSuite and much more. The company raised a Series C financing round last year, valuing it at $1 billion.

Large language models can provide a comprehensive view of the enterprise and its data, which makes finding the information needed to get work done easier.

In the podcast, Das and Zhou discuss the challenges and opportunities of bringing LLMs into the enterprise, and how this technology can help people spend less time searching and more time working.

You Might Also Like

Sequoia Capital’s Pat Grady and Sonya Huang on Generative AI

Pat Grady and Sonya Huang, partners at Sequoia Capital, join the AI Podcast to discuss their recent essay, “Generative AI: A Creative New World.” The authors delve into the potential of generative AI to enable new forms of creativity and expression, as well as the challenges and ethical considerations of this technology. They also offer insights into the future of generative AI.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast on Your Favorite Platform

You can now listen to the AI Podcast through Amazon Music, Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

 

 


Generative AI at GTC: Dozens of Sessions to Feature Luminaries Speaking on Tech’s Hottest Topic

As the meteoric rise of ChatGPT demonstrates, generative AI can unlock enormous potential for companies, teams and individuals. 

Whether simplifying time-consuming tasks or accelerating 3D workflows to boost creativity and productivity, generative AI is already making an impact across industries — and there’s much more to come.

How generative AI is paving the way for the future will be a key topic at NVIDIA GTC, a free, global conference for the era of AI and the metaverse, taking place online March 20-23. 

Dozens of sessions will dive into topics around generative AI — from conversational text to the creation of virtual worlds from images. Here’s a sampling: 

Many more sessions on generative AI are available to explore at GTC, and registration is free. Join to discover the latest AI technology innovations and breakthroughs.

Featured image courtesy of Refik Anadol.


Fusion Reaction: How AI, HPC Are Energizing Science

Brian Spears says his children will enjoy a more sustainable planet, thanks in part to AI and high performance computing (HPC) simulations.

“I believe I’ll see fusion energy in my lifetime, and I’m confident my daughters will see a fusion-powered world,” said the 45-year-old principal investigator at Lawrence Livermore National Laboratory who helped demonstrate the physics of the clean and abundant power source, making headlines worldwide.

Results from the experiment hit Spears’ inbox at 5:30 a.m. on Dec. 5 last year.

“I had to rub my eyes to make sure I wasn’t misreading the numbers,” he recalled.

A Nuclear Family  

Once he assured himself, he scurried downstairs to share the news with his wife, a chemical engineer at the lab who’s pioneering ways to 3D print glass, and also once worked on the fusion program.

LLNL principal investigator Brian Spears
Brian Spears

“One of my friends described us as a Star Trek household — I work on the warp core and she works on the replicator,” he quipped.

In a tweet storm after the lab formally announced the news, Spears shared his excitement with the world.

“Exhausted by an amazing day … Daughters sending me screenshots with breaking news about Mom and Dad’s work … Being a part of something amazing for humanity.”

In another tweet, he shared the technical details.

“Used two million joules of laser energy to crush a capsule 100x smoother than a mirror. It imploded to half the thickness of a hair. For 100 trillionths of a second, we produced ten petawatts of power. It was the brightest thing in the solar system.”

AI Helps Call the Shots

A week before the experiment, Spears’ team analyzed its precision HPC design, then predicted the result with AI. Two atoms would fuse into one, releasing energy in a process simply called ignition.

It was the most exciting of thousands of AI predictions in what’s become the two-step dance of modern science. Teams design experiments in HPC simulations, then use data from the actual results to train AI models that refine the next simulation.

AI uncovers details about the experiments hard for humans to see. For example, it tracked the impact of minute imperfections in the imploding capsule researchers blasted with 192 lasers to achieve fusion.
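
The sketch below illustrates that simulate-then-learn loop in miniature, with made-up design parameters and a generic regressor standing in for the lab's actual physics codes and models.

```python
# Minimal sketch of the simulate-then-learn loop described above, with made-up
# design parameters and a generic regressor standing in for the lab's models.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in for HPC simulation outputs: each row is a capsule design
# (e.g., laser energy, shell thickness, surface roughness) and a fusion yield.
designs = rng.uniform(0.0, 1.0, size=(5000, 3))
yields = np.exp(-((designs - 0.5) ** 2).sum(axis=1) * 20)   # toy response surface

surrogate = RandomForestRegressor(n_estimators=200).fit(designs, yields)

# Use the surrogate to pick promising designs for the next round of simulations.
candidates = rng.uniform(0.0, 1.0, size=(100_000, 3))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("next design to simulate:", best)
```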

LLNL nuclear fusion experiment explained
A look inside the fusion experiment. Graphic courtesy of Lawrence Livermore National Laboratory.

“You need AI to understand the complete picture,” Spears said.

It’s a big canvas, filled with math describing the complex details of atomic physics.

A single experiment can require hundreds of thousands of relatively small simulations. Each takes a half day on a single node of a supercomputer.

The largest 3D simulations — called the kitchen sinks — consume about half of Sierra, the world’s sixth fastest HPC system, packing 17,280 NVIDIA GPUs.

Edge AI Guides Experiments

AI also helps scientists create self-driving experiments. Neural networks can make split-second decisions about which way to take an experiment based on results they process in real time.

For example, Spears, his colleagues and NVIDIA collaborated on an AI-guided experiment last year that fired lasers up to three times a second. It created the kind of proton beams that could someday treat a cancer patient.

“In the course of a day, you can get the kind of bright beam that may have taken you months or years of human-designed experiments,” Spears said. “This approach of AI at the edge will save orders of magnitude of time for our subject-matter experts.”

Directing lasers fired many times a second will also be a key job inside tomorrow’s nuclear fusion reactors.

Navigating the Data Deluge

AI’s impacts will be felt broadly across both scientific and industrial fields, Spears believes.

“Over the last decade we’ve produced more simulation and experimental data than we’re trained to deal with,” he said.

That deluge, once a burden for scientists, is now fuel for machine learning.

“AI is putting scientists back in the driver seat so we can move much more quickly,” he said.

Brian Spears interviewed on nuclear fusion experiment
Spears explained the ignition result in an interview (starting 8:19) with Government Matters.

Spears also directs an AI initiative at the lab that depends on collaborations with companies including NVIDIA.

“NVIDIA helps us look over the horizon, so we can take the next step in using AI for science,” he said.

A Brighter Future

It’s hard work with huge impacts, like leaving a more resilient planet for the next generation.

Asked whether his two daughters plan a career in science, Spears beams. They’re both competitive swimmers who play jazz trumpet with interests in everything from bioengineering to art.

“As we say in science, they’re four pi, they cover the whole sky,” he said.


Flawless Fractal Food Featured This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows.

ManvsMachine steps In the NVIDIA Studio this week to share insights behind fractal art, which uses algorithms to artistically render calculations derived from geometric objects as digital images and animations.

Ethos Reflected

Founded in London in 2007, ManvsMachine is a multidimensional creative company specializing in design, film and visual arts.

 

ManvsMachine works closely with the world’s leading brands and agencies, including Volvo, Adidas, Nike and more, to produce award-winning creative content.

 

The team at ManvsMachine finds inspiration from a host of places: nature and wildlife, conversations, films, documentaries, as well as new and historic artists of all mediums.

Fractal Food

For fans of romanesco broccoli, the edible flower bud resembling cauliflower in texture and broccoli in taste might conjure mild, nutty, sweet notes that lend well to savory pairings. For ManvsMachine, it presented an artistic opportunity.

Romanesco broccoli is the inspiration behind ‘Roving Romanesco.’

The Roving Romanesco animation started out as a series of explorations based on romanesco broccoli, a prime example of a fractal found in nature.

ManvsMachine’s goal was to find an efficient way of recreating it in 3D and generate complex geometry using a simple setup.

The genesis of the animation revolved around creating a phyllotaxis pattern, an arrangement of leaves on a plant stem, using the high-performance expression language VEX in SideFX’s Houdini software.

Points offset at 137.5 degrees, known as the golden angle.

This was achieved by creating numerous points and offsetting each from the previous one by 137.5 degrees, known as the golden or “perfect circular” angle, while moving outward from the center. The built-in RTX-accelerated Karma XPU renderer enabled fast simulations, powered by the team’s GeForce RTX 3090 GPUs.
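
For readers who want to experiment outside Houdini, the sketch below reproduces the same golden-angle placement in Python; the square-root radial spacing is a common choice (Vogel's model) rather than a detail taken from ManvsMachine's setup.

```python
# A minimal sketch of the golden-angle point placement described above, in Python
# rather than Houdini VEX: each point is rotated 137.5 degrees from the last
# while stepping outward from the center.

import math

GOLDEN_ANGLE_DEG = 137.5

def phyllotaxis_points(count: int, spacing: float = 1.0):
    points = []
    for i in range(count):
        angle = math.radians(i * GOLDEN_ANGLE_DEG)
        radius = spacing * math.sqrt(i)     # sqrt spacing keeps point density even
        points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points

# A few hundred points roughly reproduce the seed-head pattern the florets sit on.
pattern = phyllotaxis_points(500)
```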

Individual florets begin to form.

The team added simple height and width to the shapes using ramp controls then copied geometry onto those points inside a loop.

Romanesco broccoli starts to come together.

With the basic structure intact, ManvsMachine sculpted florets individually to create a stunning 3D model in the shape of romanesco broccoli. The RTX-accelerated Karma XPU renderer dramatically sped up animations of the shape, as well.

“Creativity is enhanced by faster ray-traced rendering, smoother 3D viewports, quicker simulations and AI-enhanced image denoising upscaling — all accelerated by NVIDIA RTX GPUs.” — ManvsMachine

The project was then imported to Foundry’s Nuke software for compositing and final touch-ups. When pursuing a softer look, ManvsMachine counteracted the complexity of the animation with some “easy-on-the-eyes” materials and color choices with a realistic depth of field.

Many advanced nodes in Nuke are GPU accelerated, which gave the team another speed advantage.

Projects like Roving Romanesco represent the high-quality work ManvsMachine strives to deliver for clients.

“Our ethos is reflected in our name,” said ManvsMachine. “Equal importance is placed on ideas and execution. Rather than sell an idea and then work out how to make it later, the preference is to present clients with the full picture, often leading with technique to inform the creative.”

Designers, directors, visual effects artists and creative producers — team ManvsMachine.

Check out @man.vs.machine on Instagram for more inspirational work.

Artists looking to hone their Houdini skills can access Studio Shortcuts and Sessions on the NVIDIA Studio YouTube channel. Discover exclusive step-by-step tutorials from industry-leading artists, watch inspiring community showcases and more, powered by NVIDIA Studio hardware and software.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.


Pixel Perfect: RTX Video Super Resolution Now Available for GeForce RTX 40 and 30 Series GPUs

Streaming video on PCs through Google Chrome and Microsoft Edge browsers is getting a GeForce RTX-sized upgrade today with the release of RTX Video Super Resolution (VSR).

Nearly 80% of internet bandwidth today is streaming video. And 90% of that content streams at 1080p or lower, including from popular sources like Twitch.tv, YouTube, Netflix, Disney+ and Hulu.

However, when viewers use displays higher than 1080p — as do many PC users — the browser must scale the video to match the resolution of the display. Most browsers use basic upscaling techniques, which result in final images that are soft or blurry.

With RTX VSR, GeForce RTX 40 and 30 Series GPU users can tap AI to upscale lower-resolution content up to 4K, matching their display resolution. The AI removes blocky compression artifacts and improves the video’s sharpness and clarity.

Just like putting on a pair of prescription glasses can instantly snap the world into focus, RTX Video Super Resolution gives viewers on GeForce RTX 40 and 30 Series PCs a clear picture into the world of streaming video.

RTX VSR is available now as part of the latest GeForce Game Ready Driver, which delivers the best experience for new game launches like Atomic Heart and THE FINALS closed beta.

The Evolution of AI Upscaling

AI upscaling is the process of converting lower-resolution media to a higher resolution by putting low-resolution images through a deep learning model to predict the high-resolution versions. To make these predictions with high accuracy, a neural network model must be trained on countless images at different resolutions.

4K displays can muddy visuals by having to stretch lower-resolution images to fit their screen. Using AI to upscale streamed video makes lower-resolution images fit with unrivaled crispness.

The deployed AI model can then take low-resolution video and produce incredible sharpness and enhanced details that no traditional scaler can recreate. Edges look sharper, hair looks scruffier and landscapes pop with striking clarity.

In 2019, an early version of this technology was released with SHIELD TV. It was a breakthrough that improved streamed content targeted for TVs, mostly ranging from 480p to 1080p, and optimized for a 10-foot viewing experience.

PC viewers are typically seated much closer than TV viewers to their displays, requiring a higher level of processing and refinement for upscaling. With GeForce RTX 40 and 30 Series GPUs, users now have extremely powerful AI processors with Tensor Cores, enabling a new generation of AI upscaling through RTX VSR.

How RTX Video Super Resolution Works

RTX VSR is a breakthrough in AI pixel processing that dramatically improves the quality of streamed video content beyond edge detection and feature sharpening.

Blocky compression artifacts are a persistent issue in streamed video. Whether the fault of the server, the client or the content itself, issues often become amplified with traditional upscaling, leaving a less pleasant visual experience for those watching streamed content.

Click the image to see the differences between bicubic upscaling (left) and RTX Video Super Resolution.

RTX VSR reduces or eliminates artifacts caused by compressing video — such as blockiness, ringing artifacts around edges, washout of high-frequency details and banding on flat areas — while reducing lost textures. It also sharpens edges and details.

The technology uses a deep learning network that performs upscaling and compression artifact reduction in a single pass. The network analyzes the lower-resolution video frame and predicts the residual image at the target resolution. This residual image is then superimposed on top of a traditional upscaled image, correcting artifact errors and sharpening edges to match the output resolution.
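
The toy sketch below illustrates that residual idea in PyTorch; it is not NVIDIA's model, just a minimal network that predicts a correction on top of a bicubic upscale.

```python
# Toy sketch of the residual approach described above, not NVIDIA's actual model:
# a small network predicts a residual image at the target resolution, which is
# added to a conventionally upscaled frame.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        # Traditional upscale (bicubic) plus a learned residual correction.
        base = F.interpolate(low_res, scale_factor=self.scale, mode="bicubic",
                             align_corners=False)
        residual = self.net(base)
        return (base + residual).clamp(0.0, 1.0)

frame = torch.rand(1, 3, 540, 960)           # a 960x540 video frame
upscaled = ResidualUpscaler(scale=2)(frame)  # -> 1920x1080
```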

The deep learning network is trained on a wide range of content with various compression levels. It learns about types of compression artifacts present in low-resolution or low-quality videos that are otherwise absent in uncompressed images as a reference for network training. Extensive visual evaluation is employed to ensure that the generated model is effective on nearly all real-world and gaming content.

Getting Started

RTX VSR requires a GeForce RTX 40 or 30 Series GPU and works with nearly all content streamed in Google Chrome and Microsoft Edge.

The feature also requires updating to the latest GeForce Game Ready Driver, available today, or the next NVIDIA Studio Driver releasing in March. Both Chrome (version 110.0.5481.105 or higher) and Edge (version 110.0.1587.56) have updated recently to support RTX VSR.

To enable it, launch the NVIDIA Control Panel and open “Adjust video image settings.” Check the super resolution box under “RTX video enhancement” and select a quality from one to four — ranging from the lowest impact on GPU performance to the highest level of upscaling improvement.

Learn more, including other setup configurations, in this NVIDIA Knowledge Base article.


NVIDIA Chief Scientist Inducted Into Silicon Valley’s Engineering Hall of Fame

From scaling mountains in the annual California Death Ride bike challenge to creating a low-cost, open-source ventilator in the early days of the COVID-19 pandemic, NVIDIA Chief Scientist Bill Dally is no stranger to accomplishing near-impossible feats.

On Friday, he achieved another rare milestone: induction into the Silicon Valley Engineering Council’s Hall of Fame.

The aim of the council — a coalition of engineering societies, including the Institute of Electrical and Electronics Engineers, SAE International and the Association for Computing Machinery — is to promote engineering programs and enhance society through science.

Since 1990, its Hall of Fame has honored engineers who have accomplished significant professional achievements while serving their profession and the wider community.

Previous inductees include industry luminaries such as Intel founders Robert Noyce and Gordon Moore, former president of Stanford University and MIPS founder John Hennessy, and Google distinguished engineer and professor emeritus at UC Berkeley David Patterson.

Recognizing ‘an Industry Leader’

In accepting the distinction, Dally said, “I am honored to be inducted into the Silicon Valley Hall of Fame. The work for which I am being recognized is part of a large team effort. Many faculty and students participated in the stream processing research at Stanford, and a very large team at NVIDIA was involved in translating this research into GPU computing. It is a really exciting time to be a computer engineer.”

“The future is bright with a lot more demanding applications waiting to be accelerated using the principles of stream processing and accelerated computing.”

His induction kicked off with a video featuring colleagues and friends, spanning his career across Caltech, MIT,  Stanford and NVIDIA.

In the video, NVIDIA founder and CEO Jensen Huang describes Dally as “an extraordinary scientist, engineer, leader and amazing person.”

Fei-Fei Li, professor of computer science at Stanford and co-director of the Stanford Institute for Human-Centered AI, commended Dally’s journey “from an academic scholar and a world-class researcher to an industry leader” who is spearheading one of the “biggest digital revolutions of our time in terms of AI — both software and hardware.”

Following the tribute video, Fred Barez, chair of the Hall of Fame committee and professor of mechanical engineering at San Jose State University, took the stage. He said of Dally: “This year’s inductee has made significant contributions, not just to his profession, but to Silicon Valley and beyond.”

Underpinning the GPU Revolution

As the leader of NVIDIA Research for nearly 15 years, Dally has built a team of more than 300 scientists around the globe, with groups covering a wide range of topics, including AI, graphics, simulation, computer vision, self-driving cars and robotics.

Prior to NVIDIA, Dally advanced the state of the art in engineering at some of the world’s top academic institutions. His development of stream processing at Stanford led directly to GPU computing, and his contributions are responsible for much of the technology used today in high-performance computing networks.


NVIDIA Unveils GPU-Accelerated AI-on-5G System for Edge AI, 5G and Omniverse Digital Twins

Telcos are seeking industry-standard solutions that can run 5G, AI applications and immersive graphics workloads on the same server — including for computer vision and the metaverse.

To meet this need, NVIDIA is developing a new AI-on-5G solution that combines 5G vRAN, edge AI and digital twin workloads on an all-in-one, hyperconverged and GPU-accelerated system.

The lower cost of ownership enabled by such a system would help telcos drive revenue growth in smart cities, as well as the retail, entertainment and manufacturing industries, to support a multitrillion-dollar, 5G-enabled ecosystem.

The AI-on-5G system consists of:

  • Fujitsu’s virtualized 5G Open RAN product suite, which was developed as part of the 5G Open RAN ecosystem experience (OREX) project promoted by NTT DOCOMO. It also includes Fujitsu’s virtualized central unit (vCU) and distributed unit (vDU), plus other virtualized software functions of vRAN from Fujitsu.
  • The NVIDIA Aerial™ software development kit for 5G vRAN; NVIDIA Omniverse for building and operating custom 3D pipelines and large-scale simulations; NVIDIA RTX Virtual Workstation (vWS) software; and NVIDIA CloudXR for streaming extended reality.
  • Hardware includes the NVIDIA A100X and L40 converged accelerators.

OREC has supported performance verification and evaluation tests for this system.

Collaborating With Fujitsu

“Fujitsu is delivering a fully virtualized 5G vRAN together with multi-access edge computing on the same high-performance, energy-efficient, versatile and scalable computing infrastructure,” said Masaki Taniguchi, senior vice president and head of mobile systems at Fujitsu. “This combination, powered by AI and XR applications, enables telcos to deliver ultra-low latency services, highly optimized TCO and energy-efficient performance.”

The announcement is a step toward accomplishing the O-RAN alliance’s goal of enabling software-defined, AI-driven, cloud-native, fully programmable, energy-efficient and commercially ready telco-grade 5G Open RAN solutions. It’s also consistent with OREC’s goal of implementing a widely adopted, high-performance and multi-vendor 5G vRAN for both public and enterprise 5G deployments.

The all-in-one system uses GPUs to accelerate the software-defined 5G vRAN, as well as the edge AI and graphics applications, without bespoke hardware accelerators or a specific telecom CPU. This ensures that the GPUs can accelerate the vRAN (based on NVIDIA Aerial), AI video analytics (based on NVIDIA Metropolis), streaming immersive extended reality (XR) experiences (based on NVIDIA CloudXR) and digital twins (based on NVIDIA Omniverse).

“Telcos and their customers are exploring new ways to boost productivity, efficiency and creativity through immersive experiences delivered over 5G networks,” said Ronnie Vasishta, senior vice president of telecom at NVIDIA. “At Mobile World Congress, we are bringing those visions into reality, showcasing how a single GPU-enabled server can support workloads such as NVIDIA Aerial for 5G, CloudXR for streaming virtual reality and Omniverse for digital twins.”

The AI-on-5G system is part of a growing portfolio of 5G solutions from NVIDIA that are driving transformation in the telecommunications industry. Anchored on the NVIDIA Aerial SDK and A100X converged accelerators — combined with BlueField DPUs and a suite of AI frameworks — NVIDIA provides a high-performance, software-defined, cloud-native, AI-enabled 5G for on-premises and telco operators’ RAN.

Telcos working with NVIDIA can gain access to thousands of software vendors and applications in the ecosystem, which can help address enterprise needs in smart cities, retail, manufacturing, industrial and mining.

NVIDIA and Fujitsu will demonstrate the new AI-on-5G system at Mobile World Congress in Barcelona, running Feb. 27-March 2, at hall 4, stand 4E20.
