Scientists Building Digital Twins in NVIDIA Omniverse to Accelerate Clean Energy Research

As global climate change accelerates, finding and securing clean energy is a crucial challenge for many researchers, organizations and governments.

The U.K.’s Atomic Energy Authority (UKAEA), through an evaluation project at the University of Manchester, has been testing the NVIDIA Omniverse simulation platform to accelerate the design and development of a full-scale fusion powerplant that could put clean power on the grid in the coming years.

Over the past several decades, scientists have experimented with ways to create fusion energy, which produces zero carbon and low radioactivity. Such technology could provide virtually limitless clean, safe and affordable energy to meet the world’s growing demand.

Fusion is the process by which energy is released when atomic nuclei combine. But fusion energy has not yet been successfully scaled for production, due to the high energy input required and the unpredictable behavior of the fusion reaction.

Fusion reactions power the sun, where massive gravitational pressure allows fusion to happen naturally at temperatures around 27 million degrees Fahrenheit. But Earth doesn’t have the same gravitational pressure as the sun, which means the temperatures needed to produce fusion here are much higher: above 180 million degrees Fahrenheit.
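For readers more used to Celsius, the conversion is straightforward, assuming both figures are in Fahrenheit; the second value is commonly quoted as about 100 million degrees Celsius for tokamak plasmas:

```python
def fahrenheit_to_celsius(f):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32.0) * 5.0 / 9.0

sun_core_c = fahrenheit_to_celsius(27e6)          # ≈ 15 million °C (solar core)
fusion_on_earth_c = fahrenheit_to_celsius(180e6)  # ≈ 100 million °C
```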

To replicate the power of the sun on Earth, researchers and engineers are using the latest advances in data science and extreme-scale computing to develop designs for fusion powerplants. With NVIDIA Omniverse, researchers could potentially build a fully functioning digital twin of a powerplant, helping ensure the most efficient designs are selected for construction.

Accelerating Design, Simulation and Real-Time Collaboration

Building a digital twin that accurately represents all powerplant components, the plasma, and the control and maintenance systems is a massive challenge — one that can benefit greatly from AI, exascale GPU computing, and physically accurate simulation software.

It starts with the design of a fusion powerplant, which requires a large number of parts and inputs from large teams of engineering, design and research experts throughout the process. “There are many different components, and we have to take into account lots of different areas of physics and engineering,” said Lee Margetts, UKAEA chair of digital engineering for nuclear fusion at the University of Manchester. “If we make a design change in one system, this has a knock-on effect on other systems.”

Experts from various domains are involved in the project. Each team member uses different computer-aided design applications or simulation tools, and an expert’s work in one domain depends on the data from others working in different domains.

The UKAEA team is exploring Omniverse to help them work together in a real-time simulation environment, so they can see the design of the whole machine rather than only individual subcomponents.

Omniverse has been critical in keeping all these moving parts in sync. By enabling all tools and applications to connect, Omniverse allows the engineers working on the powerplant design to simultaneously collaborate from a single source of truth.

“We can see three different engineers, from three different locations, working on three different components of a powerplant in three different packages,” said Muhammad Omer, a researcher on the project.

Omer explained that when experimenting in Omniverse, the team achieved photorealism in their powerplant designs using the platform’s ability to import full-fidelity 3D data. They could also visualize designs in real time with the RTX Renderer, making it easy to compare different design options for components.

Simulating the fusion plasma is also a challenge. The team developed Python-based Omniverse Extensions with Omniverse Kit to connect to and ingest data from Geant4, an industry-standard Monte Carlo particle transport code. This allows them to simulate neutron transport in the powerplant core, which is how energy is carried out of the powerplant.
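In spirit, the Monte Carlo method behind transport codes like Geant4 can be sketched in a few lines of Python. This toy 1D slab model is purely illustrative, not the UKAEA team's actual workflow:

```python
import math
import random

def transmitted_fraction(sigma_t, thickness, n_particles=100_000, seed=42):
    """Toy 1D Monte Carlo transport: estimate the fraction of neutrons
    that cross a purely absorbing slab. sigma_t is the macroscopic
    cross-section (1/cm); thickness is in cm."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # Distance to first collision: exponential, mean free path 1/sigma_t.
        path = -math.log(1.0 - rng.random()) / sigma_t
        if path > thickness:
            transmitted += 1
    return transmitted / n_particles

# For a purely absorbing slab the analytic answer is exp(-sigma_t * x),
# so this estimate should land near exp(-1.0) ≈ 0.368.
estimate = transmitted_fraction(sigma_t=0.5, thickness=2.0)
```

Production neutronics codes also track scattering, fission and energy dependence, which is why they need the GPU-scale compute described here.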

They also built Omniverse Extensions to visualize output from the JOREK plasma simulation code, including simulated visible light emissions, giving the researchers insight into the plasma’s state. The scientists will begin exploring the NVIDIA Modulus AI-physics framework with their existing simulation data to develop AI surrogate models that accelerate the fusion plasma simulations.

Simulation of Monte Carlo Neutronics Code Geant4 in Omniverse.

Using AI to Optimize Designs and Enhance Digital Twins

In addition to helping design, operate and control the powerplant, Omniverse can assist in training future AI-driven or AI-augmented robotic control and maintenance systems. These will be essential for maintenance work in the powerplant’s radiation environment.

Using Omniverse Replicator, a software development kit for building custom synthetic data-generation tools and datasets, researchers can generate large quantities of physically accurate synthetic data of the powerplant and plasma behavior to train robotic systems. By learning in simulation, the robots can handle tasks more accurately in the real world, improving predictive maintenance and reducing downtime.
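The idea behind synthetic data generation can be pictured as domain randomization: vary each sample so a model trained on the data generalizes to the real plant. The sketch below is a toy stand-in with hypothetical parameter names; Replicator itself does this with physically accurate rendering, not plain dictionaries:

```python
import random

def sample_scene(rng):
    """One randomized synthetic sample: vary lighting, viewpoint,
    texture and occlusion so a vision model trained on the dataset
    is robust to real-world variation."""
    return {
        "light_intensity_lux": rng.uniform(100.0, 2000.0),
        "camera_yaw_deg": rng.uniform(-180.0, 180.0),
        "texture_id": rng.randrange(50),
        "occluded": rng.random() < 0.3,  # occlude roughly 30% of frames
    }

rng = random.Random(7)
dataset = [sample_scene(rng) for _ in range(10_000)]
```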

In the future, sensor models could livestream observation data to the Omniverse digital twin, constantly keeping the virtual twin synchronized to the powerplant’s physical state. Researchers will be able to explore various hypothetical scenarios by first testing in the virtual twin before deploying changes to the physical powerplant.
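That synchronization loop can be pictured as a timeline of states: ingest each sensor update, keep the history, and replay any past state to test a scenario. A minimal sketch, with hypothetical names and no Omniverse APIs:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Toy illustration of keeping a virtual twin synchronized with
    streamed sensor readings, while retaining a replayable history."""
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, timestamp: float, readings: dict) -> None:
        # Merge the latest readings and snapshot the timeline so any
        # past state can be recalled later.
        self.state.update(readings)
        self.history.append((timestamp, dict(self.state)))

    def state_at(self, timestamp: float) -> dict:
        # Return the most recent snapshot at or before the timestamp.
        snapshot = {}
        for t, s in self.history:
            if t <= timestamp:
                snapshot = s
        return snapshot

twin = DigitalTwin()
twin.ingest(0.0, {"coolant_temp_K": 350.0})
twin.ingest(1.0, {"coolant_temp_K": 352.5, "pump_rpm": 1200})
```

A researcher could then query `twin.state_at(0.5)` to inspect the plant as it was at that moment before trying a change against the live state.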

Overall, Margetts and the team at UKAEA saw many unique opportunities and benefits in using Omniverse to build digital twins for fusion powerplants. Omniverse offers the possibility of a real-time platform for developing first-of-a-kind powerplant technology, lets engineers work together seamlessly on powerplant designs, and gives teams integrated AI tools for optimizing future powerplants.

“We’re delighted about what we’ve seen. We believe it’s got great potential as a platform for digital engineering,” said Margetts.

Watch the demo and learn more about NVIDIA Omniverse.

Featured image courtesy of Brigantium Engineering and Bentley Systems.

The post Scientists Building Digital Twins in NVIDIA Omniverse to Accelerate Clean Energy Research appeared first on NVIDIA Blog.

Read More

HPC Researchers Seed the Future of In-Network Computing With NVIDIA BlueField DPUs

Across Europe and the U.S., HPC developers are supercharging supercomputers with the power of Arm cores and accelerators inside NVIDIA BlueField-2 DPUs.

At Los Alamos National Laboratory (LANL), that work is one part of a broad, multiyear collaboration with NVIDIA that targets 30x speedups in computational multi-physics applications.

LANL researchers foresee significant performance gains using data processing units (DPUs) running on NVIDIA Quantum InfiniBand networks. They will pioneer techniques in computational storage, pattern matching and more using BlueField and its NVIDIA DOCA software framework.

An Open API for DPUs

The efforts also will help further define OpenSNAPI, an application interface anyone can use to harness DPUs. It’s a project of the Unified Communication Framework, a consortium enabling heterogeneous computing for HPC apps whose members include Arm, IBM, NVIDIA, U.S. national labs and U.S. universities.

LANL is already feeling the power of in-network computing, thanks to a DPU-powered storage system it created.

The Accelerated Box of Flash (ABoF, pictured below) combines solid-state storage with DPU and InfiniBand accelerators to speed up performance-critical parts of a Linux file system. It’s up to 30x faster than similar storage systems and set to become a key component in LANL’s infrastructure.

ABoF places computation near storage to minimize data movement and improve the efficiency of both simulation and data-analysis pipelines, a researcher said in a recent LANL blog.

Texas Rides a Cloud-Native Super

The Texas Advanced Computing Center (TACC) is the latest to adopt BlueField-2 in Dell PowerEdge servers. It will use the DPUs on an InfiniBand network to make its Lonestar6 system a development platform for cloud-native supercomputing.

TACC’s Lonestar6 serves a wide swath of HPC developers at Texas A&M University, Texas Tech University and the University of North Texas, as well as a number of research centers and faculty.

MPI Gets Accelerated

Twelve hundred miles to the northeast, researchers at Ohio State University showed how DPUs can make one of HPC’s most popular programming models run up to 26 percent faster.

By offloading critical parts of the message passing interface (MPI), they accelerated P3DFFT, a library used in many large-scale HPC simulations.
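The performance win comes from overlap: while the network hardware progresses communication, the host keeps computing. A rough sketch of the pattern in plain Python, with a worker thread standing in for the DPU (this is an illustration of the idea, not actual MVAPICH code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def halo_exchange(data):
    """Stands in for the communication a BlueField DPU would progress."""
    time.sleep(0.05)            # pretend network latency
    return [data[0], data[-1]]  # toy "boundary values from neighbors"

def interior_update(data):
    """Host CPU work that doesn't depend on the halo values."""
    return [x * 2 for x in data[1:-1]]

data = list(range(8))
with ThreadPoolExecutor(max_workers=1) as pool:
    # Post the communication first (what nonblocking MPI offloaded to a
    # DPU achieves), then compute while it progresses in the background.
    halo_future = pool.submit(halo_exchange, data)
    interior = interior_update(data)  # overlapped with the exchange
    halo = halo_future.result()       # analogous to MPI_Wait
```

The more of the exchange that runs off the host CPU, the more of that wait disappears, which is where the double-digit speedups come from.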

“DPUs are like assistants that handle work for busy executives, and they will go mainstream because they can make all workloads run faster,” said Dhabaleswar K. (DK) Panda, a professor of computer science and engineering at Ohio State who led the DPU work using his team’s MVAPICH open source software.

DPUs in HPC Centers, Clouds

Double-digit boosts are huge for supercomputers running HPC simulations for drug discovery or aircraft design. And cloud services can use such gains to increase their customers’ productivity, said Panda, who’s had requests from multiple HPC centers for his code.

Quantum InfiniBand networks with features like NVIDIA SHARP help make his work possible.

“Others are talking about in-network computing, but InfiniBand supports it today,” he said.

Durham Does Load Balancing

Multiple research teams in Europe are accelerating MPI and other HPC workloads with BlueField DPUs.

For example, Durham University, in northern England, is developing software for load balancing MPI jobs using BlueField DPUs on a 16-node Dell PowerEdge cluster. Its work will pave the way for more efficient processing of better algorithms for HPC facilities around the world, said Tobias Weinzierl, principal investigator for the project.

DPUs in Cambridge, Munich

Researchers in Cambridge, London and Munich are also using DPUs.

For its part, University College London is exploring how to schedule tasks for a host system on BlueField-2 DPUs. It’s a capability that could be used, for example, to move data between host processors so it’s there when they need it.

BlueField DPUs inside Dell PowerEdge servers in the Cambridge Service for Data Driven Discovery offload security policies, storage frameworks and other jobs from host CPUs, maximizing the system’s performance.

Meanwhile, researchers in the computer architecture and parallel systems group at the Technical University of Munich are seeking ways to offload both MPI and operating system tasks with DPUs as part of a EuroHPC project.

Back in the U.S., researchers at Georgia Tech are collaborating with Sandia National Laboratories to speed work in molecular dynamics using BlueField-2 DPUs. A paper describing their work so far shows algorithms can be accelerated by up to 20 percent with no loss in simulation accuracy.

An Expanding Network

Earlier this month, researchers in Japan announced a system using the latest NVIDIA H100 Tensor Core GPUs riding our fastest and smartest network ever, the NVIDIA Quantum-2 InfiniBand platform.

NEC will build the approximately 6 PFLOPS, H100-based supercomputer for the Center for Computational Sciences at the University of Tsukuba. Researchers will use it for climatology, astrophysics, big data, AI and more.

Meanwhile, researchers like Panda are already thinking about how they’ll use the cores in BlueField-3 DPUs.

“It will be like hiring executive assistants with college degrees instead of ones with high school diplomas, so I’m hopeful more and more offloading will get done,” he quipped.

The post HPC Researchers Seed the Future of In-Network Computing With NVIDIA BlueField DPUs appeared first on NVIDIA Blog.

Read More

Hyperscale Digital Twins to Give Us “Amazing Superpowers,” NVIDIA Exec Says at ISC 2022

Highly accurate digital representations of physical objects or systems, or “digital twins,” will enable the next era of industrial virtualization and AI, executives from NVIDIA and BMW said Tuesday.

Kicking off the ISC 2022 conference in Hamburg, Germany, NVIDIA’s Rev Lebaredian, vice president for Omniverse and simulation technology, was joined by Michele Melchiorre, senior vice president for product system, technical planning, and tool shop at BMW Group.

“If you can construct a virtual world that matches the real world in its complexity, in its scale and in its precision, then there are a great many things you can do with this,” Lebaredian said.

While Lebaredian outlined the broad trends and technological advancements driving the evolution of digital twin simulations, Melchiorre offered a detailed look at how BMW has put digital twins to work in its own factories.

Melchiorre explained BMW’s plans to use digital twins as a tool to become more “lean, green and digital,” describing real-time collaboration with digital twins and opportunities for training AIs as a “revolution in factory planning.”

Digital twins such as the BMW iFACTORY initiative described by Melchiorre — which harnesses real-time data, simulation and machine learning — are an example of how swiftly digital twins have become workhorses for industrial companies such as Amazon Robotics, BMW and others.

These systems will link our representations of the world with data streaming in from those worlds in real time, Lebaredian explained.

“What we’re trying to introduce now is a mechanism by which we can link the two together, where we can detect all the changes in the physical version, and reflect them in the digital world,” Lebaredian said. “If we can establish that link we gain some amazing superpowers.”

Supercomputing Is Transforming Every Field of Discovery

And it’s another powerful example of how technologies from the supercomputing industry — particularly its focus on simulation and data-center-scale GPU computing — are spilling over into the broader world.

At the same time, converging technologies have transformed high-performance computing, Lebaredian said. GPU-accelerated systems have become a mainstay not just in scientific computing, but edge computing, data centers and cloud systems.

NVIDIA’s Rev Lebaredian, vice president for Omniverse and simulation technology, speaking at ISC 2022.

And AI-accelerated GPU computing has also become a cornerstone of modern high-performance computing. That’s positioned supercomputing to realize the original intent of computer graphics: simulation.

Computers, algorithms and AI have all matured enough that we can begin simulating worlds that are complex enough to be useful on an industrial scale, even using these simulations as training grounds for AI.

World Simulation at an Inflection Point

With digital twins, a new class of simulation is possible, Lebaredian said.

These require precision timing — the ability to simulate multiple autonomous systems at the same time.

They require physically accurate simulation.

And they require accurate ingestion of information from the “real twin,” and continuous synchronization.

These digital twin simulations will give us “superpowers.”

The first one Lebaredian dug into was teleportation. “Just like in a multiplayer video game, any human anywhere on Earth can teleport into that virtual world,” Lebaredian said.

The next: time travel.

“If you record the state of the world over time, you can recall it at any point. This allows time travel,” Lebaredian said.

“You can not only now teleport to that world, but you can scrub your timeline and go backwards to any point in time, and explore that space at any point in time,” he added.

And, finally, these simulations, if accurate enough, will let us understand what’s next.

“If you have a simulator that is extremely accurate and actually predictive of what will happen in the future, if you understand the laws of physics well enough, you essentially get time travel to the future,” Lebaredian said.

“You can compute not just one possible future, but many possible futures,” he added, outlining how this could let city planners see what could happen as they modify a city, plan the road and change the traffic systems to find “the best possible future.”

Modern supercomputing is unlocking these digital twins, which are extremely compute-intensive and require precision timing networking with extremely low latency.

“We need a new kind of supercomputer, one that can really accelerate artificial intelligence and run these massive simulations in true real time,” Lebaredian said.

That will require GPU-accelerated systems that are optimized at every layer of the system to enable precision timing.

These systems will need to run not just on the data center, but reach the edge of networks to bring data into virtual simulations with precision timing.

Such systems will be key to advances on scales both small — such as drug discovery, and large — such as climate simulation.

“We need to simulate our climate. We need to look really far out, and we need to do so at a precision that’s never been done before. And we need to be able to trust that our simulations are actually predictive and accurate. If we do that, we have some hope we can deal with this climate change situation,” Lebaredian said.

BMW’s iFACTORY: “Lean, Green and Digital”

BMW’s Melchiorre provided an example of how this broad vision is being put to work today at BMW, as the automaker seeks to become “lean, green and digital.”

Michele Melchiorre, senior vice president for product system, technical planning, and tool shop at BMW Group.

BMW has built exceptionally complex digital twins, simulating its factories with humans and robots interacting in the same space, at the same time.

It’s an effort that stretches from the factory floor to the company’s data center, to its entire supply chain. This digital twin involves millions of moving parts and pieces that are connected to an enormous supply chain.

Melchiorre walked his audience through examples of how digital twins model various pieces of the plant, showing how industrial machinery, robots and people will move together.

Inside the digital twin of BMW’s assembly system, powered by Omniverse, an entire factory in simulation.

And he explained how they are leveraging NVIDIA technology to simulate entire factories before they’re even built.

Melchiorre showed an aerial image of the site where BMW is building a new factory in Hungary. While the real-world factory is still mostly open field, the digital factory is 80% complete.

“This will be the first plant where we will have a complete digital twin much before production starts,” Melchiorre said.

In the future, the iFACTORY will become a reality in all of BMW’s plants, Melchiorre explained, from BMW’s 100-year-old home plant in Munich to its forthcoming plant in Debrecen, Hungary.

“This is our production network, not just one factory – each and every plant will go in this direction, every plant will develop into a BMW iFACTORY, this is our master plan for our future,” Melchiorre said.

The post Hyperscale Digital Twins to Give Us “Amazing Superpowers,” NVIDIA Exec Says at ISC 2022 appeared first on NVIDIA Blog.

Read More

A Devotion to Emotion: Hume AI’s Alan Cowen on the Intersection of AI and Empathy

Can machines experience emotions? They might, according to Hume AI, an AI research lab and technology company that aims to “ensure artificial intelligence is built to serve human goals and emotional well-being.”

So how can AI genuinely understand how we are feeling, and respond appropriately?

On this episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Alan Cowen, founder of Hume AI and The Hume Initiative. Cowen — a former researcher at Google who holds a Ph.D. in Psychology from UC Berkeley — talks about the latest work at the intersection of computing and human emotion.

You Might Also Like

What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains

Companies use automated chatbots to help customers solve issues, but conversations with these chatbots can sometimes be a tiring affair. ZeroShotBot CEO Jason Mars explains how he’s trying to change that by using AI to improve automated chatbots.

How Audio Analytic Is Teaching Machines to Listen

From active noise cancellation to digital assistants that are always listening for your commands, audio is perhaps one of the most important but often overlooked aspects of modern technology in our daily lives. Chris Mitchell, CEO and founder of Audio Analytic, discusses the challenges, and the fun, involved in teaching machines to listen.

Lilt CEO Spence Green Talks Removing Language Barriers in Business

When large organizations require translation services, there’s no room for the amusing errors often produced by automated apps. Lilt CEO Spence Green aims to correct that using a human-in-the-loop process to achieve fast, accurate and affordable translation.

Subscribe to the AI Podcast: Now available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

The post A Devotion to Emotion: Hume AI’s Alan Cowen on the Intersection of AI and Empathy appeared first on NVIDIA Blog.

Read More

Ready, Set, Game: GFN Thursday Brings 10 New Titles to GeForce NOW

It’s a beautiful day to play video games. And it’s GFN Thursday, which means we’ve got those games.

Ten total titles join the GeForce NOW library of over 1,300 games, starting with the release of Roller Champions – a speedy, free-to-play roller skating title launching with competitive season 0.

Rollin’ Into the Weekend

Roll with the best or get left behind with the rest in the newest free-to-play sports game from Ubisoft, Roller Champions.

Roller Champions on GeForce NOW
Skate, tackle and roll your way to glory in Roller Champions. Discover a free-to-play, team PvP sports game like no other.

Become a sports legend and compete for fame in fast-paced 3v3 matches. The rules are simple: take the ball, make a lap while maintaining team possession and score. Take advantage of passes, tackles and team moves to beat opponents and climb the leaderboard in the Kickoff Season, which starts today.

Stream the game across nearly all devices, even on Mac or mobile. RTX 3080 members can take their experience to the next level, playing at up to 4K resolution and 60 frames per second from the PC and Mac apps. They can also zoom around with next-to-native, ultra-low latency in gaming sessions of up to eight hours.

Start playing the game for free today, streaming on GeForce NOW.

On top of that, members can look for the following games streaming this week:

Finally, Star Conflict (Steam) was announced to arrive this month but will be coming to the cloud at a future date.

The weekend fun is about to begin. There’s only one question left – who is on your roller derby dream team? Let us know on Twitter or in the comments below.

The post Ready, Set, Game: GFN Thursday Brings 10 New Titles to GeForce NOW appeared first on NVIDIA Blog.

Read More

Deciphering the Future: HPE Switches on AI Supercomputer in France

Recalling the French linguist who deciphered the Rosetta Stone 150 years ago, Hewlett Packard Enterprise today switched on a tool to unravel its customers’ knottiest problems.

The Champollion AI supercomputer takes its name from Jean-François Champollion (1790-1832), who decoded hieroglyphics that opened a door to study of ancient Egypt’s culture. Like Champollion, the mega-system resides in Grenoble, France, where it will seek patterns in massive datasets at HPE’s Centre of Excellence.

The work will include AI model development and training, as well as advanced simulations for users in science and industry.

Among the system’s global user community, researchers in France’s AI for Humanity program will use Champollion to advance industries and boost economic growth with machine learning.

Inside an AI Supercomputer 

Champollion will help HPE’s customers explore new opportunities with accelerated computing. The system is based on a cluster of 20 HPE Apollo 6500 Gen10 Plus systems running the HPE Machine Learning Development Environment, a software stack to build and train AI models at scale.

It’s powered in part by 160 NVIDIA A100 Tensor Core GPUs, delivering 100 petaflops of peak AI performance for the cluster. They’re linked by a high-throughput, low-latency NVIDIA Quantum InfiniBand network that supports in-network computing.
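The cluster-level figure is easy to sanity-check against the A100 datasheet, assuming the number being counted is peak FP16 Tensor Core throughput with sparsity (624 TFLOPS per GPU):

```python
gpus = 160
tflops_per_gpu = 624  # A100 peak FP16 Tensor Core rate with sparsity
peak_pflops = gpus * tflops_per_gpu / 1000
# 99.84 petaflops, i.e. roughly the quoted "100 petaflops of peak AI performance"
```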

The system can access NGC, NVIDIA’s online catalog for HPC and AI software, including tools like NVIDIA Triton Inference Server that orchestrates AI deployments, and application frameworks like NVIDIA Clara for healthcare.

Users can test and benchmark their own workloads on Champollion to speed their work into production. It’s the perfect tool for Grenoble, home to a dense cluster of research centers for companies in energy, medicine and high tech.

Powerful Possibilities

The system could help find molecular patterns for a new, more effective drug or therapy. It could build a digital twin to explore more efficient ways of routing logistics in a warehouse or factory.

The possibilities are as varied as the number of industries and research fields harnessing the power of high performance computing.

So, it’s appropriate that the Champollion system debuts ahead of ISC, Europe’s largest gathering of HPC developers. This year’s event in Hamburg will provide an in-person experience for the first time since the pandemic.

Whether you will be in Hamburg or online, join NVIDIA and watch the conference keynote, Supercomputing: The Key to Unlocking the Next Level of Digital Twins, to learn more about the potential of HPC+AI to transform every field.

Rev Lebaredian, who leads NVIDIA Omniverse and simulation technology at NVIDIA, along with Michele Melchiorre, a senior vice president at BMW Group, will show how supercomputing can unlock a new level of opportunities with digital twins.

Feature image credit: Steven Zucker, Smarthistory.

The post Deciphering the Future: HPE Switches on AI Supercomputer in France appeared first on NVIDIA Blog.

Read More

NVIDIA Brings Data Center, Robotics, Edge Computing, Gaming and Content Creation Innovations to COMPUTEX 2022

Digital twins that revolutionize the way the most complex products are produced. Silicon and software that transforms data centers into AI factories. Gaming advances that bring the world’s most popular games to life.

Taiwan has become the engine that brings the latest innovations to the world. So it only makes sense that NVIDIA leaders brought their best ideas to this week’s COMPUTEX technology conference in Taipei.

“Taiwan is the birthplace of the PC ecosystem and the spirit of COMPUTEX is to celebrate the incredible journey that built this $500 billion industry,” Jeff Fisher, senior vice president for gaming products at NVIDIA, told attendees.

The headline news:

  • NVIDIA announced Taiwan’s leading computer makers will release the first wave of systems powered by the NVIDIA Grace CPU Superchip and Grace Hopper Superchip for workloads such as digital twins, AI, high-performance computing, cloud graphics and gaming.
  • NVIDIA announced liquid-cooled NVIDIA A100 GPUs for data centers. They’ll be available in the fall as a PCIe card and will ship from OEMs in HGX A100 servers. A liquid-cooled H100 will follow in the HGX H100 server, and as an H100 PCIe card in early 2023.
  • Partners creating products around the NVIDIA Jetson edge AI and robotics platform announced more than 30 servers and appliances based on the NVIDIA Orin system-on-module.
  • Momentum for NVIDIA RTX is growing, with over 250 RTX games and applications available, double the number at last year’s COMPUTEX. And GeForce gamers continue to upgrade, with over 30% now on RTX hardware, logging over 1.5 billion hours of playtime with RTX on. And DLSS is in the games that gamers want to play, with 12 new titles added to the ever-growing library.

The announcements punctuated a talk from six NVIDIA leaders who wove together advances from robotics to AI, silicon to software and highlighted the work of partners throughout the industry.

Clockwise, from top left: NVIDIA VP for Accelerated Computing Ian Buck, Senior VP for Hardware Engineering Brian Kelleher, Director of Product Management for Accelerated Computing Ying Yin Shih, CTO Michael Kagan, Senior VP for GeForce Jeff Fisher, VP of Embedded and Edge Computing Deepu Talla.

Transforming Data Centers

First up, NVIDIA VP for Hyperscale and HPC Ian Buck detailed how data centers are transforming into AI factories.

“This transformation requires us to reimagine the data center at every level, from hardware to software, from chips to infrastructure to systems,” Buck said.

This transformation will drive massive business opportunities for NVIDIA’s partners in data centers, HPC, digital twins and cloud-based gaming, which Buck described as a “half-trillion market opportunity.”

Powering these modern AI factories requires end-to-end innovation at every level, Buck said.

And with data centers becoming “AI factories,” data processing is essential.

The building blocks of these factories include NVIDIA Hopper GPUs, NVIDIA Grace CPUs and NVIDIA BlueField DPUs, networked together by NVIDIA Quantum and Spectrum switches.

“The BlueField DPU, along with the Quantum and Spectrum networking switches, comprises the infrastructure platform for the AI factory of the future,” said CTO Michael Kagan.

NVIDIA technologies will be featured in a wide range of server designs, including NVIDIA CGX for cloud gaming, OVX for digital twins, and HGX Grace and HGX Grace Hopper for science, data analytics and AI.

NVIDIA announced the first wave of systems powered by the NVIDIA Grace CPU Superchip and Grace Hopper Superchip are expected starting in the first half of 2023.

“Grace will be amazing at AI, data analytics, scientific computing, and hyperscale computing,” said NVIDIA senior VP for hardware engineering Brian Kelleher. “And, of course, the full suite of NVIDIA software platforms will run on Grace.”

The Grace-powered systems from ASUS, Foxconn Industrial Internet, GIGABYTE, QCT, Supermicro and Wiwynn join x86 and other Arm-based servers to offer customers a broad range of choices.

“All of these servers are optimized for NVIDIA accelerated computing software stacks, and can be qualified as part of our NVIDIA-Certified Systems lineup,” said Director of Product Management for Accelerated Computing Ying Yin Shih.

To provide enterprises with options to deploy green data centers, NVIDIA also announced its first data center PCIe GPU with direct chip liquid cooling.

The liquid-cooled A100 PCIe GPUs will be supported in mainstream servers by at least a dozen system builders, with the first shipping in the third quarter of this year.

“All of these combine to deliver the infrastructure of the data center of the future that handles these massive workloads,” Buck said.

Finally, getting all of this to run seamlessly requires NVIDIA AI Enterprise software, which delivers robust 24/7 AI deployment, Buck said.

“When it comes to reimagining the data center, NVIDIA has the complete, open platform of hardware and software to build the AI factories of the future,” Buck said.

Revolutionizing Robotics with AI

AI is also reaching more deeply into the world around us.

Deepu Talla, VP of Embedded and Edge Computing, spoke about how the global drive to automation makes robotics a major new application for AI.

NVIDIA announced this week that more than 30 leading partners worldwide will be among those offering the first wave of NVIDIA Jetson AGX Orin-powered production systems at COMPUTEX in Taipei.

New products are coming from a dozen Taiwan-based camera, sensor and hardware providers for use in edge AI, AIoT, robotics and embedded applications.

“We are entering the age of robotics — autonomous machines that are keenly aware of their environment and that can make smart decisions about their actions,” Talla said.

Available worldwide since GTC in March, the NVIDIA Jetson AGX Orin developer kit delivers 275 trillion operations per second, packing over 8x the processing power of its predecessor, the NVIDIA Jetson AGX Xavier, in the same pin-compatible form factor.

Jetson Orin features the NVIDIA Ampere architecture GPU, Arm Cortex-A78AE CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, faster memory bandwidth, and multimodal sensor support capable of feeding multiple, concurrent AI application pipelines.

Offering server-class performance for edge AI, new Jetson AGX Orin production modules will be available in July, while Orin NX modules are coming in September.

Such modules are key to embedding smarter devices in the world around us, Talla said.

NVIDIA Isaac, the company’s robotics platform, has four pillars, he explained.

The first pillar is about creating the AI, “a very time-consuming and difficult process that we are making fast and easy,” Talla said, highlighting how tools such as the Isaac Replicator for synthetic data generation, NVIDIA pre-trained models available on NGC, and the NVIDIA TAO toolkit are addressing this challenge.

The second pillar is simulating the operation of the robot in the virtual world before it is deployed in the real world with Isaac Sim, Talla explained.

The third pillar is building the physical robots.

And the fourth pillar is about managing the fleet of robots over their lifetimes, typically many years if not more than a decade, Talla said.

As part of that, Talla detailed Isaac Nova Orin, a reference design for state-of-the-art compute and sensors for autonomous mobile robots (AMRs) — packed with technologies such as DeepMap, cuOpt and Metropolis.

And he explained how NVIDIA Fleet Command provides secure management for fleets of AMRs.

“This is the industry’s most comprehensive end-to-end robotics platform and we continue to invest in it,” Talla said.

More Power for Gaming and Content Creation

Speaking last, Fisher detailed how NVIDIA is working to deliver innovation to gamers and content creators.

Over the past 20 years, NVIDIA and its partners have dedicated themselves to building the best platform for gaming and creating, Fisher said.

“Hundreds of millions now count on it to play, work and learn,” he said.

NVIDIA RTX, introduced in 2018, has reinvented graphics thanks to advanced features such as real-time ray tracing — and the momentum around it continues to grow.

There are now over 250 RTX-enabled games and applications, doubling since last Computex, Fisher said.

NVIDIA DLSS continues to set the standard for super resolution with best-in-class performance and image quality, and is now integrated into more than 180 games and applications.

At COMPUTEX, DLSS is in the games that gamers want to play, with 12 new games added to the ever-growing library.

Among the highlights: the developers of the critically acclaimed HITMAN 3 announced they will add NVIDIA DLSS, along with ray-traced reflections and ray-traced shadows, on May 24.

In addition, NVIDIA Reflex is now supported in 38 games, 22 displays, and 45 mice. With over 20M gamers playing with Reflex ON every month, Reflex has become one of NVIDIA’s most successful technologies.

The Reflex ecosystem is continuing to grow: ASUS debuted the world’s first 500Hz G-SYNC display, the ASUS ROG Swift 500Hz gaming monitor. Acer also launched the Predator X28 G-SYNC display. Meanwhile, Cooler Master introduced the MM310 and MM730 gaming mice with Reflex.

Gaming laptops continue to be the fastest-growing PC category and 4th generation Max-Q Technologies — the latest iteration of NVIDIA’s design for thin and light laptops — is delivering a new level of power efficiency. GeForce RTX laptop models now total over 180.

“These are our most portable, highest performance laptops ever,” Fisher said.

These powerful systems are being used to help build massive, interconnected 3D destinations.

NVIDIA Studio, the RTX-powered platform that includes dozens of SDKs and accelerates the top creative apps and tools, and NVIDIA Omniverse, the company’s platform for building interconnected 3D virtual worlds, are designed to enable collaboration and construction of these virtual worlds, Fisher said.

Omniverse is getting a number of updates to accelerate creator workflows. Omniverse Cloud Simple Share, now in closed early access, allows users to send an Omniverse scene for others to view with a single click. Audio2Emotion will soon be coming to Audio2Face, providing an AI-powered animation feature that generates realistic facial expressions based on an audio file, Fisher said.

In addition, the Omniverse XR App is now available in beta. With it you can open your photorealistic Omniverse scene and experience it, fully immersive, in Virtual Reality, Fisher said. And Omniverse Machinima has been updated to make it easier than ever for 3D artists to create animated shorts.

“Omniverse is the future of 3D content creation and how virtual worlds will be built,” Fisher said.

“Over the past 20 years, NVIDIA and our partners have dedicated ourselves to building the best platform for gaming and creating,” Fisher said. “Hundreds of millions now count on it to play, work, and learn.”

Featured image credit: ynes95, some rights reserved.

The post NVIDIA Brings Data Center, Robotics, Edge Computing, Gaming and Content Creation Innovations to COMPUTEX 2022 appeared first on NVIDIA Blog.


NVIDIA Adds Liquid-Cooled GPUs for Sustainable, Efficient Computing

In the worldwide effort to halt climate change, Zac Smith is part of a growing movement to build data centers that deliver both high performance and energy efficiency.

He’s head of edge infrastructure at Equinix, a global service provider that manages more than 240 data centers and is committed to becoming the first in its sector to be climate neutral.

“We have 10,000 customers counting on us for help with this journey. They demand more data and more intelligence, often with AI, and they want it in a sustainable way,” said Smith, a Juilliard grad who got into tech in the early 2000s building websites for fellow musicians in New York City.

Marking Progress in Efficiency

As of April, Equinix has issued $4.9 billion in green bonds. They’re investment-grade instruments that Equinix will apply to reducing environmental impact, in part by optimizing power usage effectiveness (PUE), an industry metric of how much of the energy a data center uses goes directly to computing tasks.

Data center operators are trying to shave that ratio ever closer to the ideal of 1.0 PUE. Equinix facilities average 1.48 PUE today, with its best new data centers hitting less than 1.2.
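To make the metric concrete: PUE is simply total facility energy divided by the energy delivered to IT equipment. Here's a minimal sketch in Python — the function and the 1,480/1,000 kWh split are illustrative; only the 1.48 fleet average comes from the article:

```python
def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_energy_kwh / it_energy_kwh

# A facility drawing 1,480 kWh overall while delivering 1,000 kWh to
# computing gear lands at the Equinix fleet average cited above.
print(round(pue(1480, 1000), 2))  # → 1.48
```

Everything above the 1.0 floor is overhead — cooling, power conversion and other facility loads — which is exactly what liquid cooling targets.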

Equinix is making steady progress in the energy efficiency of its data centers as measured by PUE (inset).

In another step forward, Equinix opened a dedicated facility in January to pursue advances in energy efficiency. One part of that work focuses on liquid cooling.

Born in the mainframe era, liquid cooling is maturing in the age of AI. It’s now widely used inside the world’s fastest supercomputers in a modern form called direct-chip cooling.

Liquid cooling is the next step in accelerated computing for NVIDIA GPUs that already deliver up to 20x better energy efficiency on AI inference and high performance computing jobs than CPUs.

Efficiency Through Acceleration 

If you switched all the CPU-only servers running AI and HPC worldwide to GPU-accelerated systems, you could save a whopping 11 trillion watt-hours of energy a year. That’s like saving the energy that more than 1.5 million homes consume in a year.

Today, NVIDIA adds to its sustainability efforts with the release of our first data center PCIe GPU using direct-chip cooling.

Equinix is qualifying the A100 80GB PCIe Liquid-Cooled GPU for use in its data centers as part of a comprehensive approach to sustainable cooling and heat capture. The GPUs are sampling now and will be generally available this summer.

Saving Water and Power

“This marks the first liquid-cooled GPU introduced to our lab, and that’s exciting for us because our customers are hungry for sustainable ways to harness AI,” said Smith.

Data center operators aim to eliminate chillers that evaporate millions of gallons of water a year to cool the air inside data centers. Liquid cooling promises systems that recycle small amounts of fluid in closed loops focused on key hot spots.

“We’ll turn a waste into an asset,” he said.

Same Performance, Less Power

In separate tests, both Equinix and NVIDIA found a data center using liquid cooling could run the same workloads as an air-cooled facility while using about 30 percent less energy. NVIDIA estimates the liquid-cooled data center could hit 1.15 PUE, far below 1.6 for its air-cooled cousin.
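The roughly 30 percent figure follows directly from those two PUE values. A quick sanity check — the 1,000 kWh IT load is an arbitrary illustration; only the 1.15 and 1.6 PUE numbers come from the article:

```python
# Same IT load served by two facilities with the PUE values cited above.
it_load_kwh = 1000.0                      # energy that reaches the computing hardware
air_cooled_total = it_load_kwh * 1.60     # total facility energy at PUE 1.6
liquid_cooled_total = it_load_kwh * 1.15  # total facility energy at PUE 1.15

savings = 1 - liquid_cooled_total / air_cooled_total
print(f"{savings:.0%}")  # → 28%, in line with "about 30 percent less energy"
```

Because PUE scales total draw linearly with the IT load, the savings fraction is independent of the load chosen.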

Liquid-cooled data centers can pack twice as much computing into the same space, too. That’s because the A100 GPUs use just one PCIe slot; air-cooled A100 GPUs fill two.

NVIDIA sees power savings, density gains with liquid cooling.

At least a dozen system makers plan to incorporate these GPUs into their offerings later this year. They include ASUS, ASRock Rack, Foxconn Industrial Internet, GIGABYTE, H3C, Inspur, Inventec, Nettrix, QCT, Supermicro, Wiwynn and xFusion.

A Global Trend

Regulations setting energy-efficiency standards are pending in Asia, Europe and the U.S. That’s motivating banks and other large data center operators to evaluate liquid cooling, too.

And the technology isn’t limited to data centers. Cars and other systems need it to cool high-performance systems embedded inside confined spaces.

The Road to Sustainability

“This is the start of a journey,” said Smith of the debut of liquid-cooled mainstream accelerators.

Indeed, we plan to follow up the A100 PCIe card with a version next year using the H100 Tensor Core GPU based on the NVIDIA Hopper architecture. We plan to support liquid cooling in our high-performance data center GPUs and our NVIDIA HGX platforms for the foreseeable future.

For fast adoption, today’s liquid-cooled GPUs deliver the same performance for less energy. In the future, we expect these cards will provide an option of getting more performance for the same energy, something users say they want.

“Measuring wattage alone is not relevant, the performance you get for the carbon impact you have is what we need to drive toward,” said Smith.

Learn more about our new A100 PCIe liquid-cooled GPUs.

The post NVIDIA Adds Liquid-Cooled GPUs for Sustainable, Efficient Computing appeared first on NVIDIA Blog.


NVIDIA Partners Announce Wave of New Jetson AGX Orin Servers and Appliances at COMPUTEX

More than 30 leading technology partners worldwide announced this week the first wave of NVIDIA Jetson AGX Orin-powered production systems at COMPUTEX in Taipei.

New products are coming from a dozen Taiwan-based camera, sensor and hardware providers for use in edge AI, AIoT, robotics and embedded applications.

Available worldwide since GTC in March, the NVIDIA Jetson AGX Orin developer kit delivers 275 trillion operations per second, packing over 8x the processing power of its predecessor, the NVIDIA Jetson AGX Xavier, in the same pin-compatible form factor.

Jetson Orin features the NVIDIA Ampere architecture GPU, Arm Cortex-A78AE CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, faster memory bandwidth, and multimodal sensor support capable of feeding multiple, concurrent AI application pipelines.

Offering server-class performance for edge AI, new Jetson AGX Orin production modules will be available in July, while Orin NX modules are coming in September.

“The new Jetson AGX Orin is supercharging the next generation of robotics and edge AI applications,” said Deepu Talla, vice president of Embedded and Edge Computing at NVIDIA. “This momentum continues to accelerate as our ecosystem partners release Jetson Orin-based production systems in various form factors tailored towards specific industries and use cases.”

Robust Product and Partner Ecosystem

Jetson-based products announced include servers, edge appliances, industrial PCs, carrier boards, AI software and more. They will come in fan and fanless configurations with multiple connectivity and interface options, including specifications for commercial or ruggedized applications in robotics, manufacturing, retail, transportation, smart cities, healthcare and other essential sectors of the economy.

Among the releases are Taiwan-based members of the NVIDIA Partner Network, including AAEON, Adlink, Advantech, Aetina, AIMobile, Appropho, AverMedia, Axiomtek, EverFocus, Neousys, Onyx and Vecow.

Other NVIDIA partners launching new Jetson Orin-based solutions worldwide include Auvidea, Basler AG, Connect Tech, D3 Engineering, Diamond Systems, e-Con Systems, Forecr, Framos, Infineon, Leetop, Leopard Imaging, MiiVii, Quectel, RidgeRun, Sequitur, Silex, SmartCow, Stereolabs, Syslogic, Realtimes, Telit and TZTEK, to name a few.

Million-Plus Jetson Developers

Today more than 1 million developers and over 6,000 companies are building commercial products on the NVIDIA Jetson edge AI and robotics computing platform to create and deploy autonomous machines and edge AI applications.

And, with over 150 members, the growing Jetson ecosystem of partners offers a wide range of support, including from companies specializing in AI software, hardware and application design services, cameras, sensors and peripherals, developer tools and development systems. This year, the AAEON BOXER-8240 powered by the Jetson AGX Xavier won the COMPUTEX 2022 Best Choice Golden Award.

Developers are building their next-generation applications on the Jetson AGX Orin developer kit for seamless deployment on the production modules. Jetson AGX Orin users can tap into the NVIDIA CUDA-X accelerated computing stack, NVIDIA JetPack SDK and the latest NVIDIA tools for application development and optimization, including cloud-native development workflows.

Comprehensive Software Support

Jetson Orin enables developers to deploy the largest, most complex models needed to solve edge AI and robotics challenges in natural language understanding, 3D perception, multisensor fusion and other areas.

“NVIDIA is the recognized leader in AI and continues to leverage this expertise to advance robotics through a robust ecosystem and complete end-to-end solutions, including a range of hardware platforms that leverage common tools and neural network models,” said Jim McGregor, principal analyst at TIRIAS Research.

“The new Jetson platform brings the performance and versatility of the NVIDIA Ampere architecture to enable even further advancements in autonomous mobile robots for a wide range of applications ranging from agriculture and manufacturing to healthcare and smart cities,” he said.

Pretrained models from the NVIDIA NGC catalog are optimized and ready for fine-tuning with the NVIDIA TAO toolkit and customer datasets. This reduces time and cost for production-quality AI deployments, while cloud-native technologies allow seamless updates throughout a product’s lifetime.

For specific use cases, NVIDIA software platforms include NVIDIA Isaac Sim on Omniverse for robotics; Riva, a GPU-accelerated SDK for building speech AI applications; the DeepStream streaming analytics toolkit for AI-based multi-sensor processing, video, audio and image understanding; as well as Metropolis, an application framework, set of developer tools and partner ecosystem that brings visual data and AI together to improve operational efficiency and safety across industries.

Watch NVIDIA’s COMPUTEX keynote address on Monday, May 23, at 8 p.m. PT.

The post NVIDIA Partners Announce Wave of New Jetson AGX Orin Servers and Appliances at COMPUTEX appeared first on NVIDIA Blog.


Master of Arts: NVIDIA RTX GPUs Accelerate Creative Ecosystems, Delivering Unmatched AI and Ray-Tracing Performance

The future of content creation was on full display during the virtual NVIDIA keynote at COMPUTEX 2022, as the NVIDIA Studio platform expands with new Studio laptops and RTX-powered AI apps — all backed by the May Studio Driver released today.

Built-for-creator designs from ASUS, Lenovo, Acer and HP join the NVIDIA Studio laptop lineup. With up to GeForce RTX 3080 Ti or NVIDIA RTX A5500 GPUs, these new machines power unrivaled performance in 3D rendering and AI applications.

NVIDIA Studio is powering the AI revolution in content creation, giving creators time-saving tools that help them go from concept to completion faster. A host of AI-powered software updates are supported in the latest driver. Notably, this week’s In the NVIDIA Studio installment dives into Blackmagic Design’s DaVinci Resolve 18 to explore three new features that reduce previously tedious tasks to simple button clicks.

New Hardware on Display

ASUS recently announced the Zenbook Pro 14 Duo, Pro 16X OLED and Pro 17, plus Vivobook Pro 14X, 15X and 16X laptops with up to GeForce RTX 30 Series Laptop GPUs. These new systems join the ProArt line as NVIDIA Studio laptops, giving creators a slew of options: professional-grade ProArt laptops with displays apt for film editing; the portable and balanced Zenbooks with beautiful designs and powerful GPUs; and the new Vivobooks, great for aspiring creators or advanced users.

Ignite creativity with the ASUS Vivobook 16X, featuring a 16-inch NanoEdge 4K OLED display and the exclusive ASUS DialPad for intuitive and precise creative tool control, a world first in a laptop.

Unleash the full force of your creative ambitions with new NVIDIA Studio laptops from Lenovo. The Lenovo Slim 7i Pro X and Lenovo Slim 7 Pro X (or Yoga Slim 7i Pro X and Yoga Slim 7 Pro X in some regions) come with a 3K 120Hz Lenovo PureSight display, hardware calibrated for Delta E < 1 color accuracy and covering 100% of the sRGB color space and color volume, for full accuracy no matter the display brightness. These laptops feature up to a GeForce RTX 3050 GPU.

The Lenovo Slim 7 Pro X sports a 120Hz refresh rate, touch support and a pin-sharp 3K PureSight display, all in a lightweight, aesthetically pleasing design.

Acer’s ConceptD 5 and ConceptD 5 Pro come equipped with up to an NVIDIA GeForce RTX 3070 Ti and RTX A5500 GPU, respectively. Less than an inch thick, their sophisticated and durable metal design makes them easy to take on the road.

Acer’s ConceptD 5 features a Pantone-validated, 16-inch OLED screen that displays beautiful, color-accurate imagery, all with a sophisticated matte-finish design.

The HP ZBook Studio G9 is engineered to deliver pro-level performance in a thin and light form factor. Equipped with up to an NVIDIA RTX A5500 or GeForce RTX 3080 Ti Laptop GPU and professional-grade HP Dreamcolor displays, the HP ZBook Studio G9 offers optimal performance for multitasking, rendering 3D models and using powerful creative tools. HP also announced the HP Envy 16, fitted with a GeForce RTX 3060. With a beautiful design and extended display, the HP Envy 16 is a fantastic laptop for video editors.

Creative professionals with the HP ZBook Studio G9 benefit from the beautiful HP DreamColor display with optimal performance for rendering 3D models, video editing and completing complex creative tasks.

It’s Not Magic, It’s ‘In the NVIDIA Studio’

This week In The NVIDIA Studio, take a deeper look at three new features that help streamline video editing with RTX GPUs in Blackmagic Design’s DaVinci Resolve 18.

DaVinci Resolve is the only all-in-one editing, color grading, visual effects (VFX) and audio post-production app. NVIDIA Studio benefits extend into the software, with GPU-accelerated color grading, video editing, and color scopes; hardware encoder and decoder accelerated video transcoding; and RTX-accelerated AI features.

In addition to the incredibly valuable new cloud collaboration update which allows multiple editors, colorists, VFX artists and audio engineers to work simultaneously — on the same project, on the same timeline, anywhere in the world — the recent update also introduced a number of new features accelerated on RTX GPUs.

Automatic Depth Map uses AI to instantly generate a 3D depth matte of a scene to quickly grade the foreground separately from the background, and vice versa.

Generate 3D depth scenes using AI with the new Automatic Depth Map feature in DaVinci Resolve 18.

The feature enables creators to easily add creative effects and color corrections to footage. Change the mood by adding environmental effects like fog or atmosphere. It also makes it easier to mimic the characteristics of different high-quality lenses by adding blur or depth of field to further enhance the shot.

Object Mask Tracking also takes advantage of AI to recognize and track the movement of thousands of unique objects without having to manually rotoscope.

Object Mask Tracking in DaVinci Resolve 18 can track the movement of thousands of unique objects, eliminating manual rotoscoping. Image courtesy of Blackmagic Design.

Found within the magic mask palette, the DaVinci Neural Engine intuitively isolates animals, vehicles, people and food, plus countless other elements for advanced secondary grading and effects application.

Surface Tracking uses the CUDA cores found on RTX GPUs to quickly calculate and track any surface and apply graphics to surfaces that warp or change perspective in dramatic ways.

Add static or animated graphics to moving objects with the new Surface Tracking feature in DaVinci Resolve 18.

It allows creators to add static or animated graphics to just about anything that moves. The customizable mesh follows the motion of textured surfaces, meaning the feature works even on visuals that warp or change perspective — like a wrinkled t-shirt on an individual who’s in motion. It also allows for quick and easy cloning out of unwanted objects.

With NVIDIA GPUs doing all the hard work, creators can leverage these newly unlocked features to eliminate long manual work, resulting in more time to focus on creating.

Supplementing Creativity With AI

New AI features support creators by helping to reduce or eliminate tedious tasks.

Topaz Labs Gigapixel AI increases image resolution in a natural way for higher quality scaled images.

Updated to version 6.1 this month, Topaz Labs’ Gigapixel AI introduced improvements to face recovery when upscaling photos with notable performance improvements on NVIDIA GPUs. By transitioning the AI models from DirectML to TensorRT, users can process photos up to 2.5x faster, by leveraging the Tensor Cores on their RTX GPU.

Marmoset Toolbag 4.04, available now, includes a ton of new features. One example is an updated Depth of Field setting in the camera object that now includes ray-traced depth of field. It produces a higher quality effect with more natural transitions between the subject and out-of-focus areas. The update also migrates the software to DirectX 12, giving NVIDIA GeForce RTX users a 1.3x increase in rendering speeds.

Reallusion recently unveiled iClone 8 and Character Creator 4, along with updated Omniverse Connectors for each. iClone 8 introduces NVIDIA volumetric lighting and GPU-accelerated skinning for ActorCore characters, ensuring smooth animations.

Time-saving AI-features in these apps, including DaVinci Resolve 18, are all backed by the May Studio Driver available for download today.

NVIDIA Omniverse Evolution

Creators globally are using NVIDIA Omniverse as a hub to interconnect 3D workflows. At COMPUTEX, NVIDIA introduced Omniverse features to help creators and technical artists create faster and easier than ever.

New at COMPUTEX: Omniverse Cloud and the Omniverse XR App (beta), along with updates to Audio2Face and Machinima.

Omniverse Cloud is a suite of cloud services helping 3D designers, artists and developers collaborate easily from anywhere. Omniverse Cloud Simple Share is now available for early access by application — it lets users click once to package and send an Omniverse scene to friends.

Audio2Face: quickly and easily generate expressive facial animation from just an audio source with NVIDIA’s deep learning AI technology.

The Omniverse Audio2Face app has a suite of new updates launching in a few weeks, including full facial animation control and Audio2Emotion — an AI-powered animation feature that generates realistic facial expressions from just an audio file.

The Omniverse XR App (beta) is the world’s first full-fidelity, fully ray-traced virtual reality experience, letting modelers see every reflection and soft shadow with limitless lights — and enabling instant rendering of high-poly models without special imports.

Omniverse Machinima has a reinvented sequencer, as well as animation and rendering features that make it easier than ever for 3D artists to make animated shorts. New free game assets are also now available in the app — including Post Scriptum, Beyond the Wire, Shadow Warrior 3 and Squad.

The #MadeinMachinima contest is in full swing. Easily create an animated short with Omniverse materials, physics effects and game assets to win top-of-the-line Studio laptops.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

The post Master of Arts: NVIDIA RTX GPUs Accelerate Creative Ecosystems, Delivering Unmatched AI and Ray-Tracing Performance appeared first on NVIDIA Blog.
