Digitalization: A Game Changer for the Auto Industry

The fusion of the physical and digital worlds is reshaping the automotive industry. NVIDIA’s automotive partners are using digitalization to transform every phase of the product lifecycle — evolving primarily physical, manual processes into software-driven, AI-enhanced digital systems.

Watch the video to learn more.

Digitalization: A Game Changer From End to End

Kaivan Karimi, global partner strategy lead at Microsoft, observes that companies are achieving “huge” results from “digitizing the physical entity, running simulations and rendering in 3D, whether it’s factory automation or modernizing the design and development of the car.”

Brian Ullem, vice president of engineering at Capgemini, explains that, with “the 30,000 parts that go into a car, it takes approximately five years to develop a vehicle end to end. Instead of building 50 or 100 cars, we can use digitalization to simulate without having to build prototypes. That saves a lot of time and money in the process.”

Thomas Mueller, chief technology officer of engineering at Wipro, adds that with digitalization, “we are now able to run simulations at a low cost…and improve the user experience.”

Simulation: Critical for Autonomous Driving

“Simulation is crucial to the development of autonomous systems,” says Ziv Binyamini, CEO of Foretellix. “On one hand, you need the real world, but this is highly costly. So you have to complement it with the ability to simulate a virtual world where everything is possible. And then you can, in a very cost-effective way, iterate quickly and ensure the system operates under all of these conditions.”

Simulation “gives our customers the power to validate their ADAS or autonomous systems virtually — with highly accurate sensors in the camera, lidar and radar domains — without having to rely on actual physical drives,” adds Tony Karam, global sales director at Ansys.

Austin Russell, founder and CEO of Luminar, agrees that “simulation is absolutely critical for autonomous driving. It’s great to see the work that NVIDIA has been doing in that domain, with not just the hardware but also the software.”

NVIDIA Omniverse: The Digital-Physical Convergence

“Software is a new component in the value proposition,” notes Walid Negm, chief technology officer of product engineering at Deloitte. The companies that will “survive and thrive are going to have to become much more efficient using the digital-physical convergence. The Omniverse experience is going to be important for the automotive sector.”

Shiv Tasker, global vice president of engineering at Capgemini, adds that the “visualization and production of digital twins relies on an efficient, high-performance infrastructure as well as the platforms that make it easy for customers to adopt the technology.”

Omniverse “will allow your worldwide team to simultaneously collaborate,” says Karimi of Microsoft. “Design engineers, migration engineers, test engineers — everybody collaborates simultaneously. That’s the power of NVIDIA Omniverse.”

Learn more about the NVIDIA DRIVE platform and how it’s helping industry leaders redefine transportation.

Join NVIDIA at GTC from March 18-21 in San Jose, Calif., to learn more about digitalization in the automotive industry.

Speak Like a Native: NVIDIA Parlays Win in Voice Challenge

Thanks to their work driving AI forward, Akshit Arora and Rafael Valle could someday speak to their spouses’ families in their native languages.

Arora and Valle — along with colleagues Sungwon Kim and Rohan Badlani — won the LIMMITS ’24 challenge, which asks contestants to recreate a speaker’s voice in real time, in English or any of six languages spoken in India, with the appropriate accent. Their novel AI model required only a three-second speech sample.

The NVIDIA team advanced the state of the art in an emerging field of personalized voice interfaces for more than a billion native speakers of Bengali, Chhattisgarhi, Hindi, Kannada, Marathi and Telugu.

Making Voice Interfaces Realistic

The technology for personalized text-to-speech translation is a work in progress. Existing services sometimes fail to accurately reflect the accents of the target language or nuances of the speaker’s voice.

The challenge judged entries by listening for the naturalness of models’ resulting speech and its similarity to the original speaker’s voice.

The latest improvements promise personalized, realistic conversations and experiences that break language barriers. Broadcasters, telcos, universities, as well as e-commerce and online gaming services are eager to deploy such technology to create multilingual movies, lectures and virtual agents.

“We demonstrated we can do this at a scale not previously seen,” said Arora, who has two uses close to his heart.

Breaking Down Linguistic Barriers

A senior data scientist who supports one of NVIDIA’s biggest customers, Arora speaks Punjabi, while his wife and her family are native Tamil speakers.

It’s a gulf he’s long wanted to bridge for himself and others. “I had classmates who knew their native languages much better than the Hindi and English used in school, so they struggled to understand class material,” he said.

The gulf crosses continents for Valle, a native of Brazil whose wife and family speak Gujarati, a language popular in west India.

“It’s a problem I face every day,” said Valle, an AI researcher with degrees in computer music and machine listening and improvisation. “We’ve tried many products to help us have clearer conversations.”

Badlani, an AI researcher, said living in seven different Indian states, each with its own popular language, inspired him to work in the field.

A Race to the Finish Line

The initiative started nearly two years ago, when Arora and Badlani formed the four-person team to work on a very different version of the challenge held in 2023.

Their efforts generated a working code base for the so-called Indic languages. But getting to the win announced in January required a full-on sprint because the 2024 challenge didn’t get on the team’s radar until 15 days before the deadline.

Luckily, Kim, a deep learning researcher in NVIDIA’s Seoul office, had been working for some time on an AI model well suited to the challenge.

A specialist in text-to-speech voice synthesis, Kim had been designing the so-called P-Flow model before starting his second internship at NVIDIA in 2023. P-Flow borrows a technique from large language models: a short voice sample serves as a prompt, letting the model respond to new inputs without retraining.

“I created the model for English, but we were able to generalize it for any language,” he said.

“We were talking and texting about this model even before he started at NVIDIA,” said Valle, who mentored Kim in two internships before he joined full time in January.

Giving Others a Voice

P-Flow will soon be part of NVIDIA Riva, a framework for building multilingual speech and translation AI software, included in the NVIDIA AI Enterprise software platform.

The new capability will let users deploy the technology inside their data centers, on personal systems or in public or private cloud services. Today, voice translation services typically run on public cloud services.

“I hope our customers are inspired to try this technology,” Arora said. “I enjoy being able to showcase in challenges like this one the work we do every day.”

The contest is part of an initiative to develop open-source datasets and AI models for nine languages most widely spoken in India.

Hear Arora and Badlani share their experiences in a session at GTC next month.

And listen to the results of the team’s model below, starting with a three-second sample of a native Kannada speaker:


 

Here’s a similar-sounding synthesized voice reading the first sentence of this blog in Hindi:

 

And then in English:

See notice regarding software product information.

How the Ohio Supercomputer Center Drives the Future of Computing

NASCAR races are all about speed, but even the fastest cars need to factor in safety, especially as rules and tracks change. The Ohio Supercomputer Center is ready to help.

In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Alan Chalker, the director of strategic programs at the OSC, about all things supercomputing. The center’s Open OnDemand program, which takes the form of a web-based interface, empowers Ohio higher education institutions and industries with accessible, reliable and secure computational services, training and educational programs.

Chalker dives into the history and evolution of the OSC and explains how it’s working with client companies like NASCAR, which is simulating race car designs virtually. Tune in to learn more about Chalker’s outlook on the future of supercomputing and OSC’s role in realizing it.

Time Stamps:

1:39: History of the Ohio Supercomputer Center
3:18: What are supercomputers?
5:08: How the Open OnDemand program came to be
11:50: How is Open OnDemand being used across higher education and industries?
22:45: OSC’s work with NASCAR
26:57: What’s on the horizon for Open OnDemand?

You Might Also Like…

MIT’s Anant Agarwal on AI in Education – Ep. 197

AI could help students work smarter, not harder. Anant Agarwal, founder of edX and Chief Platform Officer at 2U, shares his vision for the future of online education and the impact of AI in revolutionizing the learning experience.

UF Provost Joe Glover on Building a Leading AI University – Ep. 186

Joe Glover, provost and senior vice president of academic affairs at the University of Florida, discusses the university’s efforts to implement AI across all aspects of higher education, including a public-private partnership with NVIDIA that has helped transform UF into one of the leading AI universities in the country.

NVIDIA’s Marc Hamilton on Building the Cambridge-1 Supercomputer During a Pandemic – Ep. 137

Cambridge-1, the U.K.’s most powerful supercomputer, ranks among the world’s top three most energy-efficient supercomputers and was built to help healthcare researchers make new discoveries. Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA, speaks on how he remotely oversaw its construction.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Say What? Chat With RTX Brings Custom Chatbot to NVIDIA RTX AI PCs

Chatbots are used by millions of people around the world every day, powered by NVIDIA GPU-based cloud servers. Now, these groundbreaking tools are coming to Windows PCs powered by NVIDIA RTX for local, fast, custom generative AI.

Chat with RTX, now free to download, is a tech demo that lets users personalize a chatbot with their own content, accelerated by a local NVIDIA GeForce RTX 30 Series GPU or higher with at least 8GB of video random access memory, or VRAM.

Ask Me Anything

Chat with RTX uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration to bring generative AI capabilities to local, GeForce-powered Windows PCs. Users can quickly and easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers.

Rather than searching through notes or saved content, users can simply type queries. For example, one could ask, “What was the restaurant my partner recommended while in Las Vegas?” and Chat with RTX will scan local files the user points it to and provide the answer with context.

The tool supports various file formats, including .txt, .pdf, .doc/.docx and .xml. Point the application at the folder containing these files, and the tool will load them into its library in just seconds.
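Conceptually, this is the standard retrieval-augmented generation pattern: index the local files, retrieve the chunks most relevant to a query, and hand them to the model as context. The sketch below is a minimal, generic illustration of that pattern, not the Chat with RTX implementation; the embed and generate helpers are hypothetical stand-ins for whatever local embedding model and TensorRT-LLM-backed LLM a real application would call.

```python
from pathlib import Path

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for a local embedding model: a tiny hashed
    # bag-of-words vector, used only so this sketch runs self-contained.
    vec = [0.0] * 256
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    return vec

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the local LLM call (e.g. a TensorRT-LLM-backed
    # Mistral or Llama 2 checkpoint in a real application).
    return "[model answer based on prompt]\n" + prompt[:200]

def load_chunks(folder: str, size: int = 800) -> list[str]:
    # Split every .txt file in the folder into fixed-size chunks for indexing.
    chunks = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        chunks += [text[i:i + size] for i in range(0, len(text), size)]
    return chunks

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer(question: str, folder: str, top_k: int = 3) -> str:
    chunks = load_chunks(folder)
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q_vec, embed(c)), reverse=True)
    context = "\n\n".join(ranked[:top_k])        # most relevant local chunks
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)                      # local generation, no cloud call

print(answer("What restaurant was recommended for Las Vegas?", "./notes"))
```

Because retrieval and generation both happen on the local machine in Chat with RTX, a query like the Las Vegas example above can be answered without the underlying documents ever leaving the PC.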

Users can also include information from YouTube videos and playlists. Adding a video URL to Chat with RTX allows users to integrate this knowledge into their chatbot for contextual queries. For example, ask for travel recommendations based on content from favorite influencer videos, or get quick tutorials and how-tos based on top educational resources.

Chat with RTX can integrate knowledge from YouTube videos into queries.

Since Chat with RTX runs locally on Windows RTX PCs and workstations, the provided results are fast — and the user’s data stays on the device. Rather than relying on cloud-based LLM services, Chat with RTX lets users process sensitive data on a local PC without the need to share it with a third party or have an internet connection.

In addition to a GeForce RTX 30 Series GPU or higher with a minimum 8GB of VRAM, Chat with RTX requires Windows 10 or 11, and the latest NVIDIA GPU drivers.

Develop LLM-Based Applications With RTX

Chat with RTX shows the potential of accelerating LLMs with RTX GPUs. The app is built from the TensorRT-LLM RAG developer reference project, available on GitHub. Developers can use the reference project to develop and deploy their own RAG-based applications for RTX, accelerated by TensorRT-LLM. Learn more about building LLM-based applications.

Enter a generative AI-powered Windows app or plug-in to the NVIDIA Generative AI on NVIDIA RTX developer contest, running through Friday, Feb. 23, for a chance to win prizes such as a GeForce RTX 4090 GPU, a full, in-person conference pass to NVIDIA GTC and more.

Learn more about Chat with RTX.

NVIDIA CEO: Every Country Needs Sovereign AI

Every country needs to own the production of their own intelligence, NVIDIA founder and CEO Jensen Huang told attendees Monday at the World Governments Summit in Dubai.

Huang, who spoke as part of a fireside chat with the UAE’s Minister of AI, His Excellency Omar Al Olama, described sovereign AI — which emphasizes a country’s ownership over its data and the intelligence it produces — as an enormous opportunity for the world’s leaders.

“It codifies your culture, your society’s intelligence, your common sense, your history – you own your own data,” Huang told Al Olama during their conversation, a highlight of an event attended by more than 4,000 delegates from 150 countries.

“We completely subscribe to that vision,” Al Olama said. “That’s why the UAE is moving aggressively on creating large language models and mobilizing compute.”

Huang’s appearance in the UAE comes as the Gulf state moves rapidly to transform itself from an energy powerhouse into a global information technology hub.

Dubai is the latest stop for Huang in a global tour that has included meetings with leaders in Canada, France, India, Japan, Malaysia, Singapore and Vietnam over the past six months.

The Middle East is poised to reap significant benefits from AI, with PwC projecting a $320 billion boost to the region’s economy by 2030.

At Monday’s summit, Huang urged leaders not to be “mystified” by AI. Its unprecedented ability to take direction from ordinary people, he argued, makes it critical for countries to embrace AI and infuse it with local languages and expertise.

In response to Al Olama’s question about how he might approach AI if he were the leader of a developing nation, Huang emphasized the importance of building infrastructure.

“It’s not that costly, it is also not that hard,” Huang said. “The first thing that I would do, of course, is I would codify the language, the data of your culture into your own large language model.”

And as AI and accelerated computing have developed, NVIDIA GPUs have become a platform for one innovation after another.

“NVIDIA GPU is the only platform that’s available to everybody on any platform,” Huang said. “This ubiquity has not only democratized AI but facilitated a wave of innovation that spans from cloud computing to autonomous systems and beyond.”

All of this promises to unleash new kinds of innovations that go beyond what’s traditionally been thought of as information technology.

Huang even countered advice offered by many visionaries over the years who urged young people to study computer science in order to compete in the information age. No longer.

“In fact, it’s almost exactly the opposite,” Huang said. “It is our job to create computing technologies that nobody has to program and that the programming language is human: everybody in the world is now a programmer — that is the miracle.”

In a move that further underscores the regional momentum behind AI, Moro Hub, a subsidiary of Digital DEWA, the digital arm of the Dubai Electricity and Water Authority, announced Monday that it has agreed to build a green data center with NVIDIA. Moro Hub focuses on providing cloud services, cybersecurity and smart city solutions.

In addition to the fireside chat, the summit featured panels on smart mobility, sustainable development and more, showcasing the latest in AI advancements. Later in the evening, Huang and Al Olama took the stage at the ‘Get Inspired’ ecosystem event, organized by the UAE’s AI Office, which drew 280 attendees, including developers, startups and others.

NVIDIA RTX 2000 Ada Generation GPU Brings Performance, Versatility for Next Era of AI-Accelerated Design and Visualization

Generative AI is driving change across industries — and to take advantage of its benefits, businesses must select the right hardware to power their workflows.

The new NVIDIA RTX 2000 Ada Generation GPU delivers the latest AI, graphics and compute technology to compact workstations, offering up to 1.5x the performance of the previous-generation RTX A2000 12GB in professional workflows.

From crafting stunning 3D environments to streamlining complex design reviews to refining industrial designs, the card’s capabilities pave the way for an AI-accelerated future, empowering professionals to achieve more without compromising on performance or capabilities.

Modern multi-application workflows, such as AI-powered tools, multi-display setups and high-resolution content, put significant demands on GPU memory. With 16GB of memory in the RTX 2000 Ada, professionals can tap the latest technologies and tools to work faster and better with their data.

Powered by NVIDIA RTX technology, the new GPU delivers impressive graphics realism with NVIDIA DLSS, producing ultra-high-quality, photorealistic ray-traced images more than 3x faster than before. In addition, the RTX 2000 Ada enables an immersive experience for enterprise virtual-reality workflows, such as product design and engineering design reviews.

With its blend of performance, versatility and AI capabilities, the RTX 2000 Ada helps professionals across industries achieve efficiencies.

Architects and urban planners can use it to accelerate visualization workflows and structural analysis, enhancing design precision. Product designers and engineers using industrial PCs can iterate rapidly on product designs with fast, photorealistic rendering and AI-powered generative design. Content creators can edit high-resolution videos and images seamlessly, and use AI for realistic visual effects and content creation assistance.

And in vital embedded applications and edge computing, the RTX 2000 Ada can power real-time data processing for medical devices, optimize manufacturing processes with predictive maintenance and enable AI-driven intelligence in retail environments.

Expanding the Reach of NVIDIA RTX

Among the first to tap the power and performance of the RTX 2000 Ada are Dassault Systèmes for its SOLIDWORKS applications, Rob Wolkers Design and Engineering, and WSP.

“The new RTX 2000 Ada Generation GPU boasts impressive features compared to previous generations, with a compact design that offers exceptional performance and versatility,” said Mark Kauffman, assistant vice president and technical lead at WSP. “Its 16GB of RAM is a game-changer, enabling smooth loading of asset-heavy content, and its ability to run applications like Autodesk 3ds Max, Adobe After Effects and Unreal Engine, as well as support path tracing, expands my creative possibilities.”

“The new NVIDIA RTX 2000 Ada — with its higher-efficiency, next-generation architecture, low power consumption and large frame buffer — will benefit SOLIDWORKS users,” said Olivier Zegdoun, graphics applications research and development director for SOLIDWORKS at Dassault Systèmes. “It delivers excellent performance for designers and engineers to accelerate the development of innovative product experiences with full-model fidelity, even with larger datasets.”

“Today’s design and visualization workflows demand more advanced compute and horsepower,” said Rob Wolkers, owner and senior industrial design engineer at Rob Wolkers Design and Engineering. “Equipped with next-generation architecture and a large frame buffer, the RTX 2000 Ada Generation GPU improves productivity in my everyday industrial design and engineering workflows, allowing me to work with large datasets in full fidelity and generate renders with more lighting and reflection scenarios 3x faster.”

Elevating Workflows With Next-Generation RTX Technology 

The NVIDIA RTX 2000 Ada features the latest technologies in the NVIDIA Ada Lovelace GPU architecture, including:

  • Third-generation RT Cores: Up to 1.7x faster ray-tracing performance for high-fidelity, photorealistic rendering.
  • Fourth-generation Tensor Cores: Up to 1.8x AI throughput over the previous generation, with structured sparsity and FP8 precision to enable higher inference performance for AI-accelerated tools and applications.
  • CUDA cores: Up to 1.5x the FP32 throughput of the previous generation for significant performance improvements in graphics and compute workloads.
  • Power efficiency: Up to a 2x performance boost across professional graphics, rendering, AI and compute workloads, all within the same 70W of power as the previous generation.
  • Immersive workflows: Up to 3x performance for virtual-reality workflows over the previous generation.
  • 16GB of GPU memory: An expanded canvas enables users to tackle larger projects, along with support for error correction code memory to deliver greater computing accuracy and reliability for mission-critical applications.
  • DLSS 3: Delivers a breakthrough in AI-powered graphics, significantly boosting performance by generating additional high-quality frames.
  • AV1 encoder: Eighth-generation NVIDIA Encoder, aka NVENC, with AV1 support is 40% more efficient than H.264, enabling new possibilities for broadcasters, streamers and video callers.

NVIDIA RTX Enterprise Driver Delivers New Features, Adds Support for RTX 2000 Ada

The latest RTX Enterprise Driver, available now to download, includes a range of features that enhance graphics workflows, along with support for the RTX 2000 Ada.

Video TrueHDR, an AI-based tone-mapping feature that converts standard dynamic range content to high dynamic range, expands the color range and brightness levels when viewing content in Chrome or Edge browsers. With Video Super Resolution and TrueHDR now supported in the NVIDIA NGX software development kit, developers can enhance the video quality of low-resolution sources and easily convert SDR content to HDR.

Additional features in this release include:

  • TensorRT-LLM, an open-source library that optimizes and accelerates inference performance for the latest large language models on NVIDIA GPUs.
  • Improved video quality and coding efficiency for video codecs through bit-depth expansion techniques and new low-delay B-frame support.
  • Ability to offload work from the CPU to the GPU with the NVIDIA execute indirect extension API for quicker task completion.
  • Ability to display the GPU serial number in the NVIDIA Control Panel on desktops for easier registration with the NVIDIA AI Enterprise and NVIDIA Omniverse Enterprise platforms.

Availability

The NVIDIA RTX 2000 Ada is available now through global distribution partners such as Arrow Electronics, Ingram Micro, Leadtek, PNY, Ryoyo Electro and TD SYNNEX, and will be available from Dell Technologies, HP and Lenovo starting in April.

See the NVIDIA RTX 2000 Ada at Dassault Systèmes’ 3DEXPERIENCE World

Stop by the Dell, Lenovo and Z by HP booths at Dassault Systèmes’ 3DEXPERIENCE World, running Feb. 11-14 at the Kay Bailey Hutchison Convention Center in Dallas, to view live demos of Dassault Systèmes SOLIDWORKS applications powered by the NVIDIA RTX 2000 Ada.

Attend the Z by HP session on Tuesday, Feb. 13, where Wolkers will discuss the workflow used to design NEMO, the supercar of submarines.

Learn more about the NVIDIA RTX 2000 Ada Generation GPU.

National Institute of Standards and Technology Launches Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology’s new U.S. Artificial Intelligence Safety Institute Consortium as part of the company’s effort to advance safe, secure and trustworthy AI.

The consortium, known as AISIC, will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST — an agency of the U.S. Department of Commerce — and fellow consortium members to advance the consortium’s mandate.

NVIDIA’s participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

NVIDIA actively works to make AI safety a reality through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring that large language model responses are accurate, appropriate, on topic and secure.

In 2023, NVIDIA endorsed the Biden Administration’s voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation’s National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.

AISIC Research Focus

Through the consortium, NIST aims to facilitate knowledge sharing and advance applied research and evaluation activities to accelerate innovation in trustworthy AI. AISIC members, which include more than 200 of the nation’s leading AI creators, academics, government and industry researchers, as well as civil society organizations, bring technical expertise in areas such as AI governance, systems and development, psychometrics and more.

In addition to participating in working groups, NVIDIA plans to leverage a range of computing resources and best practices for implementing AI risk-management frameworks and AI model transparency, as well as several NVIDIA-developed, open-source AI safety, red-teaming and security tools.

Learn more about NVIDIA’s guiding principles for trustworthy AI.

Devices for Days: With GeForce NOW, Every Device Is a Dream Gaming PC

The GeForce NOW anniversary celebrations continue with more games and a member-exclusive discount on the Logitech G Cloud.

Among the six new titles coming to the cloud this week is The Inquisitor from Kalypso Media, which spotlights the GeForce NOW anniversary with a special shout-out.

“Congrats to four years of empowering gamers to play anywhere, anytime,” said Marco Nier, head of marketing and public relations at Kalypso Media. “We’re thrilled to raise a glass to GeForce NOW for their four-year anniversary and commitment to bringing AAA gaming to gamers — here’s to many more chapters in this cloud-gaming adventure!”

Stream the dark fantasy adventure from Kalypso Media and more newly supported titles today across a variety of GeForce NOW-capable devices, whether at home, on a gaming rig, TV or Mac, or on the go with handheld streaming.

Gadgets Galore

Play on!

Gone are the days of only being able to play full PC games on a decked-out gaming rig. GeForce NOW is a cloud gaming service accessible on a range of devices, from PCs and Macs to gaming handhelds, thanks to GeForce RTX-powered servers in the cloud.

Dive into the cloud streaming experience with the dedicated GeForce NOW app for Windows and macOS. Even on underpowered PCs, gamers can enjoy stunning visuals and buttery-smooth frame rates streaming at up to 240 frames per second or at ultrawide resolutions for Ultimate members, a cloud-gaming first.

Take it to the big screen and stream graphically demanding titles, from The Witcher 3 to Alan Wake 2, on GeForce NOW from the comfort of the couch at up to 4K natively on Samsung and LG Smart TVs, without the need for a console. Or stream across any TV with NVIDIA SHIELD TV for the ultimate living room gaming experience.

Gamers on the go can drop into the neon lights of Cyberpunk 2077 and other ray tracing-supported titles on a portable, lightweight Chromebook and stream up to 1600p at 120 fps. GeForce NOW members can also stream to Android devices at new higher resolutions, up to 1440p at 120 fps.

Look, ma, no wires.

Go hands-on with any of the new handheld gaming devices supported by GeForce NOW, from the ASUS ROG Ally to the Logitech G Cloud. The Logitech G Cloud is an Android device with a seven-inch 1080p 16:9 touchscreen, fully customizable controls and support for GeForce NOW right out of the box.

The Logitech G Cloud is normally priced at $349.99, but Logitech and GeForce NOW are providing a 20% discount to the first 500 Ultimate and Priority members that grab the code from the GeForce NOW Rewards portal, a deal available until March 8. On top of that, follow the GeForce NOW and Logitech social channels for a chance to win a Logitech G Cloud during the anniversary celebrations this month.

Whether playing at home or on the go, members can game freely on GeForce NOW without having to worry about download times or system specs.

Celebrate With New Games

Jesus, take the wheel.

Dive into an alternate reality in the world of The Inquisitor, where Jesus has escaped from the cross. Play as Mordimer Madderdin, an inquisitor who investigates a mysterious murder in the town of Koenigstein. Face moral choices, visit the dangerous Unworld and fight against sinners.

The title leads the six new games this week. Here’s the full list:

  • Stormgate (Demo on Steam, available Feb. 5-12 during Steam Next Fest)
  • The Inquisitor (New release on Steam, Feb. 8)
  • Aragami 2 (Xbox, available on Microsoft Store)
  • art of rally (Xbox, available on Microsoft Store)
  • dotAGE (Steam)
  • Tram Simulator Urban Transit (Steam)
  • The Walking Dead: The Telltale Definitive Series (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Beyond ‘Data-Driven’: How Energy-Efficient Computing for AI Is Propelling Innovation and Savings Across Industries

With advances in computing, sophisticated AI models and machine learning are having a profound impact on business and society. Industries can use AI to quickly analyze vast bodies of data, allowing them to derive meaningful insights, make predictions and automate processes for greater efficiency.

In the public sector, government agencies are achieving superior disaster preparedness. Biomedical researchers are bringing novel drugs to market faster. Telecommunications providers are building more energy-efficient networks. Manufacturers are trimming emissions from product design, development and manufacturing processes. Hollywood studios are creating impressive visual effects at a fraction of the cost and time. Robots are being deployed on important missions to help preserve the Earth. And investment advisors are running more trade scenarios to optimize portfolios.

Eighty-two percent of companies surveyed are already using or exploring AI, and 84% report that they’re increasing investments in data and AI initiatives. Any organization that delays AI implementation risks missing out on new efficiency gains and becoming obsolete.

However, AI workloads are computationally demanding, and legacy computing systems are ill-equipped for the development and deployment of AI. CPU-based compute requires linear growth in power input to meet the increased processing needs of AI and data-heavy workloads. If data centers are using carbon-based energy, it’s impossible for enterprises to innovate using AI while controlling greenhouse gas emissions and meeting sustainability commitments. Plus, many countries are introducing tougher regulations to enforce data center carbon reporting.

Accelerated computing — the use of GPUs, specialized hardware and software, and parallel computing techniques — has exponentially improved the performance and energy efficiency of data centers.
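As a rough, concrete illustration of the idea, the sketch below runs the same matrix multiplication once on the CPU with NumPy and once on the GPU with CuPy. It assumes a CUDA-capable GPU and a matching CuPy installation, and the gap it measures will vary widely with the workload, data sizes and hardware.

```python
import time
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and a matching CuPy install

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
c_cpu = a_cpu @ b_cpu                       # matrix multiply on the CPU
cpu_s = time.perf_counter() - t0

a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)   # copy data to GPU memory
t0 = time.perf_counter()
c_gpu = a_gpu @ b_gpu                       # same multiply on thousands of GPU cores
cp.cuda.Stream.null.synchronize()           # wait for the GPU before stopping the clock
gpu_s = time.perf_counter() - t0

print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s")
```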

Below, read more on how industries are using energy-efficient computing to scale AI, improve products and services, and reduce emissions and operational costs.

The Public Sector Drives Research, Delivers Improved Citizen Services 

Data is playing an increasingly important role in government services, including for public health and disease surveillance, scientific research, social security administration, and extreme-weather monitoring and management. These operations require platforms and systems that can handle large volumes of data, provide real-time data access, and ensure data quality and accuracy.

But many government agencies rely on legacy systems that are difficult to maintain, don’t efficiently integrate with modern technologies and consume excessive energy. To handle increasingly demanding workloads while sticking to sustainability goals, government agencies and public organizations must adopt more efficient computing solutions.

The U.S. Department of Energy is making inroads in this endeavor. The department runs the National Energy Research Scientific Computing Center for open science. NERSC develops simulations, data analytics and machine learning solutions to accelerate scientific discovery through computation. Seeking new computing efficiencies, the center measured results across four of its key high performance computing and AI applications. It clocked how fast the applications ran, as well as how much energy they consumed using CPU-only versus GPU-accelerated nodes on Perlmutter, one of the world’s largest supercomputers.

At performance parity, a GPU-accelerated cluster consumes 588 fewer megawatt-hours per month, representing a 5x improvement in energy efficiency. By running the same workload on GPUs rather than CPU-only instances, researchers could save millions of dollars per month. These gains mean that the 8,000+ researchers using NERSC computing infrastructure can perform more experiments on important use cases, like studying subatomic interactions to uncover new green energy sources, developing 3D maps of the universe and bolstering a broad range of innovations in materials science and quantum physics.

Governments help protect citizens from adverse weather events, such as hurricanes, floods, blizzards and heat waves. With GPU deployments, climate models, like the IFS model from the European Centre for Medium-Range Weather Forecasts, can run up to 24x faster while reducing annual energy usage by up to 127 gigawatt hours compared to CPU-only systems. As extreme-weather events occur with greater frequency and, often, with little warning, meteorology centers can use accelerated computing to generate more accurate, timely forecasts that improve readiness and response.

By adopting more efficient computing systems, governments can save costs while equipping researchers with the tools they need for scientific discoveries to improve climate modeling and forecasting, as well as deliver superior services in public health, disaster relief and more.

Drug Discovery Researchers Conduct Virtual Screenings, Generate New Proteins at Light Speed

Drug development has always been a time-consuming process that involves innumerable calculations and thousands of experiments to screen new compounds. To develop novel medications, the binding properties of small molecules must be tested against protein targets, a cumbersome task required for up to billions of compounds — which translates to billions of CPU hours and hundreds of millions of dollars each year.

Highly accurate AI models can now predict protein structures, generate small molecules, predict protein-ligand binding and perform virtual screening.

Researchers at Oak Ridge National Laboratory (ORNL) and Scripps Research have shown that screening a dataset of billions of compounds against a protein, which has traditionally taken years, can now be completed in just hours with accelerated computing. By running AutoDock, a molecular-modeling simulation software, on a supercomputer with more than 27,000 NVIDIA GPUs, ORNL screened more than 25,000 molecules per second and evaluated the docking of 1 billion compounds in less than 12 hours. This is a speedup of more than 50x compared with running AutoDock on CPUs.

Iambic, an AI platform for drug discovery, has developed an approach combining quantum chemistry and AI that calculates quantum-accurate molecular-binding energies and forces at a fraction of the computational expense of traditional methods. These energies and forces can power molecular-dynamics simulations at unprecedented speed and accuracy. With its OrbNet model, Iambic uses a graph transformer to power quantum-mechanical operators that represent chemical structures. The company is using the technology to identify drug molecules that could deactivate proteins linked to certain cancer types.

As the number of new drug approvals declines and research and development and computing costs rise, optimizing drug discovery with accelerated computing can help control energy expenditures while creating a far-reaching impact on medical research, treatments and patient outcomes.

Telcos Scale Network Capacity

To connect their subscribers, telecommunications companies send data across sprawling networks of cell towers, fiber-optic cables and wireless signals. In the U.S., AT&T’s network connects more than 100 million users from the Aleutian Islands in Alaska to the Florida Keys, processing 500 petabytes of data per day. As telcos add compute-intensive workloads like AI and user plane function (UPF) to process and route data over 5G networks, power consumption costs are skyrocketing.

AT&T processes trillions of data rows to support field technician dispatch operations, generate performance reports and power mobile connectivity. To process data faster, AT&T tested the NVIDIA RAPIDS Accelerator for Apache Spark. By spreading work across nodes in a cluster, the software processed 2.8 trillion rows of information — a month’s worth of mobile data — in just five hours. That’s 3.3x faster at 60% lower cost than any prior test.

Other telcos are saving energy by offloading networking and security tasks to SmartNICs and data processing units (DPUs) to reduce server power consumption. Ericsson, a leading telecommunications equipment manufacturer, tested a 5G UPF on servers with and without network offload to an NVIDIA ConnectX-6 Dx NIC. At maximum network traffic, the network offloading provided 23% power savings. The study also found that CPU micro-sleeps and frequency scaling — allowing CPUs to sleep and slow their clock frequencies during low workload levels — saved more than 10% of power per CPU.

Hardware-accelerated networking offloads like these allow telco operators to increase network capacity without a proportional increase in energy consumption, ensuring that networks can scale to handle increased demand and conserve energy during times of low use. By adopting energy-efficient accelerated computing, telco operators can reduce their carbon footprint, improve scalability and lower operational costs.

Manufacturing and Product Design Teams Achieve Faster, Cleaner Simulations

Many industries rely on computational fluid dynamics during design and engineering processes to model fluid flows, combustion, heat transfer and aeroacoustics. The aerospace and automotive industries use CFD to model vehicle aerodynamics, and the energy and environmental industries use it to optimize fluid-particle refining systems and model reactions, wind-farm air flow and hydro-plant water flow.

Traditional CFD methods are compute-intensive, using nearly 25 billion CPU core hours annually, and consume massive amounts of energy. This is a major obstacle for industrial companies looking to reduce carbon emissions and achieve net zero. Parallel computing with GPUs is making a difference.

Ansys, an engineering simulation company, is speeding up CFD physics models with GPUs to help customers drastically reduce emissions while improving the aerodynamics of vehicles. To measure computing efficiency, the company ran the benchmark DrivAer model, used for optimizing vehicle geometry, on different CPU and GPU configurations using its Fluent fluid-simulation software. Results showed that a single GPU achieved more than 5x greater performance than a cluster with 80 CPU cores. With eight GPUs, the simulation experienced more than a 30x speedup. And a server with six GPUs reduced power consumption 4x compared with a high performance computing CPU cluster delivering the same performance.

CPFD offers GPU parallelization for Barracuda Virtual Reactor, a physics-based engineering software package capable of predicting fluid, particulate-solid, thermal and chemically reacting behavior in fluidized bed reactors and other fluid-particle systems.

Using CPFD’s Barracuda software, green energy supplier ThermoChem Recovery International (TRI) developed technology that converts municipal solid waste and woody biomass into jet fuel. Since its partnership with CPFD began 14 years ago, TRI has benefitted from 1,500x model speedups as CPFD moved its code from CPU hardware to full GPU parallelization. With these exponential speedups, models that would’ve previously taken years to run can now be completed in a day or less, saving millions of dollars in data center infrastructure and energy costs.

With GPU parallelization and energy-efficient architectures, industrial design processes that rely on CFD can benefit from dramatically faster simulations while achieving significant energy savings.

Media and Entertainment Boost Rendering

Rendering visual effects (VFX) and stylized animations consumes nearly 10 billion CPU core hours per year in the media and entertainment industry. A single animated film can require over 50,000 CPU cores working for more than 300 million hours. Enabling this necessitates a large space for data centers, climate control and computing — all of which result in substantial expenditures and a sizable carbon footprint.

Accelerated computing offers a more energy-efficient way to produce VFX and animation, enabling studios to iterate faster and compress production times.

Studios like Wylie Co., known for visuals in the Oscar-winning film Dune and in HBO and Netflix features, are adopting GPU-powered rendering to improve performance and save energy. After migrating to GPU rendering, Wylie Co. realized a 24x performance boost over CPUs.

Image Engine, a VFX company involved in creating Marvel Entertainment movies and Star Wars-based television shows, observed a 25x performance improvement by using GPUs for rendering.

GPUs can increase performance up to 46x while reducing energy consumption by 10x and capital expenses by 6x. With accelerated computing, the media and entertainment industry has the potential to save a staggering $900 million in hardware acquisition costs worldwide and conserve 215 gigawatt hours of energy that would have been consumed by CPU-based render farms. Such a shift would lead to substantial cost savings and significant reductions in the industry’s environmental impact.

Robotics Developers Extend Battery Life for Important Missions 

With edge AI and supercomputing now available using compact modules, demand for robots is surging for use in factory logistics, sales showrooms, urban delivery services and even ocean exploration. Mobile robot shipments are expected to climb from 549,000 units last year to 3 million by 2030, with revenue forecast to jump from more than $24 billion to $111 billion in the same period, according to ABI Research.

Most robots are battery-operated and rely on an array of lidar sensors and cameras for navigation. Robots communicate with edge servers or clouds for mission dispatch and require high throughput due to diverse sets of camera sensors as well as low latency for real-time decision-making. These factors necessitate energy-efficient onboard computing.

Accelerated edge computing can be optimized to decode images, process video and analyze lidar data to enable robot navigation of unstructured environments. This allows developers to build and deploy more energy-efficient machines that can remain in service for longer without needing to charge.

The Woods Hole Oceanographic Institution Autonomous Robotics and Perception Laboratory (WARPLab) and MIT are using the NVIDIA Jetson Orin platform for energy-efficient edge AI and robotics to power an autonomous underwater vehicle to study coral reefs.

The AUV, named CUREE, for Curious Underwater Robot for Ecosystem Exploration, gathers visual, audio and other environmental data to help understand the human impact on reefs and sea life. With 25% of the vehicle’s power needed for data collection, energy efficiency is a must. With Jetson Orin, CUREE constructs 3D models of reefs, tracks marine organisms and plant life, and autonomously navigates and gathers data. The AUV’s onboard energy-efficient computing also powers convolutional neural networks that enhance underwater vision by reducing backscatter and correcting colors. This enables CUREE to transmit clear images to scientists, facilitating fish detection and reef analysis.

Driverless smart tractors with energy-efficient edge computing are now available to help farmers with automation and data analysis. The Founder Series MK-V tractors, designed by NVIDIA Inception member Monarch Tractor, combine electrification, automation and data analysis to help farmers reduce their carbon footprint, improve field safety and streamline farming operations. Using onboard AI video analytics, the tractor can traverse rows of crops, enabling it to navigate even in remote areas without connectivity or GPS.

The MK-V tractor produces zero emissions and is estimated to save farmers $2,600 annually compared to diesel tractors. The tractor’s AI data analysis advises farmers on how to reduce the use of expensive, harmful herbicides that deplete the soil. Decreasing the volume of chemicals is a win all around, empowering farmers to protect the quality of soil, reduce herbicide expenditures and deliver more naturally cultivated produce to consumers.

As energy-efficient edge computing becomes more accessible to enable AI, expect to see growing use cases for mobile robots that can navigate complex environments, make split-second decisions, interact with humans and safely perform difficult tasks with precision.

Financial Services Use Data to Inform Investment Decisions 

Financial services is an incredibly data-intensive industry. Bankers and asset managers pursuing the best results for investors rely on AI algorithms to churn through terabytes of unstructured data from economic indicators, earnings reports, news articles, and disparate environmental, social and governance metrics to generate market insights that inform investments. Plus, financial services companies must comb through network data and transactions to prevent fraud and protect accounts.

NVIDIA and Dell Technologies are optimizing computing for financial workloads to achieve higher throughput, speed and capacity with greater energy efficiency. The Strategic Technology Analysis Center, an organization dedicated to technology discovery and assessment in the finance industry, recently ran the STAC-A2 benchmark on several computing stacks, spanning CPU-only and GPU-based infrastructure. The STAC-A2 benchmark is designed by quants and technologists to measure the performance, scalability, quality and resource efficiency of technology stacks running market-risk analysis for derivatives.

When running the STAC-A2 options pricing benchmark, the Dell PowerEdge server with NVIDIA GPUs performed 16x faster and 3x more energy efficiently than a CPU-only system for the same workload. This lets investment advisors integrate larger bodies of data into derivatives risk-analysis calculations, enabling more data-driven decisions without increasing computing time or energy requirements.
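STAC-A2 itself is a proprietary benchmark suite, but the class of workload it measures, Monte Carlo pricing and risk for derivatives, can be illustrated with a simple sketch like the one below, which prices a European call option under geometric Brownian motion using NumPy. Swapping NumPy for a GPU array library such as CuPy is one common way this kind of kernel is moved onto accelerators.

```python
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_paths=1_000_000, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal asset price under the risk-neutral measure
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()      # discounted expected payoff

# Example: spot 100, strike 105, 2% rate, 20% volatility, 1-year maturity
print(round(mc_european_call(100.0, 105.0, 0.02, 0.2, 1.0), 4))
```

Production risk systems run kernels like this across thousands of instruments and scenarios, which is where the throughput and energy-efficiency gains reported above come from.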

PayPal, which was looking to deploy a new fraud-detection system to operate 24/7, worldwide and in real time to protect customer transactions, realized CPU-only servers couldn’t meet such computing requirements. Using NVIDIA GPUs for inference, PayPal improved real-time fraud detection by 10% and lowered server energy consumption by nearly 8x.

With accelerated computing, financial services organizations can run more iterations of investment scenarios, improve risk assessments and make more informed decisions for better investment results. Accelerated computing is the foundation for improving data throughput, reducing latency and optimizing energy usage to lower operating costs and achieve emissions goals.

An AI Future With Energy-Efficient Computing

With energy-efficient computing, enterprises can reduce data center costs and their carbon footprint while scaling AI initiatives and data workloads to stay competitive.

The NVIDIA accelerated computing platform offers a comprehensive suite of energy-efficient hardware and software to help enterprises use AI to drive innovation and efficiency without the need for equivalent growth in energy consumption.

With more than 100 frameworks, pretrained models and development tools optimized for GPUs, NVIDIA AI Enterprise accelerates the entire AI journey, from data preparation and model training to inference and scalable deployment. By getting their AI into production faster, businesses can significantly reduce overall power consumption.

With the NVIDIA RAPIDS Accelerator for Apache Spark, which is included with NVIDIA AI Enterprise, data analytics workloads can be completed 6x faster, translating to 5x savings on infrastructure and 6x less power used for the same amount of work. For a typical enterprise, this means 10 gigawatt hours less energy consumed compared with running jobs without GPU acceleration.

NVIDIA BlueField DPUs bring greater energy efficiency to data centers by offloading and accelerating data processing, networking and security tasks from the main CPU infrastructure. By maximizing performance per watt, they can help enterprises slash server power consumption by up to 30%, saving millions in data center costs.

As businesses shift to a new paradigm of AI-driven results, energy-efficient accelerated computing is helping organizations deliver on the promise of AI while controlling costs, maintaining sustainable practices and ensuring they can keep up with the pace of innovation.

Learn how accelerated computing can help organizations achieve both AI goals and carbon-footprint objectives.

Twitch Streamer Mr_Vudoo Supercharges Gaming, Entertaining and Video Editing With RTX This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Mr_Vudoo is a digital renaissance man — a livestreamer, video editor, gamer and entertainer skilled in producing an array of content for his audience.

This week’s featured artist In the NVIDIA Studio, Mr_Vudoo recently acquired a GeForce RTX 4080 SUPER graphics card, which helps creators like him take their content to the next level. (Read more about the 4080 SUPER below.)

There’s no better place for creative types to connect with others and explore what’s next in AI and accelerated computing than GTC, which is back in-person, from March 18-21 in San Jose.

From the keynote by NVIDIA founder and CEO Jensen Huang to hundreds of sessions, exhibits and networking events, GTC delivers something for every technical level and interest area, including sessions on how to power content creation using OpenUSD and generative AI. GTC registration is open for virtual or in-person attendance.

Join sessions like “What’s Next in Generative AI” in person or virtually.

In other NVIDIA Studio news, Topaz Labs, a company that delivers AI-powered photo and video enhancement software, recently adopted NVIDIA TensorRT acceleration for its new Remove Tool. It uses AI to replace unwanted objects in an image with a context-aware background. The tool expedites photo editing workflows, delivering 2.4x faster processing on a GeForce RTX 4090 GPU compared with an Apple MacBook M3 Max.

Topaz Labs’ TensorRT-powered Remove Tool removes unwanted objects with just a click.

Mr_Vudoo Taps RTX for a Livestreaming Upgrade

Mr_Vudoo’s Twitch channel is known for its variety of unique content. In his Trading Card Games series, Mr_Vudoo opens trading card packs and competes at one of the highest levels live on stream. In his Gameplay series, he goes back to his original love for gaming, streaming both multiplayer online games and third-person shooters. And in his In Real Life series, he breaks the norm of traditional streaming, bringing his viewers outside to share his everyday life experiences.

It takes significant computing power to bring these series to life. Mr_Vudoo’s GeForce RTX 4080 SUPER features the eighth-generation NVIDIA NVENC — an independent component for encoding video that frees up the system to run games or tackle other compute-intensive tasks. Using it, Mr_Vudoo can achieve a more seamless streaming experience.

“It’s a revelation to stream with high-quality settings and get almost no performance loss in games,” he said.

Mr_Vudoo can also join in the new Twitch Enhanced Broadcasting beta, powered by GeForce RTX GPUs and NVENC, to broadcast up to three resolutions simultaneously at up to 1080p. This eliminates the need to trade off resolution for stream reliability.

In the coming months, Enhanced Broadcasting beta testers will be able to experiment with higher input bit rates, up to 4K resolutions, support for up to five concurrent streams and new codecs.

Mr_Vudoo uses a number of AI-enabled features in the video and photo editing part of his creative workflow. With RTX acceleration, he can add a video camera effect with a timed zoom and a film strip transition in real time without having to render the entire project.

 

“Adding multiple effects on a clip without affecting the preview of the video is a massive time-saver,” he said.

DaVinci Resolve has several RTX-accelerated AI features that can boost content creation, offering tools to smooth slow-motion effects or provide seamless video super resolution. These features are available to all RTX GPU owners.

“GeForce RTX graphics cards are the best GPUs for video editors to use, as they can render tasks much faster, allowing us to become more efficient with our work.” — Mr_Vudoo

Mr_Vudoo can quickly export files with the RTX 4080 SUPER’s dual encoders, which work in tandem to slash export times nearly in half.

In post-production, Mr_Vudoo uses Adobe Photoshop’s AI-powered subject selection tool to quickly isolate objects in an image, instead of having to manually crop them out, speeding his work.

 

Mr_Vudoo also taps the free NVIDIA Broadcast app to boost his productivity.

“I’ve utilized the video noise removal and background replacement the most,” he said. “The eye contact feature was very interesting and quite honestly took me by surprise at how well it worked.”

AI has become an irreplaceable part of Mr_Vudoo’s content creation process, helping him quickly and effortlessly produce his best work. Catch him on Twitch.

 

RTX 4080 SUPER Brings Super Performance

GeForce RTX 4080 SUPER graphics cards are changing the content creation game.

Generative AI apps like Adobe Photoshop can take advantage of the GPU’s Tensor Cores to speed productivity and creative workflows. With the 4080 SUPER, 3D apps like Blender can run up to 70% faster than on previous-generation graphics cards. And video editing apps like Blackmagic Design’s DaVinci Resolve can accelerate AI effects over 30% faster than with the GeForce RTX 3080 Ti.

For gamers, the RTX 4080 SUPER enables greater immersion, with fully ray-traced visuals and the ability to run all settings at max. It delivers twice the speed of the RTX 3080 Ti, up to 836 trillion operations per second, in the most graphically intensive games with DLSS Frame Generation.

Get creative and AI superpowers with the GeForce RTX 4080 SUPER.

Since its release, GeForce RTX 4080 SUPER Series GPUs have been put to the test in creating, gaming and other AI-powered tasks. Here’s what some reviewers had to say:

  • “Jumping over to creative professionals and content creators, the 4080 Super also provides nice performance gains over the standard GeForce RTX 4080 in applications like Blender, Maya, After Effects, DaVinci Resolve and more. This means users can take full advantage of what the NVIDIA 4080 SUPER offers in much more than just gaming and can push the software they use for streaming, video creation, audio and 3D creation to get the most out of their PC.” – CG Magazine
  • “Features like NVIDIA Broadcast and Reflex hold deep practical appeal; RTX Video Super Resolution uses AI to make ugly videos beautiful. And NVIDIA maintains a strong lead in most creative and machine learning/AI workloads if you like to put your GPU to work when you’re not playing — witness the dual AV1 encoders in the 4080 SUPER” — PC World  
  • “Blender can make use of the RT cores on NVIDIA’s GPUs through the OptiX ray tracing rendering engine, and as a result, performance is much higher than any competing GPU in a similar class. The GeForce RTX 4080 SUPER notches another victory over its namesake, and dang the RTX 4090 is a beast.” – Hot Hardware
  • “In terms of creative performance, the RTX 4080 SUPER walks away the winner against the RX 7900 XTX, even if you don’t factor in the fact that Blender Benchmark 4.0.0 workloads wouldn’t even run on the RX 7900 XTX (though the RX 7900 XT was able to run them, just not nearly as well).” – Tech Radar
  • “And when you throw in RTX technologies like DLSS into the mix, NVIDIA’s superior AV1 encoding quality, content-creator-friendly features, and performance, plus generative AI capabilities – and there’s a lot more to the story here than pure 4K gaming EXPERT-ise.” — TweakTown

Discover what RTX 4080 SUPER Series graphics cards and systems are available.
