NVIDIA Unveils Its Most Affordable Generative AI Supercomputer

NVIDIA is taking the wraps off a new compact generative AI supercomputer, offering increased performance at a lower price with a software upgrade.

The new NVIDIA Jetson Orin Nano Super Developer Kit, which fits in the palm of a hand, delivers gains in generative AI capability and performance to everyone from commercial AI developers to hobbyists and students. And the price is now $249, down from $499.

Available today, it delivers as much as a 1.7x leap in generative AI inference performance, a 70% increase in performance to 67 INT8 TOPS, and a 50% increase in memory bandwidth to 102GB/s compared with its predecessor.

Whether creating LLM chatbots based on retrieval-augmented generation, building a visual AI agent or deploying AI-based robots, the Jetson Orin Nano Super is an ideal solution.
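
For a flavor of how retrieval-augmented generation works, here's a minimal, dependency-free Python sketch: a toy bag-of-words retriever pulls the most relevant document into an LLM prompt. The corpus, scoring and prompt format are illustrative stand-ins for the neural embedding models and local LLM a real Jetson chatbot would use.

```python
import math
from collections import Counter

# Toy corpus standing in for a user's documents (contents are hypothetical).
docs = [
    "The Jetson Orin Nano Super delivers 67 INT8 TOPS and 102GB/s memory bandwidth.",
    "JetPack SDK updates unlock the boosted Super performance on existing kits.",
    "The developer kit pairs an Orin Nano 8GB module with a reference carrier board.",
]

def embed(text):
    """Bag-of-words term frequencies; real systems use neural embedding models."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

query = "How fast is the memory bandwidth?"
context = " ".join(retrieve(query))
# The augmented prompt would then be sent to an LLM running locally on the Jetson.
prompt = f"Answer using this context: {context}\n\nQuestion: {query}"
print(prompt)
```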

The Gift That Keeps on Giving

The software updates available to the new Jetson Orin Nano Super will also boost generative AI performance for those who already own the Jetson Orin Nano Developer Kit.

Jetson Orin Nano Super is suited for those interested in developing skills in generative AI, robotics or computer vision. As the AI world moves from task-specific models to foundation models, it also provides an accessible platform for transforming ideas into reality.

Powerful Performance With Super for Generative AI

The enhanced performance of the Jetson Orin Nano Super delivers gains for all popular generative AI models and transformer-based computer vision.

The developer kit consists of a Jetson Orin Nano 8GB system-on-module (SoM) and a reference carrier board, providing an ideal platform for prototyping edge AI applications.

The SoM features an NVIDIA Ampere architecture GPU with tensor cores and a 6-core Arm CPU, facilitating multiple concurrent AI application pipelines and high-performance inference. It can support up to four cameras, offering higher resolution and frame rates than previous versions.

Extensive Generative AI Software Ecosystem and Community

Generative AI is evolving quickly. The NVIDIA Jetson AI Lab offers immediate support for cutting-edge models from the open-source community and provides easy-to-use tutorials. Developers can also get extensive support from the broader Jetson community and draw inspiration from projects created by fellow developers.

Jetson runs NVIDIA AI software including NVIDIA Isaac for robotics, NVIDIA Metropolis for vision AI and NVIDIA Holoscan for sensor processing. Development time can be reduced with NVIDIA Omniverse Replicator for synthetic data generation and NVIDIA TAO Toolkit for fine-tuning pretrained AI models from the NGC catalog.

Jetson ecosystem partners offer additional AI and system software, developer tools and custom software development. They can also help with cameras and other sensors, as well as carrier boards and design services for product solutions.

Boosting Jetson Orin Performance for All With Super Mode

The software updates that deliver the up to 1.7x boost in generative AI performance will also be available for the Jetson Orin NX and Orin Nano series of system-on-modules.

Existing Jetson Orin Nano Developer Kit owners can upgrade the JetPack SDK to unlock boosted performance today.
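
As a rough sketch of what that upgrade path can look like on a kit with a recent, apt-based JetPack install: package names and power-mode indices vary by device and JetPack release, so treat the exact commands below as illustrative of the flow rather than a verified recipe.

```python
import subprocess

def run(cmd):
    """Run a Jetson system command and return its output."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Pull the latest JetPack components via apt (the nvidia-jetpack metapackage
# is the standard apt entry point on recent JetPack releases).
print(run(["sudo", "apt", "update"]))
print(run(["sudo", "apt", "install", "-y", "nvidia-jetpack"]))

# Query the current power mode with nvpmodel, the standard Jetson power tool.
# The index of the boosted mode is device- and release-specific, so check the
# JetPack release notes before selecting one with `nvpmodel -m <index>`.
print(run(["sudo", "nvpmodel", "-q"]))
```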

Learn more about the Jetson Orin Nano Super Developer Kit.

Tech Leader, AI Visionary, Endlessly Curious Jensen Huang to Keynote CES 2025

On Jan. 6 at 6:30 p.m. PT, NVIDIA founder and CEO Jensen Huang — with his trademark leather jacket and an unwavering vision — will step onto the CES 2025 stage.

From humble beginnings as a busboy at a Denny’s to founding NVIDIA, Huang’s story embodies innovation and perseverance.

Huang has been named the world’s best CEO by Fortune and The Economist, as well as one of TIME magazine’s 100 most influential people in the world.

Today, NVIDIA is a driving force behind breakthroughs in AI and accelerated computing, technologies transforming industries ranging from healthcare to automotive and entertainment.

Across the globe, NVIDIA’s innovations enable advanced chatbots, robots, software-defined vehicles, sprawling virtual worlds, hypersynchronized factory floors and much more.

NVIDIA’s accelerated computing and AI platforms power hundreds of millions of computers, available from major cloud providers and server manufacturers.

They fuel 76% of the world’s fastest supercomputers on the TOP500 list and are supported by a thriving community of more than 5 million developers.

For decades, Huang has led NVIDIA through revolutions that ripple across industries.

GPUs redefined gaming as an art form, and NVIDIA’s AI tools empower labs, factory floors and Hollywood sets. From self-driving cars to automated industrial processes, these tools are foundational to the next generation of technological breakthroughs.

CES has long been the stage for the unveiling of technological advancements, and Huang’s keynote is no exception.

Since its inception in 1967, CES has unveiled iconic innovations, including transistor radios, VCRs and HDTVs.

Over the decades, CES has launched numerous NVIDIA flagship innovations, from a first look at NVIDIA SHIELD to NVIDIA DRIVE for autonomous vehicles.

NVIDIA at CES 2025

The keynote is just the beginning.

From Jan. 7-10, NVIDIA will host press, analysts, customers and partners at the Fontainebleau Resort Las Vegas.

The space will feature hands-on demos showcasing innovations in AI, robotics and accelerated computing across NVIDIA’s automotive, consumer, enterprise, Omniverse and robotics portfolios.

Meanwhile, NVIDIA’s technologies will take center stage on the CES show floor at the Las Vegas Convention Center, where partners will highlight AI-powered technologies, immersive gaming experiences and groundbreaking automotive advancements.

Attendees can also participate in NVIDIA’s “Explore to Win” program, an interactive scavenger hunt featuring missions, points and prizes.

Curious about the future? Tune in live on NVIDIA’s website or the company’s YouTube channels to witness how NVIDIA is shaping the future of technology.

Ready Player Fun: GFN Thursday Brings Six New Adventures to the Cloud

From heart-pounding action games to remastered classics, there’s something for everyone this GFN Thursday.

Six new titles join the cloud this week, starting with The Thing: Remastered. Face the horrors of the Antarctic as the game oozes onto GeForce NOW. Nightdive Studios’ revival of the cult-classic 2002 survival-horror game came to the cloud as a surprise at the PC Gaming Show last week. Since then, GeForce NOW members have been able to experience all the bone-chilling action in the sequel to the title based on Universal Pictures’ genre-defining 1982 film.

And don’t miss out on the limited-time GeForce NOW holiday sale, which offers 50% off the first month of a new Ultimate or Performance membership. The 25% off Day Pass sale ends today — take advantage of the offer to experience 24 hours of cloud gaming with all the benefits of Ultimate or Performance membership.

It’s Alive!

The Thing: Remastered on GeForce NOW
Freeze enemies, not frame rates.

The Thing: Remastered brings the 2002 third-person shooter into the modern era with stunning visual upgrades, including improved character models, textures and animations, all meticulously crafted to enhance the game’s already-tense atmosphere.

Playing as Captain J.F. Blake, leader of a U.S. governmental rescue team, members navigate the blood-curdling aftermath of the events depicted in the original film. Trust is a precious commodity as they command their squad through 11 terrifying levels, never knowing who might harbor the alien within. The remaster introduces enhanced lighting and atmospheric effects that make the desolate research facility more immersive and frightening than ever.

With an Ultimate or Performance membership, stream this harrowing experience in all its remastered glory without the need for high-end hardware. GeForce NOW streams from powerful GeForce RTX-powered servers in the cloud, rendering every shadow, every flicker of doubt in teammates’ eyes and every grotesque transformation with crystal-clear fidelity.

The Performance tier now offers up to 1440p resolution, allowing members to immerse themselves in the game’s oppressive atmosphere with even greater clarity. Ultimate members can experience the paranoia-inducing gameplay at up to 4K resolution and 120 frames per second, making every heart-pounding moment feel more real than ever.

Feast on This

Dive into the depths of a gothic vampire saga, slide through feudal Japan and flip burgers at breakneck speed with GeForce NOW and the power of the cloud. Grab a controller and rally the gaming squad to stream these mouth-watering additions.

Legacy of Kain Soul Reaver 1&2 Remastered on GeForce NOW
Time to rise again.

The highly anticipated Legacy of Kain Soul Reaver 1&2 Remastered from Aspyr and Crystal Dynamics breathes new life into the classic vampire saga. These beloved titles have been meticulously overhauled to offer stunning visuals and improved controls. Join the epic conflict of Kain and Raziel in the gothic world of Nosgoth and traverse between the Spectral and Material Realms to solve puzzles, reveal new paths and defeat foes.

The Spirit of the Samurai on GeForce NOW
Defend the forbidden village.

The Spirit of the Samurai from Digital Mind Games and Kwalee brings a blend of Souls and Metroidvania elements to feudal Japan. This stop-motion-inspired 2D action-adventure game offers three playable characters and intense combat with legendary Japanese weapons, all set against a backdrop of mythological landscapes.

Fast Food Simulator on GeForce NOW
The ice cream machine actually works.

Or take on the chaotic world of fast-food management with Fast Food Simulator, a multiplayer simulation game from No Ceiling Games. Take orders, make burgers and increase earnings by dealing with customers. Play solo or co-op with up to four players and take on unexpected and bizarre events that can occur at any moment.

Shift between realms in Legacy of Kain at up to 4K 120 fps with an Ultimate membership, slice through The Spirit of the Samurai’s mythical landscapes in stunning 1440p with RTX ON with a Performance membership or manage a fast-food empire with silky-smooth gameplay. With extended sessions and priority access, members will have plenty of time to master these diverse worlds.

Play On

Diablo Immortal on GeForce NOW
Evil never sleeps.

Diablo Immortal — the action-packed role-playing game from Blizzard Entertainment, set in the dark fantasy world of Sanctuary — bridges the stories of Diablo II and Diablo III. Choose from a variety of classes, each offering unique playstyles and devastating abilities, to battle through diverse zones and randomly generated rifts, and uncover the mystery of the shattered Worldstone while facing off against hordes of demonic enemies.

Since its launch, the game has offered frequent updates, including two new character classes, new zones, gear, competitive events and more demonic stories to experience. With its immersive storytelling, intricate character customization and endless replayability, Diablo Immortal provides members with a rich, hellish adventure to stream from the cloud across devices.

Look for the following games available to stream in the cloud this week:

  • Indiana Jones and the Great Circle (New release on Steam and Xbox, available on the Microsoft Store and PC Game Pass, Dec. 8)
  • Fast Food Simulator (New release on Steam, Dec. 10)
  • Legacy of Kain Soul Reaver 1&2 Remastered (New release on Steam, Dec. 10)
  • The Spirit of the Samurai (New release on Steam, Dec. 12)
  • Diablo Immortal (Battle.net)
  • The Lord of the Rings: Return to Moria (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Driving Mobility Forward, Vay Brings Advanced Automotive Solutions to Roads With NVIDIA DRIVE AGX

Vay, a Berlin-based provider of automotive-grade remote driving (teledriving) technology, is offering an alternative approach to autonomous driving.

Through the company’s app, a user can hail a car, and a professionally trained teledriver will remotely drive the vehicle to the customer’s location. Once the car arrives, the user manually drives it.

After completing their trip, the user can end the rental in the app and pull over to a safe location to exit the car, away from traffic flow. There’s no need to park the vehicle, as the teledriver will handle the parking or drive the car to the next customer.

This system offers sustainable, door-to-door mobility, with the unique advantage of having a human driver remotely controlling the vehicle in real time.

Vay’s technology is built on the NVIDIA DRIVE AGX centralized compute platform, running the NVIDIA DriveOS operating system for safe, AI-defined autonomous vehicles.

These technologies enable Vay’s fleets to process large volumes of camera and other vehicle data over the air. DRIVE AGX’s real-time, low-latency video streaming capabilities provide enhanced situational awareness for teledrivers, while its automotive-grade design ensures reliability in any driving condition.

“By combining Vay’s innovative remote driving capabilities with the advanced AI and computing power of NVIDIA DRIVE AGX, we’re setting a new standard for remotely driven vehicles,” said Justin Spratt, chief business officer at Vay. “This collaboration helps us bring safe, reliable and accessible driverless options to the market and provides an adaptable solution that can be deployed in real-world environments now — not years from now.”

High-Quality Video Stream

Vay’s advanced technology stack includes NVIDIA DRIVE AGX software that’s optimized for latency and processing power. By harnessing NVIDIA GPUs specifically designed for autonomous driving, the company’s teledriving system can process and transmit high-definition video feeds in real time, delivering critical situational awareness to the teledriver, even in complex environments. In the event of an emergency, the vehicle can safely bring itself to a complete stop.

“Working with NVIDIA, Vay is setting a new standard in driverless technology,” said Bogdan Djukic, cofounder and vice president of engineering, teledrive experience and autonomy at Vay. “We are proud to not only accelerate the deployment of remotely driven and autonomous vehicles but also to expand the boundaries of what’s possible in urban transportation, logistics and beyond — transforming mobility for both businesses and communities.”

Reshaping Mobility With Teledriving

Vay’s technology enables professionally trained teledrivers to remotely drive vehicles from specialized teledrive stations equipped with industry-standard controls, such as a steering wheel and pedals.

The company’s teledrivers are fully immersed in the drive — road traffic sounds, such as those from emergency vehicles and other warning signals, are transmitted via microphones to the operator’s headphones. Camera sensors reproduce the car’s surroundings and transmit them to the screens of the teledrive station with minimal latency. The vehicles can operate at speeds of up to 26 mph.

Vay’s technology effectively addresses complex edge cases with human supervision, enhancing safety while significantly reducing costs and development challenges.

Vay is a member of NVIDIA Inception, a program that nurtures AI startups with go-to-market support, expertise and technology. Last year, Vay became the first and only company in Europe to teledrive a vehicle on public streets without a safety driver.

Since January, Vay has been operating its commercial services in Las Vegas. The startup recently secured a partnership with Bayanat, a provider of AI-powered geospatial solutions, and is working with Ush and Poppy, Belgium-based car-sharing companies, as well as Peugeot, a French automaker.

In October, Vay announced a $35 million investment from the European Investment Bank, which will help it roll out its technology across Europe and expand its development team.

Learn more about the NVIDIA DRIVE platform.

Built for the Era of AI, NVIDIA RTX AI PCs Enhance Content Creation, Gaming, Entertainment and More

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.

NVIDIA and GeForce RTX GPUs are built for the era of AI.

RTX GPUs feature specialized AI Tensor Cores that can deliver more than 1,300 trillion operations per second (TOPS) of processing power for cutting-edge performance in gaming, creating, everyday productivity and more. Today there are more than 600 deployed AI-powered games and apps that are accelerated by RTX.

RTX AI PCs can help anyone start their AI journey and supercharge their work.

Every RTX AI PC comes with regularly updated NVIDIA Studio Drivers — fine-tuned in collaboration with developers — that enhance performance in top creative apps and are tested extensively to deliver maximum stability. Download the December Studio Driver today.

The importance of large language models (LLMs) continues to grow. Two benchmarks were introduced this week to spotlight LLM performance on various hardware: MLPerf Client v0.5 and Procyon AI Text Generation. These LLM-based benchmarks, which internal tests have shown accurately replicate real-world performance, are easy to run.

This holiday season, content creators can participate in the #WinterArtChallenge, running through February. Share winter-themed art on Facebook, Instagram or X with #WinterArtChallenge for a chance to be featured on NVIDIA Studio social media channels.

Advanced AI

With NVIDIA and GeForce RTX GPUs, AI elevates everyday tasks and activities, as covered in our AI Decoded blog series. For example, AI can enable:

Faster creativity: With Stable Diffusion, users can quickly create and refine images from text prompts to achieve their desired output. When using an RTX GPU, these results can be generated up to 2.2x faster than on an NPU. And thanks to software optimizations using the NVIDIA TensorRT SDK, the applications used to run these models, like ComfyUI, get an additional 60% boost.

Greater gaming: NVIDIA DLSS technology boosts frame rates and improves image quality, using AI to automatically generate pixels in video games. With ongoing improvements, including to Ray Reconstruction, DLSS enables richer visual quality for more immersive gameplay.

Enhanced entertainment: RTX Video Super Resolution uses AI to enhance video by removing compression artifacts and sharpening edges while upscaling video quality. RTX Video HDR converts any standard dynamic range video into vibrant high dynamic range, enabling more vivid, dynamic colors when streamed in Google Chrome, Microsoft Edge, Mozilla Firefox or VLC media player.

Improved productivity: The NVIDIA ChatRTX tech demo app connects a large language model, like Meta’s Llama, to a user’s data for quickly querying notes, documents or images. Free for RTX GPU owners, the custom chatbot provides quick, contextually relevant answers. Since it runs locally on Windows RTX PCs and workstations, results are fast and private.
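
To ground the “faster creativity” item above, here's a minimal text-to-image sketch using the open-source Hugging Face diffusers library on an RTX GPU. The model ID, prompt and settings are illustrative, and the TensorRT optimizations cited above are a separate, dedicated acceleration path.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (model ID is an example) onto the RTX GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Iterate on the prompt and settings until the output matches the desired look.
image = pipe(
    "a cozy winter cabin, volumetric light, digital art",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("winter_cabin.png")
```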

This snapshot of AI capabilities barely scratches the surface of the technology’s possibilities. With an NVIDIA or GeForce RTX GPU-powered system, users can also supercharge their STEM studies and research, and tap into the NVIDIA Studio suite of AI-powered tools.

Decisions, Decisions

More than 200 powerful RTX AI PCs are capable of running advanced AI.

ASUS’ Vivobook Pro 16X comes with up to a GeForce RTX 4070 Laptop GPU, as well as a superbright 550-nit panel, ultrahigh contrast ratio and ultrawide 100% DCI-P3 color gamut. It’s available on Amazon and ASUS.com.

Dell’s Inspiron 16 Plus 7640 comes with up to a GeForce RTX 4060 Laptop GPU and a 16:10 aspect ratio display, ideal for users working on multiple projects. It boasts military-grade testing for added reliability and an easy-to-use, built-in Trusted Platform Module to protect sensitive data. It’s available on Amazon and Dell.com.

GIGABYTE’s AERO 16 OLED, equipped with up to a GeForce RTX 4070 Laptop GPU, is designed for professionals, designers and creators. The 16:10 thin-bezel 4K+ OLED screen is certified by multiple third parties to provide the best visual experience with X-Rite 2.0 factory-by-unit color calibration and Pantone Validated color calibration. It’s available on Amazon and GIGABYTE.com.

MSI’s Creator M14 comes with up to a GeForce RTX 4070 Laptop GPU, delivering a quantum leap in performance with DLSS 3 to enable lifelike virtual worlds with full ray tracing. Plus, its Max-Q suite of technologies optimizes system performance, power, battery life and acoustics for peak efficiency. Purchase one on Amazon or MSI.com.

These are just a few of the many RTX AI PCs available, with some on sale, including the Acer Nitro V, ASUS TUF 16″, HP Envy 16″ and Lenovo Yoga Pro 9i.

Follow NVIDIA Studio on Facebook, Instagram and X. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Into the Omniverse: How OpenUSD-Based Simulation and Synthetic Data Generation Advance Robot Learning

Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Scalable simulation technologies are driving the future of autonomous robotics by reducing development time and costs.

Universal Scene Description (OpenUSD) provides a scalable and interoperable data framework for developing virtual worlds where robots can learn how to be robots. With SimReady OpenUSD-based simulations, developers can create limitless scenarios based on the physical world.
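
To make that concrete, here's a minimal sketch using OpenUSD's open-source Python bindings (the usd-core package); the scene contents are illustrative.

```python
from pxr import Usd, UsdGeom, Gf

# Author a minimal USD stage: a world with one obstacle a robot might learn to avoid.
stage = Usd.Stage.CreateNew("robot_world.usda")
world = UsdGeom.Xform.Define(stage, "/World")

obstacle = UsdGeom.Cube.Define(stage, "/World/Obstacle")
obstacle.GetSizeAttr().Set(0.5)
UsdGeom.XformCommonAPI(obstacle.GetPrim()).SetTranslate(Gf.Vec3d(1.0, 0.0, 0.25))

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()  # .usda is human-readable; other tools can now layer on it
```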

And NVIDIA Isaac Sim is advancing perception AI-based robotics simulation. Isaac Sim is a reference application built on the NVIDIA Omniverse platform for developers to simulate and test AI-driven robots in physically based virtual environments.

At AWS re:Invent, NVIDIA announced that Isaac Sim is now available on Amazon EC2 G6e instances powered by NVIDIA L40S GPUs. These powerful instances enhance the performance and accessibility of Isaac Sim, making high-quality robotics simulations more scalable and efficient.

These advancements in Isaac Sim mark a significant leap for robotics development. By enabling realistic testing and AI model training in virtual environments, companies can reduce time to deployment and improve robot performance across a variety of use cases.

Advancing Robotics Simulation With Synthetic Data Generation

Robotics companies like Cobot, Field AI and Vention are using Isaac Sim to simulate and validate robot performance while others, such as SoftServe and Tata Consultancy Services, use synthetic data to bootstrap AI models for diverse robotics applications.

The evolution of robot learning has been deeply intertwined with simulation technology. Early experiments in robotics relied heavily on labor-intensive, resource-heavy trials. Simulation is a crucial tool for the creation of physically accurate environments where robots can learn through trial and error, refine algorithms and even train AI models using synthetic data.

Physical AI describes AI models that can understand and interact with the physical world. It embodies the next wave of autonomous machines and robots, such as self-driving cars, industrial manipulators, mobile robots, humanoids and even robot-run infrastructure like factories and warehouses.

Robotics simulation, which forms the second computer in the three-computer solution (one computer trains the AI model, one simulates the robot, one runs in the robot itself), is a cornerstone of physical AI development that lets engineers and researchers design, test and refine systems in a controlled virtual environment.

A simulation-first approach significantly reduces the cost and time associated with physical prototyping while enhancing safety by allowing robots to be tested in scenarios that might otherwise be impractical or hazardous in real life.

With a new reference workflow, developers can accelerate the generation of synthetic 3D datasets with generative AI using OpenUSD NIM microservices. This integration streamlines the pipeline from scene creation to data augmentation, enabling faster and more accurate training of perception AI models.

Synthetic data can help address the challenge of limited, restricted or unavailable data needed to train various types of AI models, especially in computer vision. Developing action recognition models is a common use case that can benefit from synthetic data generation.
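
As a condensed sketch of what domain-randomized data generation looks like with Omniverse Replicator (mentioned earlier in this digest), consider the following. It follows the patterns in Replicator's documented Python API, runs only inside an Omniverse app such as Isaac Sim, and its object names, counts and parameters are illustrative.

```python
# Runs inside an Omniverse app (e.g., Isaac Sim's Script Editor), where
# omni.replicator.core is available; names and counts here are illustrative.
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(3, 3, 3), look_at=(0, 0, 0))
    render_product = rep.create.render_product(camera, resolution=(1024, 1024))

    # Ten semantically labeled boxes to randomize between captures.
    boxes = rep.create.cube(count=10, semantics=[("class", "box")])

    with rep.trigger.on_frame(num_frames=100):
        with boxes:
            rep.modify.pose(
                position=rep.distribution.uniform((-2, -2, 0), (2, 2, 1)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
            )

    # Write RGB frames plus 2D bounding boxes for perception-model training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_synthetic", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])

rep.orchestrator.run()  # kick off generation
```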

3D simulations offer developers precise control over image generation, helping eliminate the hallucinations that can plague generative models. To learn how to create a human action recognition video dataset with Isaac Sim, check out the technical blog on Scaling Action Recognition Models With Synthetic Data.

Robotic Simulation for Humanoids

Humanoid robots are the next wave of embodied AI, but they present a challenge at the intersection of mechatronics, control theory and AI. Simulation is crucial to solving this challenge by providing a safe, cost-effective and versatile platform for training and testing humanoids.

With NVIDIA Isaac Lab, an open-source unified framework for robot learning built on top of Isaac Sim, developers can train humanoid robot policies at scale via simulations. Leading commercial robot makers are adopting Isaac Lab to handle increasingly complex movements and interactions.

NVIDIA Project GR00T, an active research initiative to enable the ecosystem of humanoid robot builders, is pioneering workflows such as GR00T-Gen to generate robot tasks and simulation-ready environments in OpenUSD. These can be used for training generalist robots to perform manipulation, locomotion and navigation.

Recently published research from Project GR00T also shows how advanced simulation can be used to train interactive humanoids. Using Isaac Sim, the researchers developed a single unified controller for physically simulated humanoids called MaskedMimic. The system is capable of generating a wide range of motions across diverse terrains from intuitive user-defined intents.

Physics-Based Digital Twins Simplify AI Training

Partners across industries are using Isaac Sim, Isaac Lab, Omniverse and OpenUSD to design, simulate and deploy smarter, more capable autonomous machines:

  • Agility uses Isaac Lab to create simulations that let simulated robot behaviors transfer directly to the robot, making it more intelligent, agile and robust when deployed in the real world.
  • Cobot uses Isaac Sim with its AI-powered cobot, Proxie, to optimize logistics in warehouses, hospitals, manufacturing sites and more.
  • Cohesive Robotics has integrated Isaac Sim into its software framework called Argus OS for developing and deploying robotic workcells used in high-mix manufacturing environments.
  • Field AI, a builder of robot foundation models, uses Isaac Sim and Isaac Lab to evaluate the performance of its models in complex, unstructured environments across industries such as construction, manufacturing, oil and gas, mining, and more.
  • Fourier uses NVIDIA Isaac Gym and Isaac Lab to train its GR-2 humanoid robot, using reinforcement learning and advanced simulations to accelerate development, enhance adaptability and improve real-world performance.
  • Foxglove integrates Isaac Sim and Omniverse to enable efficient robot testing, training and sensor data analysis in realistic 3D environments.
  • Galbot used Isaac Sim to verify the data generation of DexGraspNet, a large-scale dataset of 1.32 million ShadowHand grasps, advancing robotic hand functionality by enabling scalable validation of diverse object interactions across 5,355 objects and 133 categories.
  • Standard Bots is simulating and validating the performance of its R01 robot used in manufacturing and machining setups.
  • Wandelbots integrates its NOVA platform with Isaac Sim to create physics-based digital twins and intuitive training environments, simplifying robot interaction and enabling seamless testing, validation and deployment of robotic systems in real-world scenarios.

Learn more about how Wandelbots is advancing robot learning with NVIDIA technology in this livestream recording.

Get Plugged Into the World of OpenUSD

NVIDIA experts and Omniverse Ambassadors are hosting livestream office hours and study groups to provide robotics developers with technical guidance and troubleshooting support for Isaac Sim and Isaac Lab. Learn how to get started simulating robots in Isaac Sim with this new, free course on NVIDIA Deep Learning Institute (DLI).

For more on optimizing OpenUSD workflows, explore the new self-paced Learn OpenUSD training curriculum that includes free DLI courses for 3D practitioners and developers. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and the AOUSD website.

Don’t miss the CES keynote delivered by NVIDIA founder and CEO Jensen Huang live in Las Vegas on Monday, Jan. 6, at 6:30 p.m. PT for more on the future of AI and graphics.

Stay up to date by subscribing to NVIDIA news, joining the community, and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.

Featured image courtesy of Fourier.

AI Pioneers Win Nobel Prizes for Physics and Chemistry

Artificial intelligence, once the realm of science fiction, claimed its place at the pinnacle of scientific achievement Monday in Sweden.

In a historic ceremony at Stockholm’s iconic Konserthuset, John Hopfield and Geoffrey Hinton received the Nobel Prize in Physics for their pioneering work on neural networks — systems that mimic the brain’s architecture and form the bedrock of modern AI.

Meanwhile, Demis Hassabis and John Jumper accepted the Nobel Prize in Chemistry for Google DeepMind’s AlphaFold, a system that solved biology’s “impossible” problem: predicting the structure of proteins, a feat with profound implications for medicine and biotechnology.

These achievements go beyond academic prestige. They mark the start of an era where GPU-powered AI systems tackle problems once deemed unsolvable, revolutionizing multitrillion-dollar industries from healthcare to finance.

Hopfield’s Legacy and the Foundations of Neural Networks

In the 1980s, Hopfield, a physicist with a knack for asking big questions, brought a new perspective to neural networks.

He introduced energy landscapes — borrowed from physics — to explain how neural networks solve problems by finding stable, low-energy states. His ideas, abstract yet elegant, laid the foundation for AI by showing how complex systems optimize themselves.
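
To make the idea concrete, here's a toy Hopfield network in Python: store a pattern with a Hebbian rule, corrupt it, and let asynchronous updates slide the state downhill in energy until the memory is recovered. The network size and random seed are arbitrary.

```python
import numpy as np

def store(patterns):
    """Hebbian learning: strengthen couplings between co-active neurons."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-coupling
    return W

def recall(W, state, sweeps=10):
    """Asynchronous updates; each flip lowers the energy E = -0.5 * s @ W @ s."""
    s = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=32)           # one stored memory of +/-1 states
W = store(pattern[None, :])
noisy = pattern.copy()
noisy[:5] *= -1                                  # corrupt five bits
print(np.array_equal(recall(W, noisy), pattern)) # settles back to the memory: True
```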

Fast forward to the early 2000s, when Geoffrey Hinton — a British cognitive psychologist with a penchant for radical ideas — picked up the baton. Hinton believed neural networks could revolutionize AI, but training these systems required enormous computational power.

Back in 1983, Hinton and Terry Sejnowski had built on Hopfield’s work to invent the Boltzmann Machine, which used stochastic binary neurons to jump out of local minima. They discovered an elegant and remarkably simple learning procedure, grounded in statistical mechanics, that offered an alternative to backpropagation.

In 2006, a simplified version of this learning procedure proved to be very effective at initializing deep neural networks before training them with backpropagation. However, training these systems still required enormous computational power.

AlphaFold: Biology’s AI Revolution

A decade after AlexNet, the 2012 GPU-trained image-recognition network from Hinton’s lab that ignited the deep learning era, AI moved to biology. Hassabis and Jumper led the development of AlphaFold to solve a problem that had stumped scientists for years: predicting the shape of proteins.

Proteins are life’s building blocks. Their shapes determine what they can do. Understanding these shapes is the key to fighting diseases and developing new medicines. But finding them was slow, costly and unreliable.

AlphaFold changed that. It used Hopfield’s ideas and Hinton’s networks to predict protein shapes with stunning accuracy. Powered by GPUs, it mapped almost every known protein. Now, scientists use AlphaFold to fight drug resistance, make better antibiotics and treat diseases once thought to be incurable.

What was once biology’s Gordian knot has been untangled — by AI.

The GPU Factor: Enabling AI’s Potential

GPUs, the indispensable engines of modern AI, are at the heart of these achievements. Originally designed to make video games look good, GPUs were perfect for the massive parallel processing demands of neural networks.

NVIDIA GPUs, in particular, became the engine driving breakthroughs like AlexNet and AlphaFold. Their ability to process vast datasets with extraordinary speed allowed AI to tackle problems on a scale and complexity never before possible.

Redefining Science and Industry

The Nobel-winning breakthroughs of 2024 aren’t just rewriting textbooks — they’re optimizing global supply chains, accelerating drug development and helping farmers adapt to changing climates.

Hopfield’s energy-based optimization principles now inform AI-powered logistics systems. Hinton’s architectures underpin self-driving cars and language models like ChatGPT. AlphaFold’s success is inspiring AI-driven approaches to climate modeling, sustainable agriculture and even materials science.

The recognition of AI in physics and chemistry signals a shift in how we think about science. These tools are no longer confined to the digital realm. They’re reshaping the physical and biological worlds.

Turn Down the Noise: CUDA-Q Enables Industry-First Quantum Computing Demo With Logical Qubits

Quantum computing has the potential to transform industries ranging from drug discovery to logistics, but a huge barrier standing between today’s quantum devices and useful applications is noise. These disturbances, introduced by environmental interactions and imperfect hardware, mean that today’s qubits can only perform hundreds of operations before quantum computations irretrievably deteriorate. 

Though seemingly inevitable, noise in quantum hardware can be tackled by so-called logical qubits — collections of tens, hundreds or even thousands of actual physical qubits that allow the correction of noise-induced errors. Logical qubits are the holy grail of quantum computing, and quantum hardware builder Infleqtion today published groundbreaking work that used the NVIDIA CUDA-Q platform to both design and demonstrate an experiment with two of them.

These logical qubits were used to perform a small-scale demonstration of the so-called single-impurity Anderson model, a key ingredient in high-accuracy approaches to many important materials science problems.

This marks the first time a materials science quantum algorithm has been demonstrated on logical qubits. Creating even a single logical qubit is extremely challenging. Infleqtion achieved the feat thanks to accurate modeling of its quantum computer using CUDA-Q’s unique GPU-accelerated simulation capabilities.

Having developed and tested its entire experiment within CUDA-Q’s simulators, Infleqtion could then, with only trivial changes, use CUDA-Q to orchestrate the experiment on the actual physical qubits within its Sqale neutral-atom quantum processor.
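
In CUDA-Q's Python API, that simulator-to-hardware hop is a one-line target change. Below is a minimal, illustrative kernel: the "nvidia" target selects the GPU-accelerated simulator, while hardware backend names vary by provider and require credentials, so the commented example is an assumption rather than a verified configuration.

```python
import cudaq

@cudaq.kernel
def ghz(qubit_count: int):
    # Entangle qubits into a GHZ state, a standard smoke test for a backend.
    qubits = cudaq.qvector(qubit_count)
    h(qubits[0])
    for i in range(qubit_count - 1):
        x.ctrl(qubits[i], qubits[i + 1])
    mz(qubits)

# Develop and debug against the GPU-accelerated simulator...
cudaq.set_target("nvidia")
print(cudaq.sample(ghz, 3))

# ...then retarget physical hardware with a one-line change (backend name
# varies by provider and requires account credentials), e.g.:
# cudaq.set_target("infleqtion")
```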

This work sets the stage for quantum computing’s move toward large-scale, error-corrected systems.  

Many scaling challenges still stand between today’s quantum devices and large systems of logical qubits, which will only be solved by integrating quantum hardware with AI supercomputers to form accelerated quantum supercomputers.  

NVIDIA continues to work with partners like Infleqtion to enable this breakthrough research needed to make accelerated quantum supercomputing a reality. 

Learn more about NVIDIA’s quantum computing platforms. 

Crowning Achievement: NVIDIA Research Model Enables Fast, Efficient Dynamic Scene Reconstruction

Content streaming and engagement are entering a new dimension with QUEEN, an AI model by NVIDIA Research and the University of Maryland that makes it possible to stream free-viewpoint video, which lets viewers experience a 3D scene from any angle.

QUEEN could be used to build immersive streaming applications that teach skills like cooking, put sports fans on the field to watch their favorite teams play from any angle, or bring an extra level of depth to video conferencing in the workplace. It could also be used in industrial environments to help teleoperate robots in a warehouse or a manufacturing plant.

The model will be presented at NeurIPS, the annual conference for AI research that begins Tuesday, Dec. 10, in Vancouver.

“To stream free-viewpoint videos in near real time, we must simultaneously reconstruct and compress the 3D scene,” said Shalini De Mello, director of research and a distinguished research scientist at NVIDIA. “QUEEN balances factors including compression rate, visual quality, encoding time and rendering time to create an optimized pipeline that sets a new standard for visual quality and streamability.”

Reduce, Reuse and Recycle for Efficient Streaming

Free-viewpoint videos are typically created using video footage captured from different camera angles, like a multicamera film studio setup, a set of security cameras in a warehouse or a system of videoconferencing cameras in an office.

Prior AI methods for generating free-viewpoint videos either took too much memory for livestreaming or sacrificed visual quality for smaller file sizes. QUEEN balances both to deliver high-quality visuals — even in dynamic scenes featuring sparks, flames or furry animals — that can be easily transmitted from a host server to a client’s device. It also renders visuals faster than previous methods, supporting streaming use cases.

In most real-world environments, many elements of a scene stay static. In a video, that means a large share of pixels don’t change from one frame to another. To save computation time, QUEEN tracks and reuses renders of these static regions — focusing instead on reconstructing the content that changes over time.
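
Purely as a conceptual toy, not QUEEN's actual algorithm, which operates on learned 3D scene representations rather than raw pixels, the following Python sketch shows the reuse-what's-static idea on 2D frames: flag only the regions that changed, and spend compute there.

```python
import numpy as np

def changed_region_mask(prev_frame, curr_frame, threshold=0.05):
    """Flag pixels whose color moved more than `threshold` between frames.
    Illustrative only: QUEEN applies the reuse-static/update-dynamic idea
    to 3D scene attributes, not raw 2D pixels."""
    return np.abs(curr_frame - prev_frame).max(axis=-1) > threshold

prev = np.zeros((4, 4, 3))
curr = prev.copy()
curr[1, 2] = 0.8  # one pixel changes between frames
mask = changed_region_mask(prev, curr)
print(int(mask.sum()), "of", mask.size, "pixels need re-encoding")
```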

Using an NVIDIA Tensor Core GPU, the researchers evaluated QUEEN’s performance on several benchmarks and found the model outperformed state-of-the-art methods for online free-viewpoint video on a range of metrics. Given 2D videos of the same scene captured from different angles, it typically takes under five seconds of training time to render free-viewpoint videos at around 350 frames per second.

This combination of speed and visual quality can support media broadcasts of concerts and sports games by offering immersive virtual reality experiences or instant replays of key moments in a competition.

In warehouse settings, robot operators could use QUEEN to better gauge depth when maneuvering physical objects. And in a videoconferencing application — such as the 3D videoconferencing demo shown at SIGGRAPH and NVIDIA GTC — it could help presenters demonstrate tasks like cooking or origami while letting viewers pick the visual angle that best supports their learning.

The code for QUEEN will soon be released as open source and shared on the project page.

QUEEN is one of over 50 NVIDIA-authored NeurIPS posters and papers that feature groundbreaking AI research with potential applications in fields including simulation, robotics and healthcare.

Generative Adversarial Nets, the paper that first introduced GAN models, won the NeurIPS 2024 Test of Time Award. Cited more than 85,000 times, the paper was coauthored by Bing Xu, distinguished engineer at NVIDIA. Hear more from its lead author, Ian Goodfellow, research scientist at DeepMind, on the AI Podcast.

Learn more about NVIDIA Research at NeurIPS.

See the latest work from NVIDIA Research, which has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Academic researchers working on large language models, simulation and modeling, edge AI and more can apply to the NVIDIA Academic Grant Program.

Thailand and Vietnam Embrace Sovereign AI to Drive Economic Growth

Southeast Asia is embracing sovereign AI.

The prime ministers of Thailand and Vietnam this week met with NVIDIA founder and CEO Jensen Huang to discuss initiatives that will accelerate AI innovation in their countries.

During his visit to the region, Huang also joined Bangkok-based cloud infrastructure company SIAM.AI Cloud onstage for a fireside chat on sovereign AI. In Vietnam, he announced NVIDIA’s collaboration with the country’s government on an AI research and development center — and NVIDIA’s acquisition of VinBrain, a health technology startup funded by Vingroup, one of Vietnam’s largest public companies.

These events capped a year of global investments in sovereign AI, the ability for countries to develop and harness AI using domestic computing infrastructure, data and workforces. AI will contribute nearly $20 trillion to the global economy through the end of the decade, according to IDC.

Canada, Denmark and Indonesia are among the countries that have announced initiatives to develop sovereign AI infrastructure powered by NVIDIA technology. And at the recent NVIDIA AI Summits in India and Japan, leading enterprises, infrastructure providers and startups in both countries announced sovereign AI projects in sectors including finance, healthcare and manufacturing.

Supporting Sovereign Cloud Infrastructure in Thailand

Huang’s Southeast Asia visit kicked off with a meeting with Thailand Prime Minister Paetongtarn Shinawatra, in which he discussed opportunities for sovereign AI development in Thailand and shared memories of his childhood years spent in Bangkok.

The pair discussed how further investing in AI education and training can help Thailand drive AI innovations in fields such as weather prediction, climate simulation and healthcare. NVIDIA is working with dozens of local universities and startups to support AI advancement in the country.

Huang and Shinawatra met in the Purple Room of the Thai-Khu-Fah building, which houses the offices of the prime minister and cabinet.

Huang later took the stage at an “AI Vision for Thailand” event hosted by SIAM.AI Cloud, a cloud platform company that offers customers access to virtual servers featuring NVIDIA Tensor Core GPUs.

“The most important part of artificial intelligence is the data. And the data of Thailand belongs to the Thai people,” Huang said in a fireside chat with Ratanaphon Wongnapachant, CEO of SIAM.AI Cloud. Highlighting the importance of sovereign AI development, Huang said, “The digital data of Thailand encodes the knowledge, the history, the culture, the common sense of your people. It should be harvested by your people.”

Following the conversation, Wongnapachant gifted Huang a custom leather jacket lined with Thai silk. The pair also signed an NVIDIA DGX H200 system in recognition of SIAM.AI Cloud’s plans to expand its offerings to NVIDIA H200 Tensor Core GPUs and NVIDIA GB200 Grace Blackwell Superchips.

Advancing AI From Research to Industry in Vietnam

In Hanoi the next day, Huang met with Vietnam’s Prime Minister Pham Minh Chinh, and NVIDIA signed an agreement to build the company’s first research and development center in the country. The center will focus on software development and collaborate with Vietnam’s enterprises, startups, government agencies and universities to accelerate AI adoption in the country.

The announcement builds on NVIDIA’s existing work with 65 universities in Vietnam and more than 100 of the country’s AI startups through NVIDIA Inception, a global program designed to help startups evolve faster. NVIDIA has acquired Inception member VinBrain, a Hanoi-based company that applies AI diagnostics to multimodal health data.

While in Vietnam, Huang also received the 2024 VinFuture Prize alongside AI pioneers Yoshua Bengio, Geoffrey Hinton, Yann LeCun and Fei-Fei Li for their “transformational contributions to the advancement of deep learning.”

Broadcast live nationally in the country, the awards ceremony was hosted by the VinFuture Foundation, a nonprofit that recognizes innovations in science and technology with significant societal impact.

“Our award today is recognition by the VinFuture committee of the transformative power of AI to revolutionize every field of science and every industry,” Huang said in his acceptance speech.

Bengio, Huang and LeCun accepted the 2024 VinFuture Prize onstage in Hanoi.

Learn more about sovereign AI.

Editor’s note: The data on the economic impact of AI is from IDC’s press release titled “IDC: Artificial Intelligence Will Contribute $19.9 Trillion to the Global Economy through 2030 and Drive 3.5% of Global GDP in 2030,” published in September 2024.
