New GeForce RTX 50 Series GPUs Double Creative Performance in 3D, Video and Generative AI

GeForce RTX 50 Series Desktop and Laptop GPUs, unveiled today at the CES trade show, are poised to power the next era of generative and agentic AI content creation — offering new tools and capabilities for video, livestreaming, 3D and more.

Built on the NVIDIA Blackwell architecture, GeForce RTX 50 Series GPUs can run creative generative AI models up to 2x faster in a smaller memory footprint, compared with the previous generation. They feature ninth-generation NVIDIA encoders for advanced video editing and livestreaming, and come with NVIDIA DLSS 4 and up to 32GB of VRAM to tackle massive 3D projects.

These GPUs come with various software updates, including two new AI-powered NVIDIA Broadcast effects, updates to RTX Video and RTX Remix, and NVIDIA NIM microservices — prepackaged and optimized models built to jumpstart AI content creation workflows on RTX AI PCs.

Built for the Generative AI Era

Generative AI can create sensational results for creators, but as models grow in both complexity and scale, they can be difficult to run even on the latest hardware.

The GeForce RTX 50 Series adds FP4 support to help address this issue. FP4 is a lower-precision quantization format that, much like file compression, shrinks model size. Compared with FP16 — the default precision most models ship with — FP4 uses less than half the memory, and 50 Series GPUs deliver over 2x the performance of the previous generation. With the advanced quantization methods offered by NVIDIA TensorRT Model Optimizer, this comes with virtually no loss in quality.
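For developers who want to try the workflow, here is a minimal sketch of post-training quantization with TensorRT Model Optimizer's `modelopt` package. `mtq.quantize` is the library's documented entry point, but the FP4 config name (`NVFP4_DEFAULT_CFG`) and the loader and calibration helpers below are assumptions to verify against the current Model Optimizer documentation — quantizing a full diffusion pipeline involves more plumbing than shown here.

```python
# Hedged sketch: post-training FP4 quantization with TensorRT Model Optimizer.
# The NVFP4_DEFAULT_CFG config name is an assumption; check current docs.
import torch
import modelopt.torch.quantization as mtq

def forward_loop(model):
    # Calibration pass: run a few representative inputs through the model
    # so the quantizer can observe activation ranges.
    for batch in calibration_batches:  # hypothetical iterable of sample inputs
        model(batch)

model = load_fp16_model()  # hypothetical loader for an FP16 checkpoint
model = mtq.quantize(model, mtq.NVFP4_DEFAULT_CFG, forward_loop)
torch.save(model.state_dict(), "model_fp4.pt")
```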

For example, Black Forest Labs’ FLUX.1 [dev] model at FP16 requires over 23GB of VRAM, meaning it can only be supported by the GeForce RTX 4090 and professional GPUs. With FP4, FLUX.1 [dev] requires less than 10GB, so it can run locally on more GeForce RTX GPUs.
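The memory savings are easy to sanity-check. A back-of-the-envelope sketch, assuming FLUX.1 [dev]'s published size of roughly 12 billion parameters and counting weights only (text encoders, activations and framework overhead add several more gigabytes in practice):

```python
# Rough VRAM needed for model weights alone at different precisions.
def weight_vram_gb(num_params: float, bits_per_weight: float) -> float:
    return num_params * bits_per_weight / 8 / 1e9

params = 12e9  # approximate FLUX.1 [dev] parameter count
print(f"FP16: {weight_vram_gb(params, 16):.0f} GB")  # ~24 GB: 24GB-class GPUs only
print(f"FP8:  {weight_vram_gb(params, 8):.0f} GB")
print(f"FP4:  {weight_vram_gb(params, 4):.0f} GB")   # ~6 GB: fits many GeForce GPUs
```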

On a GeForce RTX 4090 using FP16, the FLUX.1 [dev] model generates a 30-step image in 15 seconds. On a GeForce RTX 5090 using FP4, the same image takes just over five seconds.

A new NVIDIA AI Blueprint for 3D-guided generative AI based on FLUX.1 [dev], which will be offered as an NVIDIA NIM microservice, offers artists greater control over text-based image generation. With this blueprint, creators can use simple 3D objects — created by hand or generated with AI — and lay them out in a 3D renderer like Blender to guide AI image generation.

A prepackaged workflow powered by the FLUX NIM microservice and ComfyUI can then generate high-quality images that match the 3D scene’s composition.

The NVIDIA Blueprint for 3D-guided generative AI is expected to be available through GitHub using a one-click installer in February.

Stability AI announced that its Stable Point Aware 3D, or SPAR3D, model will be available this month on RTX AI PCs. Thanks to RTX acceleration, the new model from Stability AI will help transform 3D design, delivering exceptional control over 3D content creation by enabling real-time editing and the ability to generate an object in less than a second from a single image.

Professional-Grade Video for All

GeForce RTX 50 Series GPUs deliver a generational leap in NVIDIA encoders and decoders with support for the 4:2:2 pro-grade color format, multiview-HEVC (MV-HEVC) for 3D and virtual reality (VR) video, and the new AV1 Ultra High Quality mode.

Most consumer cameras are confined to 4:2:0 color compression, which reduces the amount of color information. 4:2:0 is typically sufficient for video playback on browsers, but it can’t provide the color depth needed for advanced video editors to color grade videos. The 4:2:2 format provides double the color information with just a 1.3x increase in RAW file size — offering an ideal balance for video editing workflows.
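It's easy to see where both numbers come from: in every 2x2 block of pixels, each scheme stores four luma samples, while 4:2:0 keeps one chroma pair and 4:2:2 keeps two. A quick sketch, assuming equal bit depth per sample:

```python
# Samples stored per 2x2 pixel block under common chroma-subsampling schemes.
def samples_per_2x2(scheme: str) -> tuple[int, int]:
    luma = 4  # one Y sample per pixel in every scheme
    chroma = {"4:4:4": 8, "4:2:2": 4, "4:2:0": 2}[scheme]  # Cb + Cr samples
    return luma, chroma

y0, c0 = samples_per_2x2("4:2:0")
y2, c2 = samples_per_2x2("4:2:2")
print(f"chroma ratio: {c2 / c0:.0f}x")                  # 2x the color information
print(f"raw size ratio: {(y2 + c2) / (y0 + c0):.2f}x")  # ~1.33x the file size
```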

Decoding 4:2:2 video can be challenging due to the increased file sizes. GeForce RTX 50 Series GPUs include 4:2:2 hardware support that can decode up to eight 4K 60 frames per second (fps) video sources per decoder, enabling smooth multi-camera video editing.

The GeForce RTX 5090 GPU is equipped with three encoders and two decoders, the GeForce RTX 5080 GPU includes two encoders and two decoders, the GeForce RTX 5070 Ti GPU has two encoders and a single decoder, and the GeForce RTX 5070 GPU includes a single encoder and decoder. These multi-encoder and multi-decoder setups, paired with faster GPUs, enable the GeForce RTX 5090 to export video 60% faster than the GeForce RTX 4090 and 4x faster than the GeForce RTX 3090.
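Those decoder counts translate directly into multi-camera headroom. A quick sketch, assuming the stated per-decoder capacity of eight 4:2:2 4K60 streams scales linearly with decoder count:

```python
# Simultaneous 4:2:2 4K60 decode streams implied by the specs above.
STREAMS_PER_DECODER = 8  # stated per-decoder capacity
decoders = {"RTX 5090": 2, "RTX 5080": 2, "RTX 5070 Ti": 1, "RTX 5070": 1}

for gpu, count in decoders.items():
    print(f"{gpu}: up to {count * STREAMS_PER_DECODER} streams")
```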

GeForce RTX 50 Series GPUs also feature the ninth-generation NVIDIA video encoder, NVENC, which offers a 5% video quality improvement (BD-BR) for HEVC and AV1 encoding, as well as the new AV1 Ultra High Quality mode that achieves 5% more compression at the same quality. They also include the sixth-generation NVIDIA decoder, with 2x the decode speed for H.264 video.

NVIDIA is collaborating with the developers of Adobe Premiere Pro, Blackmagic Design’s DaVinci Resolve, CapCut and Wondershare Filmora to integrate these technologies, starting in February.

3D video is starting to catch on thanks to the growth of VR, AR and mixed reality headsets. The new RTX 50 Series GPUs also come with support for MV-HEVC codecs to unlock such formats in the near future.

Livestreaming Enhanced

Livestreaming is a juggling act, where the streamer has to entertain the audience, produce a show and play a video game — all at the same time. Top streamers can afford to hire producers and moderators to share the workload, but most have to manage these responsibilities on their own and often in long shifts — until now.

Streamlabs, a Logitech brand and leading provider of broadcasting software and tools for content creators, is collaborating with NVIDIA and Inworld AI to create the Streamlabs Intelligent Streaming Assistant.

Streamlabs Intelligent Streaming Assistant is an AI agent that can act as a sidekick, producer and technical support. As a sidekick, it can join streams as a 3D avatar to answer questions, comment on gameplay or chats, or help initiate conversations during quiet periods. It can help produce streams, switching to the most relevant scenes and playing audio and video cues during interesting gameplay moments. It can even serve as an IT assistant that helps configure streams and troubleshoot issues.

Streamlabs Intelligent Streaming Assistant is powered by NVIDIA ACE technologies for creating digital humans and Inworld AI, an AI framework for agentic AI experiences. The assistant will be available later this year.

Millions have used the NVIDIA Broadcast app to turn offices and dorm rooms into home studios using AI-powered features that improve audio and video quality — without needing expensive, specialized equipment.

Two new AI-powered beta effects are being added to the NVIDIA Broadcast app.

The first, Studio Voice, enhances the sound of a user’s microphone to match that of a high-quality microphone. The other, Virtual Key Light, can relight a subject’s face to deliver even coverage as if it were well-lit by two lights.

Because they harness demanding AI models, these beta features are recommended for video conferencing or non-gaming livestreams using a GeForce RTX 5080 GPU or higher. NVIDIA is working to expand these features to more GeForce RTX GPUs in future updates.

The NVIDIA Broadcast upgrade also includes an updated user interface that allows users to apply more effects simultaneously, as well as improvements to the background noise removal, virtual background and eye contact effects.

The updated NVIDIA Broadcast app will be available in February.

Livestreamers can also benefit from the ninth-generation NVENC — with its 5% BD-BR video quality improvement for HEVC and AV1 — in the latest beta of Twitch’s Enhanced Broadcast feature in OBS, as well as from the improved AV1 encoder when streaming on Discord or YouTube.

RTX Video — an AI feature that enhances video playback in popular browsers like Google Chrome and Microsoft Edge, and locally, with Video Super Resolution and HDR — is getting an update that decreases GPU usage by 30%, expanding the lineup of GeForce RTX GPUs that can run Video Super Resolution at higher quality.

The RTX Video update is slated for a future NVIDIA App release.

Unprecedented 3D Render Performance

The GeForce RTX 5090 GPU offers 32GB of GPU memory — the largest of any GeForce RTX GPU ever, marking a 33% increase over the GeForce RTX 4090 GPU. This lets 3D artists build larger, richer worlds while using multiple applications simultaneously. Plus, new RTX 50 Series fourth-generation RT Cores can run 3D applications 40% faster.

DLSS 4 debuts Multi Frame Generation to boost frame rates by using AI to generate up to three frames per rendered frame. This enables animators to smoothly navigate a scene with 4x as many frames, or render 3D content at 60 fps or more.
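The arithmetic is straightforward: up to three generated frames per rendered frame means up to four displayed frames per render.

```python
# Effective frame rate with Multi Frame Generation: each rendered frame is
# followed by up to three AI-generated frames, so displayed fps = 4x rendered.
def effective_fps(rendered_fps: float, generated_per_rendered: int = 3) -> float:
    return rendered_fps * (1 + generated_per_rendered)

print(effective_fps(15))  # a 15 fps viewport displays at ~60 fps
```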

D5 Render and Chaos Vantage, two popular professional-grade 3D apps for architects and designers, will add support for DLSS 4 in February.

3D artists have adopted generative AI to boost productivity in generating draft 3D meshes, HDRi maps or even animations to prototype a scene. At CES, Stability AI announced SPAR3D, its new 3D model that can generate 3D meshes from images in seconds with RTX acceleration.

NVIDIA RTX Remix — a modding platform that lets modders capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing — supports DLSS 4, increasing graphical fidelity and frame rates to maximize realism and immersion during gameplay.

RTX Remix will soon support Neural Radiance Cache, a neural shader that uses AI to train on live game data and estimate accurate per-pixel indirect lighting. RTX Remix creators will also gain access to RTX Skin in their mods, the first ray-traced subsurface scattering implementation in games. With RTX Skin, RTX Remix mods can feature characters with new levels of realism, as light reflects and propagates through their skin, grounding them in the worlds they inhabit.

GeForce RTX 5090 and 5080 GPUs will be available for purchase starting Jan. 30 — followed by GeForce RTX 5070 Ti and 5070 GPUs in February and RTX 50 Series laptops in March.

All systems equipped with GeForce RTX GPUs include the NVIDIA Studio platform optimizations, with over 130 GPU-accelerated content creation apps, as well as NVIDIA Studio Drivers, tested extensively and released monthly to enhance performance and maximize stability in popular creative applications.

Stay tuned for more updates on the GeForce RTX 50 Series. Learn more about how the GeForce RTX 50 Series supercharges gaming, and check out all of NVIDIA’s announcements at CES.

Every month brings new creative app updates and optimizations powered by the NVIDIA Studio platform.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

NVIDIA Announces Nemotron Model Families to Advance Agentic AI

Artificial intelligence is entering a new era — agentic AI — where teams of specialized agents can help people solve complex problems and automate repetitive tasks.

With custom AI agents, enterprises across industries can manufacture intelligence and achieve unprecedented productivity. These advanced AI agents require a system of multiple generative AI models optimized for agentic AI functions and capabilities. This complexity means that the need for powerful, efficient, enterprise-grade models has never been greater.

To provide a foundation for enterprise agentic AI, NVIDIA today announced the Llama Nemotron family of open large language models (LLMs). Built with Llama, the models can help developers create and deploy AI agents across a range of applications — including customer support, fraud detection, and product supply chain and inventory management optimization.

To be effective, many AI agents need both language skills and the ability to perceive the world and respond with the appropriate action.

With new NVIDIA Cosmos Nemotron vision language models (VLMs) and NVIDIA NIM microservices for video search and summarization, developers can build agents that analyze and respond to images and video from autonomous machines, hospitals, stores and warehouses, as well as sports events, movies and news. For developers seeking to generate physics-aware videos for robotics and autonomous vehicles, NVIDIA today separately announced NVIDIA Cosmos world foundation models.

Open Llama Nemotron Models Optimize Compute Efficiency, Accuracy for AI Agents

Built with Llama foundation models — one of the most popular commercially viable open-source model collections, downloaded over 650 million times — NVIDIA Llama Nemotron models provide optimized building blocks for AI agent development. This builds on NVIDIA’s commitment to developing state-of-the-art models, such as Llama 3.1 Nemotron 70B, now available through the NVIDIA API catalog.
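As a minimal sketch of what "available through the NVIDIA API catalog" means in practice, the snippet below queries the hosted Llama 3.1 Nemotron 70B endpoint through its OpenAI-compatible interface. The endpoint URL and model id reflect the catalog at announcement time; treat both as assumptions to verify on build.nvidia.com.

```python
# Hedged sketch: call a Nemotron model via the API catalog's OpenAI-compatible
# endpoint. Requires an API key from build.nvidia.com in NVIDIA_API_KEY.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)
resp = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # catalog id at launch
    messages=[{"role": "user", "content": "Plan the steps to resolve a refund request."}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```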

Llama Nemotron models are pruned and trained with NVIDIA’s latest techniques and high-quality datasets for enhanced agentic capabilities. They excel at instruction following, chat, function calling, coding and math, while being size-optimized to run on a broad range of NVIDIA accelerated computing resources.

“Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimization across a system of LLMs to deliver efficient, accurate AI agents,” said Ahmad Al-Dahle, vice president and head of GenAI at Meta. “Through our collaboration with NVIDIA and our shared commitment to open models, the NVIDIA Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents.”

Leading AI agent platform providers including SAP and ServiceNow are expected to be among the first to use the new Llama Nemotron models.

“AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today’s generative AI scenarios,” said Philipp Herzig, chief AI officer at SAP. “Through SAP’s Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. NVIDIA’s new open Llama Nemotron model family will foster the development of multiple specialized AI agents to transform business processes.”

“AI agents make it possible for organizations to achieve more with less effort, setting new standards for business transformation,” said Jeremy Barnes, vice president of platform AI at ServiceNow. “The improved performance and accuracy of NVIDIA’s open Llama Nemotron models can help build advanced AI agent services that solve complex problems across functions, in any industry.”

The NVIDIA Llama Nemotron models use NVIDIA NeMo for distilling, pruning and alignment. Using these techniques, the models are small enough to run on a variety of computing platforms while providing high accuracy as well as increased model throughput.

The Llama Nemotron model family will be available as downloadable models and as NVIDIA NIM microservices that can be easily deployed on clouds, data centers, PCs and workstations. They offer enterprises industry-leading performance with reliable, secure and seamless integration into their agentic AI application workflows.

Customize and Connect to Business Knowledge With NVIDIA NeMo

The Llama Nemotron and Cosmos Nemotron model families are coming in Nano, Super and Ultra sizes to provide options for deploying AI agents at every scale.

  • Nano: The most cost-effective model optimized for real-time applications with low latency, ideal for deployment on PCs and edge devices.
  • Super: A high-accuracy model offering exceptional throughput on a single GPU.
  • Ultra: The highest-accuracy model, designed for data-center-scale applications demanding the highest performance.

Enterprises can also customize the models for their specific use cases and domains with NVIDIA NeMo microservices to simplify data curation, accelerate model customization and evaluation, and apply guardrails to keep responses on track.

With NVIDIA NeMo Retriever, developers can also integrate retrieval-augmented generation capabilities to connect models to their enterprise data.
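As a loose illustration of that retrieval step, the sketch below embeds a query and a document passage with a NeMo Retriever embedding NIM exposed through the same OpenAI-compatible interface. The model id (`nvidia/nv-embedqa-e5-v5`) and the `input_type` field follow the API catalog documentation at announcement time and are assumptions to verify.

```python
# Hedged sketch: embed text with a NeMo Retriever embedding NIM for RAG.
import os
from openai import OpenAI

client = OpenAI(base_url="https://integrate.api.nvidia.com/v1",
                api_key=os.environ["NVIDIA_API_KEY"])

def embed(texts: list[str], input_type: str) -> list[list[float]]:
    # input_type is "query" for questions, "passage" for documents.
    resp = client.embeddings.create(
        model="nvidia/nv-embedqa-e5-v5",  # assumed catalog id
        input=texts,
        extra_body={"input_type": input_type},
    )
    return [d.embedding for d in resp.data]

query_vec = embed(["What is our data-retention policy?"], "query")[0]
doc_vecs = embed(["Policy: customer data is retained for 90 days."], "passage")
# Rank passages by similarity against query_vec, then pass the best hits
# to the LLM as grounding context.
```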

And using NVIDIA Blueprints for agentic AI, enterprises can quickly create their own applications using NVIDIA’s advanced AI tools and end-to-end development expertise. In fact, NVIDIA Cosmos Nemotron, NVIDIA Llama Nemotron and NeMo Retriever supercharge the new NVIDIA Blueprint for video search and summarization, announced separately today.

NeMo, NeMo Retriever and NVIDIA Blueprints are all available with the NVIDIA AI Enterprise software platform.

Availability

Llama Nemotron and Cosmos Nemotron models will be available soon as hosted application programming interfaces and for download on build.nvidia.com and Hugging Face. Access for development, testing and research is free for members of the NVIDIA Developer Program.

Enterprises can run Llama Nemotron and Cosmos Nemotron NIM microservices in production with the NVIDIA AI Enterprise software platform on accelerated data center and cloud infrastructure.

Sign up to get notified about Llama Nemotron and Cosmos Nemotron models, and join NVIDIA at CES.

NVIDIA Enhances Three Computer Solution for Autonomous Mobility With Cosmos World Foundation Models

Autonomous vehicle (AV) development is made possible by three distinct computers: NVIDIA DGX systems for training the AI-based stack in the data center, NVIDIA Omniverse running on NVIDIA OVX systems for simulation and synthetic data generation, and the NVIDIA AGX in-vehicle computer to process real-time sensor data for safety.

Together, these purpose-built, full-stack systems enable continuous development cycles, speeding improvements in performance and safety.

At the CES trade show, NVIDIA today announced a new part of the equation: NVIDIA Cosmos, a platform comprising state-of-the-art generative world foundation models (WFMs), advanced tokenizers, guardrails and an accelerated video processing pipeline built to advance the development of physical AI systems such as AVs and robots.

With Cosmos added to the three-computer solution, developers gain a data flywheel that can turn thousands of human-driven miles into billions of virtually driven miles — amplifying training data quality.

“The AV data factory flywheel consists of fleet data collection, accurate 4D reconstruction and AI to generate scenes and traffic variations for training and closed-loop evaluation,” said Sanja Fidler, vice president of AI research at NVIDIA. “Using the NVIDIA Omniverse platform, as well as Cosmos and supporting AI models, developers can generate synthetic driving scenarios to amplify training data by orders of magnitude.”

“Developing physical AI models has traditionally been resource-intensive and costly for developers, requiring acquisition of real-world datasets and filtering, curating and preparing data for training,” said Norm Marks, vice president of automotive at NVIDIA. “Cosmos accelerates this process with generative AI, enabling smarter, faster and more precise AI model development for autonomous vehicles and robotics.”

Transportation leaders are using Cosmos to build physical AI for AVs, including:

  • Waabi, a company pioneering generative AI for the physical world, will use Cosmos for the search and curation of video data for AV software development and simulation.
  • Wayve, which is developing AI foundation models for autonomous driving, is evaluating Cosmos as a tool to search for edge and corner case driving scenarios used for safety and validation.
  • AV toolchain provider Foretellix will use Cosmos, alongside NVIDIA Omniverse Sensor RTX APIs, to evaluate and generate high-fidelity testing scenarios and training data at scale.
  • In addition, ridesharing giant Uber is partnering with NVIDIA to accelerate autonomous mobility. Rich driving datasets from Uber, combined with the features of the Cosmos platform and NVIDIA DGX Cloud, will help AV partners build stronger AI models even more efficiently.

Availability

Cosmos WFMs are now available under an open model license on Hugging Face and the NVIDIA NGC catalog. Cosmos models will soon be available as fully optimized NVIDIA NIM microservices.

Get started with Cosmos and join NVIDIA at CES.

NVIDIA and Partners Launch Agentic AI Blueprints to Automate Work for Every Enterprise

New NVIDIA AI Blueprints for building agentic AI applications are poised to help enterprises everywhere automate work.

With the blueprints, developers can now build and deploy custom AI agents. These AI agents act like “knowledge robots” that can reason, plan and take action to quickly analyze large quantities of data, summarizing and distilling real-time insights from video, PDFs and images.

CrewAI, Daily, LangChain, LlamaIndex and Weights & Biases are among leading providers of agentic AI orchestration and management tools that have worked with NVIDIA to build blueprints that integrate the NVIDIA AI Enterprise software platform, including NVIDIA NIM microservices and NVIDIA NeMo, with their platforms. These five blueprints — comprising a new category of partner blueprints for agentic AI — provide the building blocks for developers to create the next wave of AI applications that will transform every industry.

In addition to the partner blueprints, NVIDIA is introducing its own new AI Blueprint for PDF to podcast, as well as another to build AI agents for video search and summarization. These are joined by four additional NVIDIA Omniverse Blueprints that make it easier for developers to build simulation-ready digital twins for physical AI.

To help enterprises rapidly take AI agents into production, Accenture is announcing AI Refinery for Industry built with NVIDIA AI Enterprise, including NVIDIA NeMo, NVIDIA NIM microservices and AI Blueprints.

The AI Refinery for Industry solutions — powered by Accenture AI Refinery with NVIDIA — can help enterprises rapidly launch agentic AI across fields like automotive, technology, manufacturing, consumer goods and more.

Agentic AI Orchestration Tools Conduct a Symphony of Agents

Agentic AI represents the next wave in the evolution of generative AI. It enables applications to move beyond simple chatbot interactions to tackle complex, multi-step problems through sophisticated reasoning and planning. As explained in NVIDIA founder and CEO Jensen Huang’s CES keynote, enterprise AI agents will become a centerpiece of AI factories that generate tokens to create unprecedented intelligence and productivity across industries.

Agentic AI orchestration is a sophisticated system designed to manage, monitor and coordinate multiple AI agents working together — key to developing reliable enterprise agentic AI systems. The agentic AI orchestration layer from NVIDIA partners provides the glue needed for AI agents to effectively work together.

The new partner blueprints, now available from agentic AI orchestration leaders, offer integrations with NVIDIA AI Enterprise software, including NIM microservices and NVIDIA NeMo Retriever, to boost retrieval accuracy and reduce latency of agent workflows. For example:

  • CrewAI is using new Llama 3.3 70B NVIDIA NIM microservices and the NVIDIA NeMo Retriever embedding NIM microservice for its blueprint for code documentation for software development. The blueprint helps ensure code repositories remain comprehensive and easy to navigate.
  • Daily’s voice agent blueprint, powered by the company’s open-source Pipecat framework, uses the NVIDIA Riva automatic speech recognition and text-to-speech NIM microservice, along with the Llama 3.3 70B NIM microservice to achieve real-time conversational AI.
  • LangChain is adding Llama 3.3 70B NVIDIA NIM microservices to its structured report generation blueprint. Built on LangGraph, the blueprint allows users to define a topic and specify an outline to guide an agent in searching the web for relevant information, so it can return a report in the requested format. (A sketch of the underlying NIM call appears after this list.)
  • LlamaIndex’s document research assistant for blog creation blueprint harnesses NVIDIA NIM microservices and NeMo Retriever to help content creators produce high-quality blogs. It can tap into agentic-driven retrieval-augmented generation with NeMo Retriever to automatically research, outline and generate compelling content with source attribution.
  • Weights & Biases is adding its W&B Weave capability to the AI Blueprint for AI virtual assistants, which features the Llama 3.1 70B NIM microservice. The blueprint can streamline the process of debugging, evaluating, iterating and tracking production performance and collecting human feedback to support seamless integration and faster iterations for building and deploying agentic AI applications.

Summarize Many, Complex PDFs While Keeping Proprietary Data Secure 

With trillions of PDF files — from financial reports to technical research papers — generated every year, it’s a constant challenge to stay up to date with information.

NVIDIA’s PDF to podcast AI Blueprint provides a recipe developers can use to turn multiple long and complex PDFs into AI-generated readouts that can help professionals, students and researchers efficiently learn about virtually any topic and quickly understand key takeaways.

The blueprint — built on NIM microservices and text-to-speech models — allows developers to build applications that extract images, tables and text from PDFs, and convert the data into easily digestible audio content, all while keeping data secure.

For example, developers can build AI agents that can understand context, identify key points and generate a concise summary as a monologue or a conversation-style podcast, narrated in a natural voice. This offers users an engaging, time-efficient way to absorb information at their desired speed.
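As a loose illustration of the pipeline's first two stages — the production blueprint packages NIM microservices end to end, including text-to-speech — the sketch below extracts text with the `pypdf` library and asks an LLM to draft a two-host script. The model id and the use of pypdf are stand-ins for illustration, not the blueprint's actual components.

```python
# Hedged sketch: PDF text extraction plus an LLM-drafted podcast script.
import os
from openai import OpenAI
from pypdf import PdfReader

text = "\n".join(page.extract_text() or "" for page in PdfReader("report.pdf").pages)

client = OpenAI(base_url="https://integrate.api.nvidia.com/v1",
                api_key=os.environ["NVIDIA_API_KEY"])
script = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # stand-in model id
    messages=[{"role": "user",
               "content": "Turn this into a short two-host podcast script:\n\n" + text[:8000]}],
).choices[0].message.content
print(script)  # next stage: synthesize each line with a text-to-speech model
```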

Test, Prototype and Run Agentic AI Blueprints in One Click

NVIDIA Blueprints empower the world’s more than 25 million software developers to easily integrate AI into their applications across various industries. These blueprints simplify the process of building and deploying agentic AI applications, making advanced AI integration more accessible than ever.

With just a single click, developers can now build and run the new agentic AI Blueprints as NVIDIA Launchables. These Launchables provide on-demand access to developer environments with predefined configurations, enabling quick workflow setup.

By containing all necessary components for development, Launchables support consistent and reproducible setups without the need for manual configuration or overhead — streamlining the entire development process, from prototyping to deployment.

Enterprises can also deploy blueprints into production with the NVIDIA AI Enterprise software platform on data center platforms including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro, or run them on accelerated cloud platforms from Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure.

Accenture and NVIDIA Fast-Track Deployments With AI Refinery for Industry

Accenture is introducing its new AI Refinery for Industry with 12 new industry agent solutions built with NVIDIA AI Enterprise software and available from the Accenture NVIDIA Business Group. These industry-specific agent solutions include revenue growth management for consumer goods and services, clinical trial companion for life sciences, industrial asset troubleshooting and B2B marketing, among others.

AI Refinery for Industry offerings include preconfigured components, best practices and foundational elements designed to fast-track the development of AI agents. They provide organizations the tools to build specialized AI networks tailored to their industry needs.

Accenture plans to launch over 100 AI Refinery for Industry agent solutions by the end of the year.

Get started with AI Blueprints and join NVIDIA at CES.

NVIDIA Media2 Transforms Content Creation, Streaming and Audience Experiences With AI

From creating the GPU, RTX real-time ray tracing and neural rendering to now reinventing computing for AI, NVIDIA has for decades been at the forefront of computer graphics — pushing the boundaries of what’s possible in media and entertainment.

NVIDIA Media2 is the latest AI-powered initiative transforming content creation, streaming and live media experiences.

Built on technologies like NVIDIA NIM microservices and AI Blueprints — and breakthrough AI applications from startups and software partners — Media2 uses AI to drive the creation of smarter, more tailored and more impactful content that can adapt to individual viewer preferences.

Amid this rapid creative transformation, companies embracing NVIDIA Media2 can stay on the $3 trillion media and entertainment industry’s cutting edge, reshaping how audiences consume and engage with content.

[Image: NVIDIA Media2 technology stack]

NVIDIA Technologies at the Heart of Media2

As the media and entertainment industry embraces generative AI and accelerated computing, NVIDIA technologies are transforming how content is created, delivered and experienced.

NVIDIA Holoscan for Media is a software-defined, AI-enabled platform that allows companies in broadcast, streaming and live sports to run live video pipelines on the same infrastructure as AI. The platform delivers applications from vendors across the industry on NVIDIA-accelerated infrastructure.

Delivering the power needed to drive the next wave of data-enhanced intelligent content creation and hyper-personalized media is the NVIDIA Blackwell architecture, built to handle data-center-scale generative AI workflows with up to 25x more energy efficiency over the NVIDIA Hopper generation. Blackwell integrates six types of chips: GPUs, CPUs, DPUs, NVIDIA NVLink Switch chips, NVIDIA InfiniBand switches and Ethernet switches.

Blackwell is supported by NVIDIA AI Enterprise, an end-to-end software platform for production-grade AI. NVIDIA AI Enterprise comprises NVIDIA NIM microservices, AI frameworks, libraries and tools that media companies can deploy on NVIDIA-accelerated clouds, data centers and workstations. The expanding list includes:

  • The Mistral-NeMo-12B-Instruct NIM microservice, which enables multilingual information retrieval — the ability to search, process and retrieve knowledge across languages. This is key in enhancing an AI model’s outputs with greater accuracy and global relevancy.
  • The NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI, which can help advertisers easily build personalized, on-brand and product-accurate marketing content at scale using real-time rendering and generative AI without affecting a hero product asset.
  • The NVIDIA Cosmos Nemotron vision language model NIM microservice, which is a multimodal VLM that can understand the meaning and context of text, images and video. With the microservice, media companies can query images and videos with natural language and receive informative responses.
  • The NVIDIA Edify multimodal generative AI architecture, which can generate visual assets — like images, 3D models and HDRi environments — from text or image prompts. It offers advanced editing tools and efficient training for developers. With NVIDIA AI Foundry, service providers can customize Edify models for commercial visual services using NVIDIA NIM microservices.

Partners in the Media2 Ecosystem

Partners across the industry are adopting NVIDIA technology to reshape the next chapter of storytelling.

Getty Images and Shutterstock offer intelligent content creation services built with NVIDIA Edify. The AI models have also been optimized and packaged for maximum performance with NVIDIA NIM microservices.

Bria is a commercial-first visual generative AI platform designed for developers. It’s trained on 100% licensed data and built on responsible AI principles. The platform offers tools for custom pipelines, seamless integration and flexible deployment, ensuring enterprise-grade compliance and scalable, predictable content generation. Optimized with NVIDIA NIM microservices, Bria delivers faster, safer and scalable production-ready solutions.

Runway is an AI platform that provides advanced creative tools for artists and filmmakers. The company’s Gen-3 Alpha Turbo model excels in video generation and includes a new Camera Control feature that allows for precise camera movements like pan, tilt and zoom. Runway’s integration of the NVIDIA CV-CUDA open-source library combined with NVIDIA GPUs accelerates preprocessing for high-resolution videos in its segmentation model.

Wonder Dynamics, an Autodesk company, recently launched the beta version of Wonder Animation, featuring powerful new video-to-3D scene technology that can turn any video sequence into a 3D-animated scene for animated film production. Accelerated by NVIDIA GPU technology, Wonder Animation provides visual effects artists and animators with an easy-to-use, flexible tool that significantly reduces the time, complexity and efforts traditionally associated with 3D animation and visual effects workflows — while allowing the artist to maintain full creative control.

Comcast’s Sky innovation team is collaborating with NVIDIA on lab testing NVIDIA NIM microservices and partner models for its global platforms. The integration could lead to greater interactivity and accessibility for customers around the world, such as enabling the use of voice commands to request summaries during live sports and access other contextual information.

Vū, a creative technology company and home to the largest network of virtual studios, is broadening access to the creation of virtual environments and immersive content with NVIDIA-accelerated generative AI technologies.

Twelve Labs, a member of the NVIDIA Inception program for startups, is developing advanced multimodal foundation models that can understand videos like humans, enabling precise semantic search, content analysis and video-to-text generation. Twelve Labs uses NVIDIA H100 GPUs to significantly improve the models’ inference performance, achieving up to a 7x improvement in requests served per second.

S4 Capital’s Monks is using cutting-edge AI technologies to enhance live broadcasts with real-time content segmentation and personalized fan experiences. Powered by NVIDIA Holoscan for Media, the company’s solution is integrated with tools like NVIDIA VILA to generate contextual metadata for injection within a time-addressable media store framework — enabling precise, action-based searching within video content.

Additionally, Monks uses NVIDIA NeMo Curator to help process data to build tailored AI models for sports leagues and IP holders, unlocking new monetization opportunities through licensing. By combining these technologies, broadcasters can seamlessly deliver hyper-relevant content to fans as events unfold, while adapting to the evolving demands of modern audiences.

Media companies manage vast amounts of video content, which can be challenging and time-consuming to locate, catalog and compile into finished assets. Leading media-focused consultant and system integrator Qvest has developed an AI video discovery engine, built on NIM microservices, that accelerates this process by automating the data capture of video files. This streamlines a user’s ability to both discover and contextualize how videos can fit in their intended story.

Verizon is transforming global enterprise operations, as well as live media and sports content, by integrating its reliable, secure private 5G network with NVIDIA’s full-stack AI platform, including NVIDIA AI Enterprise and NIM microservices, to deliver the latest AI solutions at the edge.

Using this solution, streamers, sports leagues and rights holders can enhance fan experiences with greater interactivity and immersion by deploying high-performance 5G connectivity along with generative AI, agentic AI, extended reality and streaming applications that enable personalized content delivery. These technologies also help elevate player performance and viewer engagement by offering real-time data analytics to coaches, players, referees and fans. It can also enable private 5G-powered enterprise AI use cases to drive automation and productivity.

Welcome to NVIDIA Media2

The NVIDIA Media2 initiative empowers companies to redefine the future of media and entertainment through intelligent, data-driven and immersive technologies — giving them a competitive edge while equipping them to drive innovation across the industry.

NIM microservices from NVIDIA and model developers are now available to try, with additional models added regularly.

Get started with NVIDIA NIM and AI Blueprints, and watch the CES opening keynote delivered by NVIDIA founder and CEO Jensen Huang to hear the latest advancements in AI.

NVIDIA Announces Isaac GR00T Blueprint to Accelerate Humanoid Robotics Development

Over the next two decades, the market for humanoid robots is expected to reach $38 billion. To address this significant demand, particularly in industrial and manufacturing sectors, NVIDIA is releasing a collection of robot foundation models, data pipelines and simulation frameworks to accelerate next-generation humanoid robot development efforts.

Announced by NVIDIA founder and CEO Jensen Huang today at the CES trade show, the NVIDIA Isaac GR00T Blueprint for synthetic motion generation helps developers generate exponentially large synthetic motion data to train their humanoids using imitation learning.

Imitation learning — a subset of robot learning — enables humanoids to acquire new skills by observing and mimicking expert human demonstrations. Collecting these extensive, high-quality datasets in the real world is tedious, time-consuming and often prohibitively expensive. Implementing the Isaac GR00T blueprint for synthetic motion generation allows developers to easily generate exponentially large synthetic datasets from just a small number of human demonstrations.

Starting with the GR00T-Teleop workflow, users can tap into the Apple Vision Pro to capture human actions in a digital twin. These human actions are mimicked by a robot in simulation and recorded for use as ground truth.

The GR00T-Mimic workflow then multiplies the captured human demonstration into a larger synthetic motion dataset. Finally, the GR00T-Gen workflow, built on the NVIDIA Omniverse and NVIDIA Cosmos platforms, exponentially expands this dataset through domain randomization and 3D upscaling.

The dataset can then be used as an input to the robot policy, which teaches robots how to move and interact with their environment effectively and safely in NVIDIA Isaac Lab, an open-source and modular framework for robot learning.
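Conceptually, the multiplication step works like classic domain randomization: take a handful of captured demonstrations and perturb conditions to mint many physically plausible variants. The toy sketch below illustrates the idea only; every name in it is hypothetical, and the blueprint's actual workflows are the GR00T-Teleop, GR00T-Mimic and GR00T-Gen stages described above.

```python
# Toy illustration of multiplying demonstrations via randomization.
# Everything here is hypothetical; it is not the GR00T API.
import random

def randomize(demo: list[dict], seed: int) -> list[dict]:
    rng = random.Random(seed)
    jitter = rng.uniform(-0.02, 0.02)  # e.g. shift object pose by a few cm
    return [{**frame, "object_x": frame["object_x"] + jitter} for frame in demo]

demos = [[{"object_x": 0.50, "gripper": "open"}]]  # one captured demonstration
synthetic = [randomize(d, s) for d in demos for s in range(1000)]
print(f"{len(synthetic)} synthetic trajectories from {len(demos)} demo")
```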

World Foundation Models Narrow the Sim-to-Real Gap 

NVIDIA also announced Cosmos at CES, a platform featuring a family of open, pretrained world foundation models purpose-built for generating physics-aware videos and world states for physical AI development. It includes autoregressive and diffusion models in a variety of sizes and input data formats. The models were trained on 18 quadrillion tokens, including 2 million hours of autonomous driving, robotics, drone footage and synthetic data.

In addition to helping generate large datasets, Cosmos can reduce the simulation-to-real gap by upscaling images from 3D to real. Combining Omniverse — a developer platform of application programming interfaces and microservices for building 3D applications and services — with Cosmos is critical, because it helps minimize potential hallucinations commonly associated with world models by providing crucial safeguards through its highly controllable, physically accurate simulations.

An Expanding Ecosystem 

Collectively, NVIDIA Isaac GR00T, Omniverse and Cosmos are helping physical AI and humanoid innovation take a giant leap forward. Major robotics companies, including Boston Dynamics and Figure, have started adopting Isaac GR00T and demonstrating results with it.

Humanoid software, hardware and robot manufacturers can apply for early access to NVIDIA’s humanoid robot developer program.

Watch the CES opening keynote from NVIDIA founder and CEO Jensen Huang, and stay up to date by subscribing to the newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X and Facebook.

NVIDIA Makes Cosmos World Foundation Models Openly Available to Physical AI Developer Community

NVIDIA Cosmos, a platform for accelerating physical AI development, introduces a family of world foundation models — neural networks that can predict and generate physics-aware videos of the future state of a virtual environment — to help developers build next-generation robots and autonomous vehicles (AVs).

World foundation models, or WFMs, are as fundamental as large language models. They use input data, including text, image, video and movement, to generate and simulate virtual worlds in a way that accurately models the spatial relationships of objects in the scene and their physical interactions.

Announced today at CES, NVIDIA is making available the first wave of Cosmos WFMs for physics-based simulation and synthetic data generation — plus state-of-the-art tokenizers, guardrails, an accelerated data processing and curation pipeline, and a framework for model customization and optimization.

Researchers and developers, regardless of their company size, can freely use the Cosmos models under NVIDIA’s permissive open model license that allows commercial usage. Enterprises building AI agents can also use new open NVIDIA Llama Nemotron and Cosmos Nemotron models, unveiled at CES.

The openness of Cosmos’ state-of-the-art models unblocks physical AI developers building robotics and AV technology and enables enterprises of all sizes to more quickly bring their physical AI applications to market. Developers can use Cosmos models directly to generate physics-based synthetic data, or they can harness the NVIDIA NeMo framework to fine-tune the models with their own videos for specific physical AI setups.

Physical AI leaders — including robotics companies 1X, Agility Robotics and XPENG, and AV developers Uber and Waabi — are already working with Cosmos to accelerate and enhance model development.

Developers can preview the first Cosmos autoregressive and diffusion models on the NVIDIA API catalog, and download the family of models and fine-tuning framework from the NVIDIA NGC catalog and Hugging Face.

World Foundation Models for Physical AI

Cosmos world foundation models are a suite of open diffusion and autoregressive transformer models for physics-aware video generation. The models have been trained on 9,000 trillion tokens from 20 million hours of real-world human interactions, environment, industrial, robotics and driving data.

The models come in three categories:

  • Nano: optimized for real-time, low-latency inference and edge deployment.
  • Super: highly performant baseline models.
  • Ultra: maximum quality and fidelity, best used for distilling custom models.

When paired with NVIDIA Omniverse 3D outputs, the diffusion models generate controllable, high-quality synthetic video data to bootstrap training of robotic and AV perception models. The autoregressive models predict what should come next in a sequence of video frames based on input frames and text. This enables real-time next-token prediction, giving physical AI models the foresight to predict their next best action.

Developers can use Cosmos’ open models for text-to-world and video-to-world generation. Versions of the diffusion and autoregressive models, with between 4 and 14 billion parameters each, are available now on the NGC catalog and Hugging Face.
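For developers starting from Hugging Face, fetching a checkpoint is a one-liner. A minimal sketch, assuming the repository naming used at launch (verify current names on the NVIDIA organization page):

```python
# Hedged sketch: download a Cosmos checkpoint from Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/Cosmos-1.0-Diffusion-7B-Text2World",  # assumed repo name
)
print("checkpoint downloaded to", local_dir)
```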

Also available are a 12-billion-parameter upsampling model for refining text prompts, a 7-billion-parameter video decoder optimized for augmented reality, and guardrail models to ensure responsible, safe use.

To demonstrate opportunities for customization, NVIDIA is also releasing fine-tuned model samples for vertical applications, such as generating multisensor views for AVs.

Advancing Robotics, Autonomous Vehicle Applications

Cosmos world foundation models can enable synthetic data generation to augment training datasets, simulation to test and debug physical AI models before they’re deployed in the real world, and reinforcement learning in virtual environments to accelerate AI agent learning.

Developers can generate massive amounts of controllable, physics-based synthetic data by conditioning Cosmos with composed 3D scenes from NVIDIA Omniverse.

Waabi, a company pioneering generative AI for the physical world, starting with autonomous vehicles, is evaluating the use of Cosmos for the search and curation of video data for AV software development and simulation. This will further accelerate the company’s industry-leading approach to safety, which is based on Waabi World, a generative AI simulator that can create any situation a vehicle might encounter with the same level of realism as if it happened in the real world.

In robotics, WFMs can generate synthetic virtual environments or worlds to provide a less expensive, more efficient and controlled space for robot learning. Embodied AI startup Hillbot is boosting its data pipeline by using Cosmos to generate terabytes of high-fidelity 3D environments. This AI-generated data will help the company refine its robotic training and operations, enabling faster, more efficient robotic skilling and improved performance for industrial and domestic tasks.

In both industries, developers can use NVIDIA Omniverse and Cosmos as a multiverse simulation engine, allowing a physical AI policy model to simulate every possible future path it could take to execute a particular task — which in turn helps the model select the best of these paths.

Data curation and the training of Cosmos models relied on thousands of NVIDIA GPUs through NVIDIA DGX Cloud, a high-performance, fully managed AI platform that provides accelerated computing clusters in every leading cloud.

Developers adopting Cosmos can use DGX Cloud for an easy way to deploy Cosmos models, with further support available through the NVIDIA AI Enterprise software platform.

Customize and Deploy With NVIDIA Cosmos

In addition to foundation models, the Cosmos platform includes a data processing and curation pipeline powered by NVIDIA NeMo Curator and optimized for NVIDIA data center GPUs.

Robotics and AV developers collect millions or billions of hours of real-world recorded video, resulting in petabytes of data. Cosmos enables developers to process 20 million hours of data in just 40 days on NVIDIA Hopper GPUs, or as little as 14 days on NVIDIA Blackwell GPUs. Using unoptimized pipelines running on a CPU system with equivalent power consumption, processing the same amount of data would take over three years.
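Taking "over three years" as a three-year lower bound, the implied speedups are easy to check:

```python
# Speedups implied by the curation figures above (3 years as a lower bound).
hopper_days, blackwell_days = 40, 14
cpu_days = 3 * 365

print(f"Hopper vs CPU:       {cpu_days / hopper_days:.0f}x")       # ~27x
print(f"Blackwell vs CPU:    {cpu_days / blackwell_days:.0f}x")    # ~78x
print(f"Blackwell vs Hopper: {hopper_days / blackwell_days:.1f}x") # ~2.9x
```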

The platform also features a suite of powerful video and image tokenizers that can convert videos into tokens at different video compression ratios for training various transformer models.

The Cosmos tokenizers deliver 8x more total compression than state-of-the-art methods and 12x faster processing speed, which offers superior quality and reduced computational costs in both training and inference. Developers can access these tokenizers, available under NVIDIA’s open model license, via Hugging Face and GitHub.

Developers using Cosmos can also harness model training and fine-tuning capabilities offered by the NeMo framework, a GPU-accelerated framework that enables high-throughput AI training.

Developing Safe, Responsible AI Models

Now available to developers under the NVIDIA Open Model License Agreement, Cosmos was developed in line with NVIDIA’s trustworthy AI principles, which include nondiscrimination, privacy, safety, security and transparency.

The Cosmos platform includes Cosmos Guardrails, a dedicated suite of models that, among other capabilities, mitigates harmful text and image inputs during preprocessing and screens generated videos during postprocessing for safety. Developers can further enhance these guardrails for their custom applications.

Cosmos models on the NVIDIA API catalog also feature an inbuilt watermarking system that enables identification of AI-generated sequences.

NVIDIA Cosmos was developed by NVIDIA Research. Read the research paper, “Cosmos World Foundation Model Platform for Physical AI,” for more details on model development and benchmarks. Model cards providing additional information are available on Hugging Face.

Learn more about world foundation models in an AI Podcast episode, airing Jan. 7, that features Ming-Yu Liu, vice president of research at NVIDIA. 

Get started with NVIDIA Cosmos, watch the Cosmos demo and Huang’s keynote, and join NVIDIA at CES.

PC Gaming in the Cloud Goes Everywhere With New Devices and AAA Games on GeForce NOW

GeForce NOW turns any device into a GeForce RTX gaming PC, and is bringing cloud gaming and AAA titles to more devices and regions.

As announced today at the CES trade show, gamers will soon be able to play titles from their Steam libraries at GeForce RTX quality with the launch of a native GeForce NOW app for the Steam Deck. NVIDIA is working to bring cloud gaming to the popular PC gaming handheld device later this year.

In collaboration with Apple, Meta and ByteDance, NVIDIA is expanding GeForce NOW cloud gaming to Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality devices — with all the bells and whistles of NVIDIA technologies, including ray tracing and NVIDIA DLSS.

In addition, NVIDIA is launching the first GeForce RTX-powered data center in India, making gaming more accessible around the world.

Plus, GeForce NOW’s extensive library of over 2,100 supported titles is expanding with highly anticipated AAA titles. DOOM: The Dark Ages and Avowed will join the cloud when they launch on PC this year.

RTX on Deck

The Steam Deck’s portability paired with GeForce NOW opens up new possibilities for high-fidelity gaming everywhere. The native GeForce NOW app will offer up to 4K resolution and 60 frames per second with high dynamic range on Valve’s innovative Steam Deck handheld when connected to a TV, streaming from GeForce RTX-powered gaming rigs in the cloud.

Last year, GeForce NOW rolled out a beta installation method that was eagerly welcomed by the gaming community. Later this year, members will be able to download the native GeForce NOW app and install it on Steam Deck.

Steam Deck gamers can gain access to all the same benefits as GeForce RTX 4080 GPU owners with a GeForce NOW Ultimate membership, including NVIDIA DLSS 3 technology for the highest frame rates and NVIDIA Reflex for ultra-low latency. Because GeForce NOW streams from an RTX gaming rig in the cloud, the Steam Deck uses less processing power, which extends battery life compared with playing locally.

The streaming experience with GeForce NOW looks stunning, whichever way Steam Deck users want to play — whether that’s in handheld mode for HDR-quality graphics, connected to a monitor for up to 1440p 120 fps HDR or hooked up to a TV for big-screen streaming at up to 4K 60 HDR. GeForce NOW members can take advantage of RTX ON with the Steam Deck for photorealistic gameplay on supported titles, as well as HDR10 and SDR10 when connected to a compatible display for richer, more accurate color gradients.

Get ready for major upgrades to streaming on the go when the GeForce NOW app launches on the Steam Deck later this year.

Stream Beyond Reality

Get immersed in a new dimension of big-screen gaming as GeForce NOW brings AAA titles to life on Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality headsets. These supported devices will give members access to an extensive library of games to stream through GeForce NOW by opening play.geforcenow.com in a browser once the newest app update, version 2.0.70, starts rolling out later this month.

[Image: Meta Quest 3 on GeForce NOW — jump into a whole new gaming dimension.]

Members can transform the space around them into a personal gaming theater with GeForce NOW. The streaming experience on these devices will support gamepad-compatible titles for members to play their favorite PC games on a massive virtual screen.

For an even more enhanced visual experience, GeForce NOW Ultimate and Performance members using these devices can tap into RTX and DLSS technologies in supported games. Members will be able to step into a world where games come to life on a grand scale, powered by GeForce NOW technologies.

Land of a Thousand Lights … and Games

[Image: GeForce NOW data center coming to India — new year, new data center.]

NVIDIA is broadening cloud gaming in India and Latin America. The first GeForce RTX 4080-powered data center in India will launch in the first half of this year. This follows last year’s launches of GeForce NOW in Japan, as well as in Colombia and Chile, where the service is operated by GeForce NOW Alliance partner Digevo.

GeForce RTX-powered gaming in the rapidly growing Indian gaming market will provide the ability to stream AAA games without the latest hardware. Gamers in the region can look forward to the launch of Ultimate memberships, along with all the new games and technological advancements announced at CES.

Send in the Games

AAA content from celebrated publishers is coming to the cloud. Avowed from Obsidian Entertainment, known for iconic titles such as Fallout: New Vegas, will join GeForce NOW. The cloud gaming platform will also bring DOOM: The Dark Ages from id Software, the legendary studio behind the DOOM franchise. Both will be available in the cloud when they launch on PC this year.

[Image: Avowed on GeForce NOW — get ready to jump into the Living Lands.]

Avowed, a first-person fantasy role-playing game, will join the cloud when it launches on PC on Tuesday, Feb. 18. Welcome to the Living Lands, an island full of mysteries and secrets, danger and adventure, choices and consequences and untamed wilderness. Take on the role of an Aedyr Empire envoy tasked with investigating a mysterious plague. Freely combine weapons and magic — harness dual-wield wands, pair a sword with a pistol or opt for a more traditional sword-and-shield approach. In-game companions — which join the players’ parties — have unique abilities and storylines that can be influenced by gamers’ choices.

[Image: DOOM: The Dark Ages on GeForce NOW — have a hell of a time in the cloud.]

DOOM: The Dark Ages is the single-player, action first-person shooter prequel to the critically acclaimed DOOM (2016) and DOOM Eternal. Play as the DOOM Slayer, the legendary demon-killing warrior fighting endlessly against Hell. Experience the epic cinematic origin story of the DOOM Slayer’s rage this year.

Get ready to play these titles and more at high performance when they join GeForce NOW at launch. Ultimate members will be able to stream at up to 4K resolution and 120 fps with support for NVIDIA DLSS and Reflex technology, and experience the action even on low-powered devices. Keep an eye out on GFN Thursdays for the latest on their release dates in the cloud.

GeForce NOW is making popular devices cloud-gaming-ready while consistently delivering quality titles from top publishers to bring another ultimate year of gaming to members across the globe.


NVIDIA DRIVE Partners Showcase Latest Mobility Innovations at CES

Leading global transportation companies — spanning the makers of passenger vehicles, trucks, robotaxis and autonomous delivery systems — are turning to the NVIDIA DRIVE AGX platform and AI to build the future of mobility.

NVIDIA’s automotive business provides a range of next-generation highly automated and autonomous vehicle (AV) development technologies, including cloud-based AI training, simulation and in-vehicle compute.

At the CES trade show in Las Vegas this week, NVIDIA’s customers and partners are showcasing their latest mobility innovations built on NVIDIA accelerated computing and AI.

Readying Future Vehicle Roadmaps With NVIDIA DRIVE Thor, Built on NVIDIA Blackwell

The NVIDIA DRIVE AGX Thor system-on-a-chip (SoC), built on the NVIDIA Blackwell architecture, is engineered to handle the transportation industry’s most demanding data-intensive workloads, including those involving generative AI, vision language models and large language models.

Delivering 1,000 teraflops of accelerated compute performance, DRIVE Thor is equipped to accelerate inference tasks that are critical for autonomous vehicles to understand and navigate the world around them, such as recognizing pedestrians, adjusting to inclement weather and more.

DRIVE Ecosystem Partners Transform the Show Floor and Industry at Large

NVIDIA partners are pushing the boundaries of automotive innovation with their latest developments and demos, using NVIDIA technologies and accelerated computing to advance everything from sensors, simulation and training to generative AI and teledriving. Highlights include:

At CES, Aurora, Continental and NVIDIA announced a long-term strategic partnership to deploy driverless trucks at scale, powered by the next-generation NVIDIA DRIVE Thor SoC. NVIDIA DRIVE Thor and DriveOS will be integrated into the Aurora Driver, an SAE level 4 autonomous driving system that Continental plans to mass-manufacture in 2027.

Arm, one of NVIDIA's key technology partners, provides the compute platform of choice for a number of innovations at CES. The Arm Neoverse V3AE CPU, designed to meet the specific safety and performance demands of automotive, is integrated with DRIVE Thor. This marks the first implementation of Arm's next-generation automotive CPU, which combines Armv9-based technologies with data-center-class single-thread performance, alongside essential safety and security features.

Tried and True — DRIVE Orin Mainstream Adoption Continues

NVIDIA DRIVE AGX Orin, the predecessor of DRIVE Thor, continues to be a production-proven advanced driver-assistance system computer widely used in cars today — delivering 254 trillion operations per second of accelerated compute to process sensor data for safe, real-time driving decisions.

Toyota, the world’s largest automaker, will build its next-generation vehicles on the high-performance, automotive-grade NVIDIA DRIVE Orin SoC, running the safety-certified NVIDIA DriveOS. These vehicles will offer functionally safe advanced driving-assistance capabilities.

At the NVIDIA showcase on the fourth floor of the Fontainebleau, Volvo Cars’ software-defined EX90 and Nuro’s autonomous driving technology — the Nuro Driver platform — will be on display, built on NVIDIA DRIVE AGX.

Other vehicles powered by NVIDIA DRIVE Orin on display during CES include:

  • Zeekr Mix and Zeekr 001, which feature DRIVE Orin, will be on display, along with the debut of Zeekr's self-developed ultra-high-performance intelligent driving domain controller built on DRIVE Thor and the NVIDIA Blackwell architecture (LVCC West Hall, booth 5640)
  • Lotus Eletre Carbon (LVCC West Hall, booth 4266 with P3 and 3SS and booth 3500 with HERE)
  • Rivian R1S and Polestar 3 activated with Dolby — vehicles on display and demos available by appointment (Park MGM/NoMad Hotel next to Dolby Live)
  • Lucid Air (LVCC West Hall booth 4964 with SoundHound AI)

NVIDIA’s partners will also showcase their automotive solutions built on NVIDIA technologies, including:

  • Arbe: Delivering next-generation, ultra-high-definition radar technology, integrating with NVIDIA DRIVE AGX to revolutionize radar-based free-space mapping with cutting-edge AI capabilities. The integration empowers manufacturers to incorporate radar data effortlessly into their perception systems, enhancing safety applications and autonomous driving. (LVCC, West Hall 7406, Diamond Lot 323)
  • Cerence: Collaborating with NVIDIA to enhance its CaLLM family of language models, including the cloud-based Cerence Automotive Large Language Model, or CaLLM, powered by DRIVE Orin.
  • Foretellix: Integrating NVIDIA Omniverse Sensor RTX APIs into its Foretify AV test management platform, enhancing object-level simulation with physically accurate sensor simulations.
  • Imagry: Building AI-driven, HD-mapless autonomous driving solutions, accelerated by NVIDIA technology, that are designed for both self-driving passenger vehicles and urban buses. (LVCC, West Hall, 5976)
  • Lenovo Vehicle Computing: Previewing (by appointment) its Lenovo AD1, a powerful automotive-grade domain controller built on the NVIDIA DRIVE Thor platform, and tailored for SAE level 4 autonomous driving.
  • Provizio: Showcasing Provizio’s 5D perception Imaging Radar, accelerated by NVIDIA technology, that delivers unprecedented, scalable, on-the-edge radar perception capabilities, with on-vehicle demonstration rides at CES.
  • Quanta: Demonstrating (by appointment) in-house NVIDIA DRIVE AGX Hyperion cameras running on its electronic control unit powered by DRIVE Orin.
  • SoundHound AI: Showcasing its work with NVIDIA to bring voice generative AI directly to the edge, bringing the intelligence of cloud-based LLMs directly to vehicles. (LVCC, West Hall, 4964)
  • Vay: Offering innovative door-to-door mobility services by combining Vay’s remote driving capabilities with NVIDIA DRIVE advanced AI and computing power.
  • Zoox: Showcasing its latest robotaxi, which leverages NVIDIA technology, driving autonomously on the streets of Las Vegas and parked in the Zoox booth. (LVCC, West Hall 3316).

Safety Is the Way for Autonomous Innovation 

At CES, NVIDIA also announced that its DRIVE AGX Hyperion platform has achieved safety certifications from TÜV SÜD and TÜV Rheinland, setting new standards for autonomous vehicle safety and innovation.

To enhance safety measures, NVIDIA also launched the DRIVE AI Systems Inspection Lab, designed to help partners meet rigorous autonomous vehicle safety and cybersecurity requirements.

In addition, complementing its three computers designed to accelerate AV development — NVIDIA DRIVE AGX, NVIDIA Omniverse running on OVX and NVIDIA DGX — NVIDIA has introduced the NVIDIA Cosmos platform. Cosmos' world foundation models and advanced data processing pipelines can dramatically scale generated data and speed up physical AI system development. With the platform's data flywheel capability, developers can effectively transform thousands of real-world driven miles into billions of virtual miles.
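To get a feel for the multiplicative scale behind that data flywheel, here is a minimal back-of-the-envelope sketch. The variant counts and variable names below are hypothetical illustrations, not published Cosmos figures or APIs:

```python
# Back-of-the-envelope sketch of a synthetic-data flywheel.
# All multipliers are hypothetical examples, not Cosmos specifications.
real_miles = 10_000          # thousands of real-world driven miles
weather_variants = 20        # rain, fog, snow, glare, ...
lighting_variants = 10       # dawn, dusk, night, tunnels, ...
traffic_variants = 50        # density and actor-behavior permutations
sensor_variants = 12         # camera/lidar/radar noise profiles

# Each independent axis of variation multiplies the total mileage.
virtual_miles = (real_miles * weather_variants * lighting_variants
                 * traffic_variants * sensor_variants)
print(f"{virtual_miles:,} virtual miles")  # 1,200,000,000 -> billions
```

Because every axis of variation compounds multiplicatively, even modest per-axis counts turn thousands of real miles into billions of virtual ones.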

Transportation leaders using Cosmos to build physical AI for AVs include Foretellix, Uber, Waabi and Wayve.

Learn more about NVIDIA’s latest automotive news by watching NVIDIA founder and CEO Jensen Huang’s opening keynote at CES.


NVIDIA Launches DRIVE AI Systems Inspection Lab, Achieves New Industry Safety Milestones

A new NVIDIA DRIVE AI Systems Inspection Lab will help automotive ecosystem partners navigate evolving industry standards for autonomous vehicle safety.

The lab, launched today, will focus on inspecting and verifying that automotive partner software and systems on the NVIDIA DRIVE AGX platform meet the automotive industry’s stringent safety and cybersecurity standards, including AI functional safety.

The lab has been accredited by the ANSI National Accreditation Board (ANAB) under ISO/IEC 17020 to perform inspections against standards including:

  • Functional safety (ISO 26262)
  • SOTIF (ISO 21448)
  • Cybersecurity (ISO 21434)
  • UN-R regulations, including UN-R 79, UN-R 13-H, UN-R 152, UN-R 155, UN-R 157 and UN-R 171
  • AI functional safety (ISO PAS 8800 and ISO/IEC TR 5469)

“The launch of this new lab will help partners in the global automotive ecosystem create safe, reliable autonomous driving technology,” said Ali Kani, vice president of automotive at NVIDIA. “With accreditation by ANAB, the lab will carry out an inspection plan that combines functional safety, cybersecurity and AI — bolstering adherence to the industry’s safety standards.”

“ANAB is proud to be the accreditation body for the NVIDIA DRIVE AI Systems Inspection Lab,” said R. Douglas Leonard Jr., executive director of ANAB. “NVIDIA’s comprehensive evaluation verifies the demonstration of competence and compliance with internationally recognized standards, helping ensure that DRIVE ecosystem partners meet the highest benchmarks for functional safety, cybersecurity and AI integration.”

The new lab builds on NVIDIA’s ongoing safety compliance work with Mercedes-Benz and JLR. Inaugural participants in the lab include Continental and Sony SSS-America.

“We are pleased to participate in the newly launched NVIDIA DRIVE AI Systems Inspection Lab and to further intensify the fruitful, ongoing collaboration between our two companies,” said Norbert Hammerschmidt, head of components business at Continental.

“Self-driving vehicles have the capability to significantly enhance safety on roads,” said Marius Evensen, head of automotive image sensors at Sony SSS-America. “We look forward to working with NVIDIA’s DRIVE AI Systems Inspection Lab to help us deliver the highest levels of safety to our customers.”

“Compliance with functional safety, SOTIF and cybersecurity is particularly challenging for complex systems such as AI-based autonomous vehicles,” said Riccardo Mariani, head of industry safety at NVIDIA. “Through the DRIVE AI Systems Inspection Lab, the correctness of the integration of our partners’ products with DRIVE safety and cybersecurity requirements can be inspected and verified.”

Now open to all NVIDIA DRIVE AGX platform partners, the lab is expected to expand to include additional automotive and robotics products and add a testing component.

Complementing International Automotive Safety Standards

The NVIDIA DRIVE AI Systems Inspection Lab complements the missions of independent third-party certification bodies, including technical service organizations such as TÜV SÜD, TÜV Rheinland and exida, as well as vehicle certification agencies such as VCA and KBA.

Today’s announcement dovetails with recent significant safety certifications and assessments of NVIDIA automotive products:

TÜV SÜD granted the ISO 21434 Cybersecurity Process certification to NVIDIA for its automotive system-on-a-chip, platform and software engineering processes. Once the certification is released, the NVIDIA DriveOS 6.0 operating system will conform to ISO 26262 Automotive Safety Integrity Level (ASIL) D standards.

“Meeting cybersecurity process requirements is of fundamental importance in the autonomous vehicle era,” said Martin Webhofer, CEO of TÜV SÜD Rail GmbH. “NVIDIA has successfully established processes, activities and procedures that fulfill the stringent requirements of ISO 21434. Additionally, NVIDIA DriveOS 6.0 conforms to ISO 26262 ASIL D standards, pending final certification activities.”

TÜV Rheinland performed an independent United Nations Economic Commission for Europe safety assessment of NVIDIA DRIVE AV related to safety requirements for complex electronic systems.

“NVIDIA has demonstrated thorough, high-quality, safety-oriented processes and technologies in the context of the assessment of the generic, non-OEM-specific parts of the SAE level 2 NVIDIA DRIVE system,” said Dominik Strixner, global lead for functional safety in automotive mobility at TÜV Rheinland.

To learn more about NVIDIA’s work in advancing autonomous driving safety, read the NVIDIA Self-Driving Safety Report.
