NVIDIA and Partners Launch Agentic AI Blueprints to Automate Work for Every Enterprise

New NVIDIA AI Blueprints for building agentic AI applications are poised to help enterprises everywhere automate work.

With the blueprints, developers can now build and deploy custom AI agents. These AI agents act like “knowledge robots” that can reason, plan and take action to quickly analyze large quantities of data and distill real-time insights from video, PDFs and other documents.

CrewAI, Daily, LangChain, LlamaIndex and Weights & Biases are among leading providers of agentic AI orchestration and management tools that have worked with NVIDIA to build blueprints that integrate the NVIDIA AI Enterprise software platform, including NVIDIA NIM microservices and NVIDIA NeMo, with their platforms. These five blueprints — comprising a new category of partner blueprints for agentic AI — provide the building blocks for developers to create the next wave of AI applications that will transform every industry.

In addition to the partner blueprints, NVIDIA is introducing its own new AI Blueprint for PDF to podcast, as well as another to build AI agents for video search and summarization. These are joined by four additional NVIDIA Omniverse Blueprints that make it easier for developers to build simulation-ready digital twins for physical AI.

To help enterprises rapidly take AI agents into production, Accenture is announcing AI Refinery for Industry built with NVIDIA AI Enterprise, including NVIDIA NeMo, NVIDIA NIM microservices and AI Blueprints.

The AI Refinery for Industry solutions — powered by Accenture AI Refinery with NVIDIA — can help enterprises rapidly launch agentic AI across fields like automotive, technology, manufacturing, consumer goods and more.

Agentic AI Orchestration Tools Conduct a Symphony of Agents

Agentic AI represents the next wave in the evolution of generative AI. It enables applications to move beyond simple chatbot interactions to tackle complex, multi-step problems through sophisticated reasoning and planning. As explained in NVIDIA founder and CEO Jensen Huang’s CES keynote, enterprise AI agents will become a centerpiece of AI factories that generate tokens to create unprecedented intelligence and productivity across industries.

Agentic AI orchestration is a sophisticated system designed to manage, monitor and coordinate multiple AI agents working together — key to developing reliable enterprise agentic AI systems. The agentic AI orchestration layer from NVIDIA partners provides the glue needed for AI agents to effectively work together.

The new partner blueprints, now available from agentic AI orchestration leaders, offer integrations with NVIDIA AI Enterprise software, including NIM microservices and NVIDIA NeMo Retriever, to boost retrieval accuracy and reduce latency of agent workflows. For example:

  • CrewAI is using the new Llama 3.3 70B NVIDIA NIM microservice and the NVIDIA NeMo Retriever embedding NIM microservice in its code documentation blueprint for software development. The blueprint helps ensure code repositories remain comprehensive and easy to navigate.
  • Daily’s voice agent blueprint, powered by the company’s open-source Pipecat framework, uses the NVIDIA Riva automatic speech recognition and text-to-speech NIM microservices, along with the Llama 3.3 70B NIM microservice, to achieve real-time conversational AI.
  • LangChain is adding Llama 3.3 70B NVIDIA NIM microservices to its structured report generation blueprint. Built on LangGraph, the blueprint allows users to define a topic and specify an outline to guide an agent in searching the web for relevant information, so it can return a report in the requested format.
  • LlamaIndex’s document research assistant for blog creation blueprint harnesses NVIDIA NIM microservices and NeMo Retriever to help content creators produce high-quality blogs. It can tap into agentic-driven retrieval-augmented generation with NeMo Retriever to automatically research, outline and generate compelling content with source attribution.
  • Weights & Biases is adding its W&B Weave capability to the AI Blueprint for AI virtual assistants, which features the Llama 3.1 70B NIM microservice. The blueprint streamlines debugging, evaluation, iteration, production performance tracking and human feedback collection, supporting seamless integration and faster iteration when building and deploying agentic AI applications.
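
Integrations like those above typically talk to a NIM microservice through its OpenAI-compatible chat-completions API. The sketch below is illustrative only: the base URL and model identifier are assumptions, not values from any specific blueprint, and the actual network call is left as a comment.

```python
import json

# NIM microservices expose an OpenAI-compatible chat-completions API.
# The base URL and model id below are illustrative assumptions, not
# values taken from any particular partner blueprint.
NIM_BASE_URL = "http://localhost:8000/v1"  # a locally deployed NIM
MODEL_ID = "meta/llama-3.3-70b-instruct"   # assumed model identifier

def build_chat_request(system_prompt: str, user_prompt: str,
                       temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
    }

payload = build_chat_request(
    "You are a report-writing agent.",
    "Outline a report on GPU-accelerated data curation.",
)
print(json.dumps(payload, indent=2))
# An actual call would POST this payload to f"{NIM_BASE_URL}/chat/completions".
```

An orchestration framework wraps calls like this in agent loops — routing each agent's intermediate results back into the next request.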

Summarize Many Complex PDFs While Keeping Proprietary Data Secure

With trillions of PDF files — from financial reports to technical research papers — generated every year, it’s a constant challenge to stay up to date with information.

NVIDIA’s PDF to podcast AI Blueprint provides a recipe developers can use to turn multiple long and complex PDFs into AI-generated readouts that can help professionals, students and researchers efficiently learn about virtually any topic and quickly understand key takeaways.

The blueprint — built on NIM microservices and text-to-speech models — allows developers to build applications that extract images, tables and text from PDFs, and convert the data into easily digestible audio content, all while keeping data secure.

For example, developers can build AI agents that can understand context, identify key points and generate a concise summary as a monologue or a conversation-style podcast, narrated in a natural voice. This offers users an engaging, time-efficient way to absorb information at their desired speed.
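
The overall shape of such a pipeline can be sketched as a few composable stages. Everything below is a hypothetical stand-in: the real blueprint uses NIM microservices for extraction, summarization and text-to-speech, while these stub functions only illustrate the data flow from raw PDF text to a two-host dialogue script.

```python
# Minimal sketch of a PDF-to-podcast pipeline shape. All function names
# are hypothetical; the real blueprint delegates each stage to a NIM
# microservice or text-to-speech model.

def extract_text(pdf_pages: list[str]) -> str:
    # Stand-in for a PDF extraction service: join page text.
    return "\n".join(pdf_pages)

def summarize(text: str, max_sentences: int = 3) -> list[str]:
    # Stand-in for an LLM summarizer: keep the first N sentences.
    sentences = [s.strip() for s in text.replace("\n", " ").split(".") if s.strip()]
    return sentences[:max_sentences]

def to_dialogue(points: list[str]) -> list[tuple[str, str]]:
    # Alternate two hosts over the key points, podcast-style.
    hosts = ["Host A", "Host B"]
    return [(hosts[i % 2], point) for i, point in enumerate(points)]

script = to_dialogue(summarize(extract_text([
    "Q3 revenue grew 12%. Margins improved.",
    "Guidance was raised for Q4.",
])))
for speaker, line in script:
    print(f"{speaker}: {line}")
```

In the blueprint, the final dialogue script would then be passed to a text-to-speech model to render the narrated audio.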

Test, Prototype and Run Agentic AI Blueprints in One Click

NVIDIA Blueprints empower the world’s more than 25 million software developers to easily integrate AI into their applications across various industries. These blueprints simplify the process of building and deploying agentic AI applications, making advanced AI integration more accessible than ever.

With just a single click, developers can now build and run the new agentic AI Blueprints as NVIDIA Launchables. These Launchables provide on-demand access to developer environments with predefined configurations, enabling quick workflow setup.

By containing all necessary components for development, Launchables support consistent and reproducible setups without the need for manual configuration or overhead — streamlining the entire development process, from prototyping to deployment.

Enterprises can also deploy blueprints into production with the NVIDIA AI Enterprise software platform on data center platforms including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro, or run them on accelerated cloud platforms from Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure.

Accenture and NVIDIA Fast-Track Deployments With AI Refinery for Industry

Accenture is introducing its new AI Refinery for Industry with 12 new industry agent solutions built with NVIDIA AI Enterprise software and available from the Accenture NVIDIA Business Group. These industry-specific agent solutions include revenue growth management for consumer goods and services, clinical trial companion for life sciences, industrial asset troubleshooting and B2B marketing, among others.

AI Refinery for Industry offerings include preconfigured components, best practices and foundational elements designed to fast-track the development of AI agents. They provide organizations the tools to build specialized AI networks tailored to their industry needs.

Accenture plans to launch over 100 AI Refinery for Industry agent solutions by the end of the year.

Get started with AI Blueprints and join NVIDIA at CES.

See notice regarding software product information.

NVIDIA Media2 Transforms Content Creation, Streaming and Audience Experiences With AI

From creating the GPU, RTX real-time ray tracing and neural rendering to now reinventing computing for AI, NVIDIA has for decades been at the forefront of computer graphics — pushing the boundaries of what’s possible in media and entertainment.

NVIDIA Media2 is the latest AI-powered initiative transforming content creation, streaming and live media experiences.

Built on technologies like NVIDIA NIM microservices and AI Blueprints — and breakthrough AI applications from startups and software partners — Media2 uses AI to drive the creation of smarter, more tailored and more impactful content that can adapt to individual viewer preferences.

Amid this rapid creative transformation, companies embracing NVIDIA Media2 can stay on the $3 trillion media and entertainment industry’s cutting edge, reshaping how audiences consume and engage with content.

NVIDIA Technologies at the Heart of Media2

As the media and entertainment industry embraces generative AI and accelerated computing, NVIDIA technologies are transforming how content is created, delivered and experienced.

NVIDIA Holoscan for Media is a software-defined, AI-enabled platform that allows companies in broadcast, streaming and live sports to run live video pipelines on the same infrastructure as AI. The platform delivers applications from vendors across the industry on NVIDIA-accelerated infrastructure.

Delivering the power needed to drive the next wave of data-enhanced intelligent content creation and hyper-personalized media is the NVIDIA Blackwell architecture, built to handle data-center-scale generative AI workflows with up to 25x greater energy efficiency than the NVIDIA Hopper generation. Blackwell integrates six types of chips: GPUs, CPUs, DPUs, NVIDIA NVLink Switch chips, NVIDIA InfiniBand switches and Ethernet switches.

Blackwell is supported by NVIDIA AI Enterprise, an end-to-end software platform for production-grade AI. NVIDIA AI Enterprise comprises NVIDIA NIM microservices, AI frameworks, libraries and tools that media companies can deploy on NVIDIA-accelerated clouds, data centers and workstations. The expanding list includes:

  • The Mistral-NeMo-12B-Instruct NIM microservice, which enables multilingual information retrieval — the ability to search, process and retrieve knowledge across languages. This is key in enhancing an AI model’s outputs with greater accuracy and global relevancy.
  • The NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI, which can help advertisers easily build personalized, on-brand and product-accurate marketing content at scale using real-time rendering and generative AI without affecting a hero product asset.
  • The NVIDIA Cosmos Nemotron vision language model NIM microservice, which is a multimodal VLM that can understand the meaning and context of text, images and video. With the microservice, media companies can query images and videos with natural language and receive informative responses.
  • The NVIDIA Edify multimodal generative AI architecture, which can generate visual assets — like images, 3D models and HDRi environments — from text or image prompts. It offers advanced editing tools and efficient training for developers. With NVIDIA AI Foundry, service providers can customize Edify models for commercial visual services using NVIDIA NIM microservices.

Partners in the Media2 Ecosystem

Partners across the industry are adopting NVIDIA technology to reshape the next chapter of storytelling.

Getty Images and Shutterstock offer intelligent content creation services built with NVIDIA Edify. The AI models have also been optimized and packaged for maximum performance with NVIDIA NIM microservices.

Bria is a commercial-first visual generative AI platform designed for developers. It’s trained on 100% licensed data and built on responsible AI principles. The platform offers tools for custom pipelines, seamless integration and flexible deployment, ensuring enterprise-grade compliance and scalable, predictable content generation. Optimized with NVIDIA NIM microservices, Bria delivers faster, safer and scalable production-ready solutions.

Runway is an AI platform that provides advanced creative tools for artists and filmmakers. The company’s Gen-3 Alpha Turbo model excels in video generation and includes a new Camera Control feature that allows for precise camera movements like pan, tilt and zoom. Runway’s integration of the NVIDIA CV-CUDA open-source library combined with NVIDIA GPUs accelerates preprocessing for high-resolution videos in its segmentation model.

Wonder Dynamics, an Autodesk company, recently launched the beta version of Wonder Animation, featuring powerful new video-to-3D scene technology that can turn any video sequence into a 3D-animated scene for animated film production. Accelerated by NVIDIA GPU technology, Wonder Animation provides visual effects artists and animators with an easy-to-use, flexible tool that significantly reduces the time, complexity and efforts traditionally associated with 3D animation and visual effects workflows — while allowing the artist to maintain full creative control.

Comcast’s Sky innovation team is collaborating with NVIDIA on lab testing NVIDIA NIM microservices and partner models for its global platforms. The integration could lead to greater interactivity and accessibility for customers around the world, such as enabling the use of voice commands to request summaries during live sports and access other contextual information.

A creative technology company that operates the largest network of virtual studios is broadening access to the creation of virtual environments and immersive content with NVIDIA-accelerated generative AI technologies.

Twelve Labs, a member of the NVIDIA Inception program for startups, is developing advanced multimodal foundation models that can understand videos like humans, enabling precise semantic search, content analysis and video-to-text generation. Twelve Labs uses NVIDIA H100 GPUs to significantly improve the models’ inference performance, achieving up to a 7x improvement in requests served per second.

S4 Capital’s Monks is using cutting-edge AI technologies to enhance live broadcasts with real-time content segmentation and personalized fan experiences. Powered by NVIDIA Holoscan for Media, the company’s solution is integrated with tools like NVIDIA VILA to generate contextual metadata for injection within a time-addressable media store framework — enabling precise, action-based searching within video content.

Additionally, Monks uses NVIDIA NeMo Curator to help process data to build tailored AI models for sports leagues and IP holders, unlocking new monetization opportunities through licensing. By combining these technologies, broadcasters can seamlessly deliver hyper-relevant content to fans as events unfold, while adapting to the evolving demands of modern audiences.

Media companies manage vast amounts of video content, which can be challenging and time-consuming to locate, catalog and compile into finished assets. Leading media-focused consultant and system integrator Qvest has developed an AI video discovery engine, built on NIM microservices, that accelerates this process by automating the data capture of video files. This streamlines a user’s ability to both discover and contextualize how videos can fit in their intended story.

Verizon is transforming global enterprise operations, as well as live media and sports content, by integrating its reliable, secure private 5G network with NVIDIA’s full-stack AI platform, including NVIDIA AI Enterprise and NIM microservices, to deliver the latest AI solutions at the edge.

Using this solution, streamers, sports leagues and rights holders can enhance fan experiences with greater interactivity and immersion by deploying high-performance 5G connectivity along with generative AI, agentic AI, extended reality and streaming applications that enable personalized content delivery. These technologies also help elevate player performance and viewer engagement by offering real-time data analytics to coaches, players, referees and fans. The solution can also enable private 5G-powered enterprise AI use cases that drive automation and productivity.

Welcome to NVIDIA Media2

The NVIDIA Media2 initiative empowers companies to redefine the future of media and entertainment through intelligent, data-driven and immersive technologies — giving them a competitive edge while equipping them to drive innovation across the industry.

NIM microservices from NVIDIA and model developers are now available to try, with additional models added regularly.

Get started with NVIDIA NIM and AI Blueprints, and watch the CES opening keynote delivered by NVIDIA founder and CEO Jensen Huang to hear the latest advancements in AI.


NVIDIA Announces Isaac GR00T Blueprint to Accelerate Humanoid Robotics Development

Over the next two decades, the market for humanoid robots is expected to reach $38 billion. To address this significant demand, particularly in industrial and manufacturing sectors, NVIDIA is releasing a collection of robot foundation models, data pipelines and simulation frameworks to accelerate next-generation humanoid robot development efforts.

Announced by NVIDIA founder and CEO Jensen Huang today at the CES trade show, the NVIDIA Isaac GR00T Blueprint for synthetic motion generation helps developers generate exponentially large synthetic motion data to train their humanoids using imitation learning.

Imitation learning — a subset of robot learning — enables humanoids to acquire new skills by observing and mimicking expert human demonstrations. Collecting these extensive, high-quality datasets in the real world is tedious, time-consuming and often prohibitively expensive. Implementing the Isaac GR00T blueprint for synthetic motion generation allows developers to easily generate exponentially large synthetic datasets from just a small number of human demonstrations.

Starting with the GR00T-Teleop workflow, users can tap into the Apple Vision Pro to capture human actions in a digital twin. These human actions are mimicked by a robot in simulation and recorded for use as ground truth.

The GR00T-Mimic workflow then multiplies the captured human demonstration into a larger synthetic motion dataset. Finally, the GR00T-Gen workflow, built on the NVIDIA Omniverse and NVIDIA Cosmos platforms, exponentially expands this dataset through domain randomization and 3D upscaling.

The dataset can then be used as an input to the robot policy, which teaches robots how to move and interact with their environment effectively and safely in NVIDIA Isaac Lab, an open-source and modular framework for robot learning.
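
The multiplication step at the heart of this pipeline can be sketched in miniature: a handful of recorded demonstrations is perturbed into many plausible variants. This is a toy illustration only — the actual GR00T-Mimic and GR00T-Gen workflows operate on full motion trajectories in simulation with domain randomization and 3D upscaling; every name and number below is hypothetical.

```python
import random

# Toy sketch: multiply a few recorded demonstrations into a larger
# synthetic dataset by jittering waypoints ("domain randomization").
# Trajectories here are flat lists of scalar waypoints for simplicity.

def perturb(trajectory: list[float], noise: float, rng: random.Random) -> list[float]:
    # Jitter each waypoint slightly to create a plausible variant.
    return [x + rng.uniform(-noise, noise) for x in trajectory]

def multiply_demos(demos: list[list[float]], variants_per_demo: int,
                   noise: float = 0.05, seed: int = 0) -> list[list[float]]:
    rng = random.Random(seed)
    synthetic = []
    for demo in demos:
        synthetic.extend(perturb(demo, noise, rng) for _ in range(variants_per_demo))
    return synthetic

human_demos = [[0.0, 0.5, 1.0], [0.0, 0.4, 0.9]]   # 2 recorded demos
dataset = multiply_demos(human_demos, variants_per_demo=100)
print(len(dataset))  # 2 demos x 100 variants = 200 synthetic trajectories
```

The point of the real workflows is the same scaling effect: a small number of expensive human demonstrations becomes an exponentially larger training set for imitation learning.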

World Foundation Models Narrow the Sim-to-Real Gap 

At CES, NVIDIA also announced Cosmos, a platform featuring a family of open, pretrained world foundation models purpose-built for generating physics-aware videos and world states for physical AI development. It includes autoregressive and diffusion models in a variety of sizes and input data formats. The models were trained on 18 quadrillion tokens, including 2 million hours of autonomous driving, robotics, drone footage and synthetic data.

In addition to helping generate large datasets, Cosmos can reduce the simulation-to-real gap by upscaling 3D simulation renders toward photorealism. Combining Omniverse — a developer platform of application programming interfaces and microservices for building 3D applications and services — with Cosmos is critical, because it helps minimize potential hallucinations commonly associated with world models by providing crucial safeguards through its highly controllable, physically accurate simulations.

An Expanding Ecosystem 

Collectively, NVIDIA Isaac GR00T, Omniverse and Cosmos are helping physical AI and humanoid innovation take a giant leap forward. Major robotics companies, including Boston Dynamics and Figure, have started adopting Isaac GR00T and demonstrating results with it.

Humanoid software, hardware and robot manufacturers can apply for early access to NVIDIA’s humanoid robot developer program.

Watch the CES opening keynote from NVIDIA founder and CEO Jensen Huang, and stay up to date by subscribing to the newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X and Facebook.


NVIDIA Makes Cosmos World Foundation Models Openly Available to Physical AI Developer Community

NVIDIA Cosmos, a platform for accelerating physical AI development, introduces a family of world foundation models — neural networks that can predict and generate physics-aware videos of the future state of a virtual environment — to help developers build next-generation robots and autonomous vehicles (AVs).

World foundation models, or WFMs, are as fundamental as large language models. They use input data, including text, image, video and movement, to generate and simulate virtual worlds in a way that accurately models the spatial relationships of objects in the scene and their physical interactions.

At CES today, NVIDIA announced the availability of the first wave of Cosmos WFMs for physics-based simulation and synthetic data generation — plus state-of-the-art tokenizers, guardrails, an accelerated data processing and curation pipeline, and a framework for model customization and optimization.

Researchers and developers, regardless of their company size, can freely use the Cosmos models under NVIDIA’s permissive open model license that allows commercial usage. Enterprises building AI agents can also use new open NVIDIA Llama Nemotron and Cosmos Nemotron models, unveiled at CES.

The openness of Cosmos’ state-of-the-art models unblocks physical AI developers building robotics and AV technology and enables enterprises of all sizes to more quickly bring their physical AI applications to market. Developers can use Cosmos models directly to generate physics-based synthetic data, or they can harness the NVIDIA NeMo framework to fine-tune the models with their own videos for specific physical AI setups.

Physical AI leaders — including robotics companies 1X, Agility Robotics and XPENG, and AV developers Uber and Waabi — are already working with Cosmos to accelerate and enhance model development.

Developers can preview the first Cosmos autoregressive and diffusion models on the NVIDIA API catalog, and download the family of models and fine-tuning framework from the NVIDIA NGC catalog and Hugging Face.

World Foundation Models for Physical AI

Cosmos world foundation models are a suite of open diffusion and autoregressive transformer models for physics-aware video generation. The models have been trained on 9,000 trillion tokens from 20 million hours of real-world human interaction, environmental, industrial, robotics and driving data.

The models come in three categories: Nano, for models optimized for real-time, low-latency inference and edge deployment; Super, for highly performant baseline models; and Ultra, for maximum quality and fidelity, best used for distilling custom models.

When paired with NVIDIA Omniverse 3D outputs, the diffusion models generate controllable, high-quality synthetic video data to bootstrap training of robotic and AV perception models. The autoregressive models predict what should come next in a sequence of video frames based on input frames and text. This enables real-time next-token prediction, giving physical AI models the foresight to predict their next best action.

Developers can use Cosmos’ open models for text-to-world and video-to-world generation. Versions of the diffusion and autoregressive models, with between 4 and 14 billion parameters each, are available now on the NGC catalog and Hugging Face.

Also available are a 12-billion-parameter upsampling model for refining text prompts, a 7-billion-parameter video decoder optimized for augmented reality, and guardrail models to ensure responsible, safe use.

To demonstrate opportunities for customization, NVIDIA is also releasing fine-tuned model samples for vertical applications, such as generating multisensor views for AVs.

Advancing Robotics, Autonomous Vehicle Applications

Cosmos world foundation models can enable synthetic data generation to augment training datasets, simulation to test and debug physical AI models before they’re deployed in the real world, and reinforcement learning in virtual environments to accelerate AI agent learning.

Developers can generate massive amounts of controllable, physics-based synthetic data by conditioning Cosmos with composed 3D scenes from NVIDIA Omniverse.

Waabi, a company pioneering generative AI for the physical world, starting with autonomous vehicles, is evaluating the use of Cosmos for the search and curation of video data for AV software development and simulation. This will further accelerate the company’s industry-leading approach to safety, which is based on Waabi World, a generative AI simulator that can create any situation a vehicle might encounter with the same level of realism as if it happened in the real world.

In robotics, WFMs can generate synthetic virtual environments or worlds to provide a less expensive, more efficient and controlled space for robot learning. Embodied AI startup Hillbot is boosting its data pipeline by using Cosmos to generate terabytes of high-fidelity 3D environments. This AI-generated data will help the company refine its robotic training and operations, enabling faster, more efficient robotic skilling and improved performance for industrial and domestic tasks.

In both industries, developers can use NVIDIA Omniverse and Cosmos as a multiverse simulation engine, allowing a physical AI policy model to simulate every possible future path it could take to execute a particular task — which in turn helps the model select the best of these paths.

Data curation and the training of Cosmos models relied on thousands of NVIDIA GPUs through NVIDIA DGX Cloud, a high-performance, fully managed AI platform that provides accelerated computing clusters in every leading cloud.

Developers adopting Cosmos can use DGX Cloud for an easy way to deploy Cosmos models, with further support available through the NVIDIA AI Enterprise software platform.

Customize and Deploy With NVIDIA Cosmos

In addition to foundation models, the Cosmos platform includes a data processing and curation pipeline powered by NVIDIA NeMo Curator and optimized for NVIDIA data center GPUs.

Robotics and AV developers collect millions or billions of hours of real-world recorded video, resulting in petabytes of data. Cosmos enables developers to process 20 million hours of data in just 40 days on NVIDIA Hopper GPUs, or as little as 14 days on NVIDIA Blackwell GPUs. Using unoptimized pipelines running on a CPU system with equivalent power consumption, processing the same amount of data would take over three years.
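
A quick back-of-the-envelope check of those throughput figures shows what they imply per architecture (the "over three years" CPU figure is taken as a lower bound of three years):

```python
# Sanity-check the stated curation throughput figures.
HOURS = 20_000_000            # 20 million hours of video

hopper_days = 40
blackwell_days = 14
cpu_days = 3 * 365            # "over three years": lower bound

hopper_rate = HOURS / hopper_days       # hours of video curated per day
blackwell_rate = HOURS / blackwell_days
speedup = hopper_days / blackwell_days  # Blackwell vs. Hopper

print(f"Hopper:    {hopper_rate:,.0f} hours/day")
print(f"Blackwell: {blackwell_rate:,.0f} hours/day ({speedup:.1f}x faster)")
print(f"CPU vs. Blackwell: at least {cpu_days / blackwell_days:.0f}x slower")
```

That works out to roughly 500,000 hours of video curated per day on Hopper, about 1.4 million on Blackwell (a ~2.9x generational speedup), and a CPU pipeline at least ~78x slower than Blackwell.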

The platform also features a suite of powerful video and image tokenizers that can convert videos into tokens at different video compression ratios for training various transformer models.

The Cosmos tokenizers deliver 8x more total compression than state-of-the-art methods and 12x faster processing speed, which offers superior quality and reduced computational costs in both training and inference. Developers can access these tokenizers, available under NVIDIA’s open model license, via Hugging Face and GitHub.

Developers using Cosmos can also harness model training and fine-tuning capabilities offered by NeMo framework, a GPU-accelerated framework that enables high-throughput AI training.

Developing Safe, Responsible AI Models

Now available to developers under the NVIDIA Open Model License Agreement, Cosmos was developed in line with NVIDIA’s trustworthy AI principles, which include nondiscrimination, privacy, safety, security and transparency.

The Cosmos platform includes Cosmos Guardrails, a dedicated suite of models that, among other capabilities, mitigates harmful text and image inputs during preprocessing and screens generated videos during postprocessing for safety. Developers can further enhance these guardrails for their custom applications.

Cosmos models on the NVIDIA API catalog also feature an inbuilt watermarking system that enables identification of AI-generated sequences.

NVIDIA Cosmos was developed by NVIDIA Research. Read the research paper, “Cosmos World Foundation Model Platform for Physical AI,” for more details on model development and benchmarks. Model cards providing additional information are available on Hugging Face.

Learn more about world foundation models in an AI Podcast episode, airing Jan. 7, that features Ming-Yu Liu, vice president of research at NVIDIA. 

Get started with NVIDIA Cosmos and join NVIDIA at CES. Watch the Cosmos demo and Huang’s keynote.


PC Gaming in the Cloud Goes Everywhere With New Devices and AAA Games on GeForce NOW

GeForce NOW turns any device into a GeForce RTX gaming PC, and is bringing cloud gaming and AAA titles to more devices and regions.

As announced today at the CES trade show, gamers will soon be able to play titles from their Steam library at GeForce RTX quality with the launch of a native GeForce NOW app for the Steam Deck. NVIDIA is working to bring cloud gaming to the popular PC gaming handheld later this year.

In collaboration with Apple, Meta and ByteDance, NVIDIA is expanding GeForce NOW cloud gaming to Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality devices — with all the bells and whistles of NVIDIA technologies, including ray tracing and NVIDIA DLSS.

In addition, NVIDIA is launching the first GeForce RTX-powered data center in India, making gaming more accessible around the world.

Plus, GeForce NOW’s extensive library of over 2,100 supported titles is expanding with highly anticipated AAA titles. DOOM: The Dark Ages and Avowed will join the cloud when they launch on PC this year.

RTX on Deck

The Steam Deck’s portability paired with GeForce NOW opens up new possibilities for high-fidelity gaming everywhere. The native GeForce NOW app will offer up to 4K resolution and 60 frames per second with high dynamic range on Valve’s innovative Steam Deck handheld when connected to a TV, streaming from GeForce RTX-powered gaming rigs in the cloud.

Last year, GeForce NOW rolled out a beta installation method that was eagerly welcomed by the gaming community. Later this year, members will be able to download the native GeForce NOW app and install it on Steam Deck.

Steam Deck gamers can gain access to all the same benefits as GeForce RTX 4080 GPU owners with a GeForce NOW Ultimate membership, including NVIDIA DLSS 3 technology for the highest frame rates and NVIDIA Reflex for ultra-low latency. Because GeForce NOW streams from an RTX gaming rig in the cloud, the Steam Deck uses less processing power, which extends battery life compared with playing locally.

The streaming experience with GeForce NOW looks stunning, whichever way Steam Deck users want to play — whether that’s in handheld mode for HDR-quality graphics, connected to a monitor for up to 1440p 120 fps HDR or hooked up to a TV for big-screen streaming at up to 4K 60 HDR. GeForce NOW members can take advantage of RTX ON with the Steam Deck for photorealistic gameplay on supported titles, as well as HDR10 and SDR10 when connected to a compatible display for richer, more accurate color gradients.

Get ready for major upgrades to streaming on the go when the GeForce NOW app launches on the Steam Deck later this year.

Stream Beyond Reality

Get immersed in a new dimension of big-screen gaming as GeForce NOW brings AAA titles to life on Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality headsets. When the newest app update, version 2.0.70, starts rolling out later this month, these supported devices will give members access to an extensive library of games to stream through GeForce NOW by opening the browser to play.geforcenow.com.

Jump into a whole new gaming dimension with GeForce NOW.

Members can transform the space around them into a personal gaming theater with GeForce NOW. The streaming experience on these devices will support gamepad-compatible titles for members to play their favorite PC games on a massive virtual screen.

For an even more enhanced visual experience, GeForce NOW Ultimate and Performance members using these devices can tap into RTX and DLSS technologies in supported games. Members will be able to step into a world where games come to life on a grand scale, powered by GeForce NOW technologies.

Land of a Thousand Lights … and Games

New year, new data center.

NVIDIA is broadening cloud gaming in India and Latin America. The first GeForce RTX 4080-powered data center in India will launch in the first half of this year. This follows last year’s launch of GeForce NOW in Japan, as well as in Colombia and Chile, where the service is operated by GeForce NOW Alliance partner Digevo.

GeForce RTX-powered gaming in the rapidly growing Indian market will let gamers stream AAA games without needing the latest hardware. Gamers in the region can look forward to the launch of Ultimate memberships, along with all the new games and technological advancements announced at CES.

Send in the Games

AAA content from celebrated publishers is coming to the cloud. Avowed from Obsidian Entertainment, known for iconic titles such as Fallout: New Vegas, will join GeForce NOW. The cloud gaming platform will also bring DOOM: The Dark Ages from id Software, the legendary studio behind the DOOM franchise. Both will be available in the cloud at launch on PC this year.

Get ready to jump into the Living Lands.

Avowed, a first-person fantasy role-playing game, will join the cloud when it launches on PC on Tuesday, Feb. 18. Welcome to the Living Lands, an island full of mysteries and secrets, danger and adventure, choices and consequences and untamed wilderness. Take on the role of an Aedyr Empire envoy tasked with investigating a mysterious plague. Freely combine weapons and magic — harness dual-wield wands, pair a sword with a pistol or opt for a more traditional sword-and-shield approach. In-game companions — which join the players’ parties — have unique abilities and storylines that can be influenced by gamers’ choices.

Have a hell of a time in the cloud.

DOOM: The Dark Ages is the single-player, action first-person shooter prequel to the critically acclaimed DOOM (2016) and DOOM Eternal. Play as the DOOM Slayer, the legendary demon-killing warrior fighting endlessly against Hell. Experience the epic cinematic origin story of the DOOM Slayer’s rage this year.

Get ready to play these titles and more at high performance when they join GeForce NOW at launch. Ultimate members will be able to stream at up to 4K resolution and 120 fps with support for NVIDIA DLSS and Reflex technology, and experience the action even on low-powered devices. Keep an eye out on GFN Thursdays for the latest on their release dates in the cloud.

GeForce NOW is making popular devices cloud-gaming-ready while consistently delivering quality titles from top publishers to bring another ultimate year of gaming to members across the globe.

See notice regarding software product information.


NVIDIA DRIVE Partners Showcase Latest Mobility Innovations at CES

Leading global transportation companies — spanning the makers of passenger vehicles, trucks, robotaxis and autonomous delivery systems — are turning to the NVIDIA DRIVE AGX platform and AI to build the future of mobility.

NVIDIA’s automotive business provides a range of next-generation highly automated and autonomous vehicle (AV) development technologies, including cloud-based AI training, simulation and in-vehicle compute.

At the CES trade show in Las Vegas this week, NVIDIA’s customers and partners are showcasing their latest mobility innovations built on NVIDIA accelerated computing and AI.

Readying Future Vehicle Roadmaps With NVIDIA DRIVE Thor, Built on NVIDIA Blackwell

The NVIDIA DRIVE AGX Thor system-on-a-chip (SoC), built on the NVIDIA Blackwell architecture, is engineered to handle the transportation industry’s most demanding data-intensive workloads, including those involving generative AI, vision language models and large language models.

Delivering 1,000 teraflops of accelerated compute performance, DRIVE Thor is equipped to accelerate inference tasks that are critical for autonomous vehicles to understand and navigate the world around them, such as recognizing pedestrians, adjusting to inclement weather and more.

DRIVE Ecosystem Partners Transform the Show Floor and Industry at Large

NVIDIA partners are pushing the boundaries of automotive innovation with their latest developments and demos, using NVIDIA technologies and accelerated computing to advance everything from sensors, simulation and training to generative AI and teledriving.

At CES, Aurora, Continental and NVIDIA announced a long-term strategic partnership to deploy driverless trucks at scale, powered by the next-generation NVIDIA DRIVE Thor SoC. NVIDIA DRIVE Thor and DriveOS will be integrated into the Aurora Driver, an SAE level 4 autonomous driving system that Continental plans to mass-manufacture in 2027.

Arm, one of NVIDIA’s key technology partners, is the compute platform of choice for a number of innovations at CES. The Arm Neoverse V3AE CPU, designed to meet the specific safety and performance demands of automotive, is integrated with DRIVE Thor. This marks the first implementation of Arm’s next-generation automotive CPU, which combines Armv9-based technologies with data-center-class single-thread performance, alongside essential safety and security features.

Tried and True — DRIVE Orin Mainstream Adoption Continues

NVIDIA DRIVE AGX Orin, the predecessor of DRIVE Thor, continues to be a production-proven advanced driver-assistance system computer widely used in cars today — delivering 254 trillion operations per second of accelerated compute to process sensor data for safe, real-time driving decisions.

Toyota, the world’s largest automaker, will build its next-generation vehicles on the high-performance, automotive-grade NVIDIA DRIVE Orin SoC, running the safety-certified NVIDIA DriveOS. These vehicles will offer functionally safe advanced driving-assistance capabilities.

At the NVIDIA showcase on the fourth floor of the Fontainebleau, Volvo Cars’ software-defined EX90 and Nuro’s autonomous driving technology — the Nuro Driver platform — will be on display, built on NVIDIA DRIVE AGX.

Other vehicles powered by NVIDIA DRIVE Orin on display during CES include:

  • Zeekr Mix and Zeekr 001, which feature DRIVE Orin, will be on display, alongside the debut of Zeekr’s self-developed, ultra-high-performance intelligent driving domain controller built on DRIVE Thor and the NVIDIA Blackwell architecture (LVCC West Hall, booth 5640)
  • Lotus Eletre Carbon (LVCC West Hall, booth 4266 with P3 and 3SS and booth 3500 with HERE)
  • Rivian R1S and Polestar 3 activated with Dolby — vehicles on display and demos available by appointment (Park MGM/NoMad Hotel next to Dolby Live)
  • Lucid Air (LVCC West Hall booth 4964 with SoundHound AI)

NVIDIA’s partners will also showcase their automotive solutions built on NVIDIA technologies, including:

  • Arbe: Delivering next-generation, ultra-high-definition radar technology, integrating with NVIDIA DRIVE AGX to revolutionize radar-based free-space mapping with cutting-edge AI capabilities. The integration empowers manufacturers to incorporate radar data effortlessly into their perception systems, enhancing safety applications and autonomous driving. (LVCC, West Hall 7406, Diamond Lot 323)
  • Cerence: Collaborating with NVIDIA to enhance its CaLLM family of language models, including the cloud-based Cerence Automotive Large Language Model, or CaLLM, powered by DRIVE Orin.
  • Foretellix: Integrating NVIDIA Omniverse Sensor RTX APIs into its Foretify AV test management platform, enhancing object-level simulation with physically accurate sensor simulations.
  • Imagry: Building AI-driven, HD-mapless autonomous driving solutions, accelerated by NVIDIA technology, that are designed for both self-driving passenger vehicles and urban buses. (LVCC, West Hall, 5976)
  • Lenovo Vehicle Computing: Previewing (by appointment) its Lenovo AD1, a powerful automotive-grade domain controller built on the NVIDIA DRIVE Thor platform, and tailored for SAE level 4 autonomous driving.
  • Provizio: Showcasing Provizio’s 5D perception Imaging Radar, accelerated by NVIDIA technology, that delivers unprecedented, scalable, on-the-edge radar perception capabilities, with on-vehicle demonstration rides at CES.
  • Quanta: Demonstrating (by appointment) in-house NVIDIA DRIVE AGX Hyperion cameras running on its electronic control unit powered by DRIVE Orin.
  • SoundHound AI: Showcasing its work with NVIDIA to bring voice generative AI directly to the edge, bringing the intelligence of cloud-based LLMs directly to vehicles. (LVCC, West Hall, 4964)
  • Vay: Offering innovative door-to-door mobility services by combining Vay’s remote driving capabilities with NVIDIA DRIVE advanced AI and computing power.
  • Zoox: Showcasing its latest robotaxi, which leverages NVIDIA technology, driving autonomously on the streets of Las Vegas and parked in the Zoox booth. (LVCC, West Hall 3316).

Safety Is the Way for Autonomous Innovation 

At CES, NVIDIA also announced that its DRIVE AGX Hyperion platform has achieved safety certifications from TÜV SÜD and TÜV Rheinland, setting new standards for autonomous vehicle safety and innovation.

To enhance safety measures, NVIDIA also launched the DRIVE AI Systems Inspection Lab, designed to help partners meet rigorous autonomous vehicle safety and cybersecurity requirements.

In addition, complementing its three computers designed to accelerate AV development — NVIDIA AGX, NVIDIA Omniverse running on OVX and NVIDIA DGX — NVIDIA has introduced the NVIDIA Cosmos platform. Cosmos’ world foundation models and advanced data processing pipelines can dramatically scale generated data and speed up physical AI system development. With the platform’s data flywheel capability, developers can effectively transform thousands of real-world driven miles into billions of virtual miles.

Transportation leaders using Cosmos to build physical AI for AVs include Foretellix, Uber, Waabi and Wayve.

Learn more about NVIDIA’s latest automotive news by watching NVIDIA founder and CEO Jensen Huang’s opening keynote at CES.



NVIDIA Launches DRIVE AI Systems Inspection Lab, Achieves New Industry Safety Milestones

A new NVIDIA DRIVE AI Systems Inspection Lab will help automotive ecosystem partners navigate evolving industry standards for autonomous vehicle safety.

The lab, launched today, will focus on inspecting and verifying that automotive partner software and systems on the NVIDIA DRIVE AGX platform meet the automotive industry’s stringent safety and cybersecurity standards, including AI functional safety.

The lab has been accredited by the ANSI National Accreditation Board (ANAB) under ISO/IEC 17020 to perform inspections against standards including:

  • Functional safety (ISO 26262)
  • SOTIF (ISO 21448)
  • Cybersecurity (ISO 21434)
  • UN-R regulations, including UN-R 79, UN-R 13-H, UN-R 152, UN-R 155, UN-R 157 and UN-R 171
  • AI functional safety (ISO PAS 8800 and ISO/IEC TR 5469)

“The launch of this new lab will help partners in the global automotive ecosystem create safe, reliable autonomous driving technology,” said Ali Kani, vice president of automotive at NVIDIA. “With accreditation by ANAB, the lab will carry out an inspection plan that combines functional safety, cybersecurity and AI — bolstering adherence to the industry’s safety standards.”

“ANAB is proud to be the accreditation body for the NVIDIA DRIVE AI Systems Inspection Lab,” said R. Douglas Leonard Jr., executive director of ANAB. “NVIDIA’s comprehensive evaluation verifies the demonstration of competence and compliance with internationally recognized standards, helping ensure that DRIVE ecosystem partners meet the highest benchmarks for functional safety, cybersecurity and AI integration.”

The new lab builds on NVIDIA’s ongoing safety compliance work with Mercedes-Benz and JLR. Inaugural participants in the lab include Continental and Sony SSS-America.

“We are pleased to participate in the newly launched NVIDIA DRIVE AI Systems Inspection Lab and to further intensify the fruitful, ongoing collaboration between our two companies,” said Norbert Hammerschmidt, head of components business at Continental.

“Self-driving vehicles have the capability to significantly enhance safety on roads,” said Marius Evensen, head of automotive image sensors at Sony SSS-America. “We look forward to working with NVIDIA’s DRIVE AI Systems Inspection Lab to help us deliver the highest levels of safety to our customers.”

“Compliance with functional safety, SOTIF and cybersecurity is particularly challenging for complex systems such as AI-based autonomous vehicles,” said Riccardo Mariani, head of industry safety at NVIDIA. “Through the DRIVE AI Systems Inspection Lab, the correctness of the integration of our partners’ products with DRIVE safety and cybersecurity requirements can be inspected and verified.”

Now open to all NVIDIA DRIVE AGX platform partners, the lab is expected to expand to include additional automotive and robotics products and add a testing component.

Complementing International Automotive Safety Standards

The NVIDIA DRIVE AI Systems Inspection Lab complements the missions of independent third-party certification bodies, including technical service organizations such as TÜV SÜD, TÜV Rheinland and exida, as well as vehicle certification agencies such as VCA and KBA.

Today’s announcement dovetails with recent significant safety certifications and assessments of NVIDIA automotive products:

TÜV SÜD granted the ISO 21434 Cybersecurity Process certification to NVIDIA for its automotive system-on-a-chip, platform and software engineering processes. Once final certification activities are complete, the NVIDIA DriveOS 6.0 operating system will conform to ISO 26262 Automotive Safety Integrity Level (ASIL) D standards.

“Meeting cybersecurity process requirements is of fundamental importance in the autonomous vehicle era,” said Martin Webhofer, CEO of TÜV SÜD Rail GmbH. “NVIDIA has successfully established processes, activities and procedures that fulfill the stringent requirements of ISO 21434. Additionally, NVIDIA DriveOS 6.0 conforms to ISO 26262 ASIL D standards, pending final certification activities.”

TÜV Rheinland performed an independent United Nations Economic Commission for Europe safety assessment of NVIDIA DRIVE AV related to safety requirements for complex electronic systems.

“NVIDIA has demonstrated thorough, high-quality, safety-oriented processes and technologies in the context of the assessment of the generic, non-OEM-specific parts of the SAE level 2 NVIDIA DRIVE system,” said Dominik Strixner, global lead functional safety automotive mobility at TÜV Rheinland.

To learn more about NVIDIA’s work in advancing autonomous driving safety, read the NVIDIA Self-Driving Safety Report.


How AI Is Helping Us Do Better—for the Planet and for Each Other

Artificial intelligence and accelerated computing are being used to help solve the world’s greatest challenges.

NVIDIA has reinvented the computing stack — spanning GPUs, CPUs, DPUs, networking and software. Our platform drives the AI revolution, powering hundreds of millions of devices in every cloud and fueling 75% of the world’s TOP500 supercomputers.

Put in the hands of entrepreneurs and enterprises, developers and scientists, that platform becomes a system for invention, and a force for good across industries and geographies.

Here are five examples from the past year of how these technologies are being put to work:

Supporting Surgeons

Illinois-based startup SimBioSys has created TumorSight Viz, a technology that converts MRI images into 3D models of breast tissue. This helps surgeons better treat breast cancers by providing detailed visualizations of tumors and surrounding tissue.

Saving Lives and Energy

Researchers at the Wellcome Sanger Institute, a key player in the Human Genome Project, analyze tens of thousands of cancer genomes annually, providing insights into cancer formation and treatment effectiveness. NVIDIA accelerated computing and software drastically reduce the institute’s analysis runtime and energy consumption per genome.

Cleaning Up Our Waters

Clearbot, developed by University of Hong Kong grads, is an AI-driven sea-cleaning boat that autonomously collects trash from the water. Enabled by the NVIDIA Jetson platform, Clearbot is making a splash in Hong Kong and India, helping keep tourist regions clean.

Greening Recycling Plants

Greyparrot, a UK-based startup, has developed the Greyparrot Analyzer, an AI-powered device that offers “waste intelligence” to recycling plants. Using embedded cameras and machine learning, the analyzer identifies and differentiates materials on conveyor belts, significantly improving recycling efficiency.

Driving Technological Advancement in Africa

A new AI innovation hub has launched in Tunisia, part of NVIDIA’s efforts to train 100,000 developers across Africa. Built in collaboration with the NVIDIA Deep Learning Institute, the hub offers training, technologies and business networks to drive AI adoption across the continent.

All of these initiatives — whether equipping surgeons with new tools or making recycling plants greener — rely on the ingenuity of human beings across the globe, humans increasingly supercharged by AI.

Find more examples of how AI is helping people from across industries and the globe to make a difference and drive positive social impact.


GeForce NOW Rings in the New Year With 14 New Games

GeForce NOW is kicking off 2025 by delivering 14 games to the cloud this month, with two available to stream this week so members can get started on their New Year’s gaming resolutions.

This year’s CES trade show will open with a keynote from NVIDIA founder and CEO Jensen Huang on Monday, Jan. 6. GeForce NOW is offering members front-row seats in a virtual stadium, so they can hear the latest announcements and get hyped with livestreams — no downloads or installations required.

It’s all powered by GeForce NOW cloud streaming and hosted by ZENOS, an innovative virtual stadium platform. Members can enter the virtual stadium starting at 3 p.m. PT on Monday, Jan. 6.

In addition, gear up to participate in NVIDIA GeForce LAN 50 gaming missions starting on Saturday, Jan. 4, at 4:30 p.m. PT. Stream #GeForceGreats games to unlock incredible in-game rewards with GeForce NOW.

Mission Possible

Your mission begins here.

It’s rewarding to be a GeForce NOW member. Unlock exclusive rewards during CES by doing what gamers do best — playing the game. Members can participate in GeForce LAN 50 gaming missions even without a game-ready rig.

To participate, stream the following featured GeForce Greats games on GeForce NOW and earn exclusive in-game items:

  • Diablo IV: Creeping Shadows Mount Armor Bundle
  • The Elder Scrolls Online: Pineblossom Vale Elk Mount
  • The Finals: Legendary Corrugatosaurus Mask
  • World of Warcraft: Armored Bloodwing Mount
  • Fallout 76: Settler Work Chief Outfit and Raider Nomad Outfit

Complete each game’s mission to become eligible for the associated reward. Rewards will be available to redeem on Thursday, Jan. 9. They’re available on a first-come, first-served basis, so make sure to jump in right away.

With GeForce NOW, members can participate in the event on any supported device, whether a PC, Mac, SHIELD TV or mobile device, with access to GeForce RTX gaming rigs in the cloud for maximum performance.

Front-Row Seat to NVIDIA at CES 2025

Step into the action, wherever you are.

Join others around the world as NVIDIA celebrates the latest advancements in gaming, technology and generative AI at CES, starting with livestreams from the GeForce LAN 50 online event that’ll lead up to the show’s opening NVIDIA keynote — all from the comfort of home, no trip to Las Vegas needed.

Enter the virtual stadium using the GeForce NOW app on PC, Mac or through a web browser for those without a GeForce NOW membership. The virtual stadium is hosted by ZENOS, which uses NVIDIA’s cloud gaming infrastructure to deliver high-fidelity, low-latency streaming experiences directly via web browsers, making live events more accessible worldwide.

Who’s number one?

Enhance the experience by signing in with a GeForce NOW membership, and create a “Ready Player Me” avatar and account to save digital characters for future visits. Members can link their Twitch accounts to chat, emote with other viewers in the stadium and collect NVIDIA-branded digital items, including NVIDIA foam fingers and jerseys, to customize their avatars.

New Year, New Games

Look for the following games available to stream in the cloud this week:

Here’s what to expect for the rest of January:

  • Builders of Egypt (New release on Steam, Jan. 8)
  • Hyper Light Breaker (New release on Steam, Jan. 14)
  • Jötunnslayer: Hordes of Hel (New release on Steam, Jan. 21)
  • Eternal Strands (New release on Steam, Jan. 27)
  • Space Engineers 2 (New release on Steam, Jan. 27)
  • Orcs Must Die! Deathtrap (New release on Steam, Jan. 28)
  • Heart of the Machine (New release on Steam, Jan. 31)
  • Citizen Sleeper 2: Starward Vector (New release on Steam, Jan. 31)
  • Drova – Forsaken Kin (Steam)
  • Pax Dei (Steam)
  • Sniper Elite: Resistance (New release on Xbox, available on PC Game Pass, Jan. 30)
  • Voidwrought (Steam)

Outstanding December

In addition to the 13 games announced last month, four more joined the GeForce NOW library, including the new games listed this week:

  • Indiana Jones and the Great Circle (New release on Steam and Xbox, available on the Microsoft Store and PC Game Pass, Dec. 8)
  • Diablo Immortal (Battle.net)
  • Headquarters: World War II (Steam)
  • Zenless Zone Zero v1.4 (HoYoverse)

What are you planning to play this weekend? Let us know on X or in the comments below.


Dancing on Their Own: NVIDIA-Powered Robots to Adore From 2024

Artificial intelligence made big moves this year — as did the robots the technology is working behind.

From Silicon Valley to India, Boston to Japan, here are some of the autonomous machines and robotics technologies, powered by NVIDIA AI, that offered helping hands in 2024.

Clearbot Tidies Up Ocean Litter

Hong Kong- and India-based Clearbot, a member of the NVIDIA Inception program for cutting-edge startups, is making a splash with its autonomous trash-collection boats enabled by the NVIDIA Jetson platform for edge AI and robotics.

With plastic making up at least 85% of marine waste across the globe, Clearbot aims to remove waste from waterways before it gets into the seas.

The startup’s autonomous ocean vessels are equipped with two cameras — one for navigation and another for identifying the waste the boats have scooped up.

Clearbot trained its garbage- and obstacle-identifying AI models using NVIDIA accelerated computing. The energy-efficient NVIDIA Jetson Xavier NX system-on-module allows the battery-powered water-cleaning boats to collect for eight hours at a time.

Figure Unveils Latest Humanoid Robot

Silicon Valley-based Figure introduced its Figure 02 conversational humanoid robot that taps into the NVIDIA Omniverse platform and accelerated computing for fully autonomous tasks.

New human-scale hands, six RGB cameras and perception AI models trained with synthetic data generated in the NVIDIA Isaac Sim robotics developer simulation platform enable Figure 02 to perform high-precision pick-and-place tasks required for smart manufacturing applications.

The company recently tested Figure 02 for data collection and use-case training at BMW Group’s Spartanburg, South Carolina, production line.

Figure is a part of NVIDIA Inception and among the initial members to join the NVIDIA Humanoid Robot Developer Program, which provides early access to advanced tools and computing technologies for humanoid robot development. This includes the latest releases of NVIDIA Isaac Sim, Isaac Lab, NIM microservices, OSMO, Jetson Thor and Project GR00T general-purpose humanoid foundation models.

ORBIT-Surgical Researchers Prep Robots for Surgery

The ORBIT-Surgical simulation framework — developed by researchers from the University of Toronto, UC Berkeley, ETH Zürich, Georgia Tech and NVIDIA — helps train robots that could augment the skills of surgical teams while reducing surgeons’ cognitive loads.

It supports more than a dozen maneuvers inspired by the training curriculum for minimally invasive surgeries.

Presented at this year’s ICRA robotics conference in Japan, the needle-moving robotics research — based on Isaac Sim and Omniverse — also introduced benchmarks for one-handed tasks such as picking up a piece of gauze, inserting a shunt into a blood vessel and lifting a suture needle to a specific position.

The benchmarks also include two-handed tasks, like passing a threaded needle through a ring pole.

Zordi Robots Grow Strawberries Indoors

Boston-based autonomous agriculture startup Zordi — with farms in southern New Jersey and western New York — is tapping into robotics to grow strawberries indoors.

Autonomous greenhouse systems can support regional markets across the globe, cutting down on the carbon footprint for transportation and providing fresher, better-tasting fruits grown more sustainably.

Zordi uses two types of robots within its operations: a scouting robot that gathers information on plant health using foundation models, and a harvesting robot that delicately picks and places fruits and handles other tasks.

The startup is exploring NVIDIA Jetson AGX Orin modules for gathering sensor data and running its AI models.

Cutting-edge companies weren’t the only ones to drive major robotics advancements this year. For example, high school and university students are developing robot guide dogs for the visually impaired and training robots to perform 1,000 household chores.

Learn more about the latest in generative AI and autonomous machines — including NVIDIA’s three-computer approach to robotics — by joining NVIDIA at CES, running Jan. 6-10 in Las Vegas.
