AI Gets Real for Retailers: 9 Out of 10 Retailers Now Adopting or Piloting AI, Latest NVIDIA Survey Finds

Artificial intelligence is rapidly becoming the cornerstone of innovation in the retail and consumer packaged goods (CPG) industries.

Forward-thinking companies are using AI to reimagine their entire business models, from in-store experiences to omnichannel digital platforms, including ecommerce, mobile and social channels. This technological wave is simultaneously transforming advertising and marketing, customer engagement and supply chain operations. By harnessing AI, retailers and CPG brands are not just adapting to change — they’re actively shaping the future of commerce.

NVIDIA’s second annual “State of AI in Retail and CPG” survey provides insights into the adoption, investment and impact of AI, including generative AI; the top use cases and challenges; and a special section this year examining the use of AI in the supply chain. It’s an in-depth look at the current ecosystem of AI in retail and CPG, and how it’s transforming the industries.

Drawn from hundreds of responses from industry professionals, key highlights of the survey show:

  • 89% of respondents said they are either actively using AI in their operations or assessing AI projects, including trials and pilots (up from 82% in 2023)
  • 87% said AI had a positive impact on increasing annual revenue
  • 94% said AI has helped reduce annual operational costs
  • 97% said spending on AI would increase in the next fiscal year

Generative AI in Retail Takes Center Stage

Generative AI has found a strong foothold in retail and CPG, with over 80% of companies either using it or piloting projects. Companies are harnessing the technology especially for content generation in marketing and advertising, as well as for customer analysis and predictive analytics.

Consistent with last year’s survey, over 50% of retailers believe that generative AI is a strategic technology that will be a differentiator in the market.

The top use cases for generative AI in retail include:

  • Content generation for marketing (60%)
  • Predictive analytics (44%)
  • Personalized marketing and advertising (42%)
  • Customer analysis and segmentation (41%)
  • Digital shopping assistants or copilots (40%)

While some concerns about generative AI exist, specifically around data privacy, security and implementation costs, these concerns haven’t dampened retailers’ enthusiasm, with 93% of respondents saying they still plan to increase generative AI investment next year.

AI Across the Retail Landscape

AI use cases have proliferated across nearly every line of business in retail, with over 50% of retailers using AI in more than six different use cases throughout their operations.

In physical stores, the top three use cases are inventory management, analytics and insights, and adaptive advertising. For digital retail, they’re marketing and advertising content creation, and hyperpersonalized recommendations. And in the back office, the top use cases are customer analysis and predictive analytics.

AI has made a significant impact in retail and CPG, with improved insights and decision-making (43%) and enhanced employee productivity (42%) being listed as top benefits among survey respondents.

The most common AI challenge retailers faced in 2024 was a lack of easy-to-understand, explainable AI tools, underscoring the need for software and solutions, particularly around generative AI and AI agents, that make it easier for companies to use AI and understand how it works.

AI in the Supply Chain

Managing the supply chain has always been a challenge for retail and CPG companies, but it’s become increasingly difficult over the last several years due to tumultuous global events and shifting consumer preferences. Companies are feeling the pressure, with 59% of respondents saying that their supply chain challenges have grown in the last year.

Increasingly, companies are turning to AI to help address these challenges, and the impact of these AI solutions is starting to show up in results.

  • 58% said AI is helping to improve operational efficiency and throughput.
  • 45% are using AI to reduce supply chain costs.
  • 42% are employing AI to meet shifting customer expectations.

Investment in AI for supply chain management is set to grow, with 82% of companies planning to increase spending in the next fiscal year.

As the retail and CPG industries continue to embrace the power of AI, the findings from the latest survey underscore a pivotal shift in how businesses operate in a complex new landscape. Leading companies are harnessing advanced technologies — such as AI agents and physical AI — to enhance efficiency and drive revenue, as well as to position themselves as leaders in innovation, helping redefine the future of retail and CPG.

Download the “State of AI in Retail and CPG: 2025 Trends” report for in-depth results and insights.

Explore NVIDIA’s AI solutions and enterprise-level platforms for retail.

Hyundai Motor Group Embraces NVIDIA AI and Omniverse for Next-Gen Mobility

Driving the future of smart mobility, Hyundai Motor Group (the Group) is partnering with NVIDIA to develop the next generation of safe, secure mobility with AI and industrial digital twins.

Announced today at the CES trade show in Las Vegas, this latest work will elevate Hyundai Motor Group’s smart mobility innovation with NVIDIA accelerated computing, generative AI, digital twins and physical AI technologies.

The Group is launching a broad range of AI initiatives into its key mobility products, including software-defined vehicles and robots, along with optimizing its manufacturing lines.

“Hyundai Motor Group is exploring innovative approaches with AI technologies in various fields such as robotics, autonomous driving and smart factory,” said Heung-Soo Kim, executive vice president and head of the global strategy office at Hyundai Motor Group. “This partnership is set to accelerate our progress, positioning the Group as a frontrunner in driving AI-empowered mobility innovation.”

Hyundai Motor Group will tap into NVIDIA’s data-center-level computing and infrastructure to efficiently manage the massive data volumes essential for training its advanced AI models and building a robust autonomous vehicle (AV) software stack.

Manufacturing Intelligence With Simulation and Digital Twins

With the NVIDIA Omniverse platform running on NVIDIA OVX systems, Hyundai Motor Group will build a digital thread across its existing software tools to achieve highly accurate product design and prototyping in a digital twin environment. This will help boost engineering efficiencies, reduce costs and accelerate time to market.

The Group will also work with NVIDIA to create simulated environments for developing autonomous driving systems and validating self-driving applications.

Simulation is becoming increasingly critical to the safe deployment of AVs. It provides a safe way to test self-driving technology in any possible weather, traffic condition or location, as well as in rare or dangerous scenarios.

Hyundai Motor Group will develop applications, like digital twins using Omniverse technologies, to optimize its existing and future manufacturing lines in simulation. These digital twins can improve production quality, streamline costs and enhance overall manufacturing efficiencies.

The company can also build and train industrial robots for safe deployment in its factories using NVIDIA Isaac Sim, a robotics simulation framework built on Omniverse.

NVIDIA is helping advance robotics intelligence with AI tools and libraries for automated manufacturing. As a result, Hyundai Motor Group can conduct industrial robot training in physically accurate virtual environments — optimizing manufacturing and enhancing quality.

This can also help make interactions with these robots and their real-world surroundings more intuitive and effective while ensuring they can work safely alongside humans.

Using NVIDIA technology, Hyundai Motor Group is driving the creation of safer, more intelligent vehicles, enhancing manufacturing with greater efficiency and quality, and deploying cutting-edge robotics to build a smarter, more connected digital workplace.

The partnership was formalized during a signing ceremony that took place last night at CES.

Learn more about how NVIDIA technologies are advancing autonomous vehicles.

GeForce NOW at CES: Bring PC RTX Gaming Everywhere With the Power of GeForce NOW

This GFN Thursday recaps the latest cloud announcements from the CES trade show, including GeForce RTX gaming expansion across popular devices such as Steam Deck, Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico mixed-reality devices.

Gamers in India will also be able to access their PC gaming libraries at GeForce RTX 4080 quality with an Ultimate membership for the first time in the region. This follows expansion in Chile and Colombia with GeForce NOW Alliance partner Digevo.

More AAA gaming is on the way, with highly anticipated titles DOOM: The Dark Ages and Avowed joining GeForce NOW’s extensive library of over 2,100 supported titles when they launch on PC later this year.

Plus, no GFN Thursday is complete without new games. Get ready for six new titles joining the cloud this week.

Head in the Clouds

CES 2025 is coming to a close, but GeForce NOW members still have lots to look forward to.

Members will be able to play over 2,100 titles from the GeForce NOW cloud library at GeForce RTX quality on Valve’s popular Steam Deck device with the launch of a native GeForce NOW app, coming later this year. Steam Deck gamers can gain access to all the same benefits as GeForce RTX 4080 GPU owners with a GeForce NOW Ultimate membership, including NVIDIA DLSS 3 technology for the highest frame rates and NVIDIA Reflex for ultra-low latency.

GeForce NOW delivers a stunning streaming experience, no matter how Steam Deck users choose to play, whether in handheld mode for high dynamic range (HDR)-quality graphics, connected to a monitor for up to 1440p 120 frames per second HDR, or hooked up to a TV for big-screen streaming at up to 4K 60 fps.

GeForce NOW members can take advantage of RTX ON with the Steam Deck for photorealistic gameplay on supported titles, as well as HDR10 and SDR10 when connected to a compatible display for richer, more accurate color gradients.

Get immersed in a new dimension of big-screen gaming. In collaboration with Apple, Meta and ByteDance, NVIDIA is expanding GeForce NOW cloud gaming to Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico virtual- and mixed-reality devices — with all the bells and whistles of NVIDIA technologies, including ray tracing and NVIDIA DLSS.

In addition, NVIDIA will launch the first GeForce RTX-powered data center in India this year, making gaming more accessible around the world. This follows the recent launch of GeForce NOW in Colombia and Chile — operated by GeForce NOW Alliance partner Digevo — as well as Thailand coming soon — to be operated by GeForce NOW Alliance partner Brothers Picture.

Game On

AAA content from celebrated publishers is coming to the cloud. Avowed from Obsidian Entertainment, known for iconic titles such as Fallout: New Vegas, will join GeForce NOW. The cloud gaming platform will also bring DOOM: The Dark Ages from id Software — the legendary studio behind the DOOM franchise. These titles will be available at launch on PC this year.

Avowed, a first-person fantasy role-playing game, will join the cloud when it launches on PC on Tuesday, Feb. 18. Take on the role of an Aedyr Empire envoy tasked with investigating a mysterious plague. Freely combine weapons and magic — harness dual-wield wands, pair a sword with a pistol or opt for a more traditional sword-and-shield approach. In-game companions — which join the players’ parties — have unique abilities and storylines that can be influenced by gamers’ choices.

DOOM: The Dark Ages is the single-player, action first-person shooter prequel to the critically acclaimed DOOM (2016) and DOOM Eternal. Play as the DOOM Slayer, the legendary demon-killing warrior fighting endlessly against Hell. Experience the epic cinematic origin story of the DOOM Slayer’s rage in 2025.

Shiny New Games

Look for the following games available to stream in the cloud this week:

  • Road 96 (New release on Xbox, available on PC Game Pass, Jan. 7)
  • Builders of Egypt (New release on Steam, Jan. 8)
  • DREDGE (Epic Games Store)
  • Drova – Forsaken Kin (Steam)
  • Kingdom Come: Deliverance (Xbox, available on Microsoft Store)
  • Marvel Rivals (Steam, coming to the cloud after the launch of Season 1)

What are you planning to play this weekend? Let us know on X or in the comments below.

Unveiling a New Era of Local AI With NVIDIA NIM Microservices and AI Blueprints

Over the past year, generative AI has transformed the way people live, work and play, enhancing everything from writing and content creation to gaming, learning and productivity. PC enthusiasts and developers are leading the charge in pushing the boundaries of this groundbreaking technology.

Countless times, industry-defining technological breakthroughs have been invented in one place — a garage. This week marks the start of the RTX AI Garage series, which will offer regular content for developers and enthusiasts looking to learn more about NVIDIA NIM microservices and AI Blueprints, and how to build AI agents, creative workflows, digital humans, productivity apps and more on AI PCs. Welcome to the RTX AI Garage.

This first installment spotlights announcements made earlier this week at CES, including new AI foundation models available on NVIDIA RTX AI PCs that take digital humans, content creation, productivity and development to the next level.

These models — offered as NVIDIA NIM microservices — are powered by new GeForce RTX 50 Series GPUs. Built on the NVIDIA Blackwell architecture, RTX 50 Series GPUs deliver up to 3,352 trillion AI operations per second, include 32GB of VRAM and feature FP4 compute, doubling AI inference performance and enabling generative AI to run locally with a smaller memory footprint.

NVIDIA also introduced NVIDIA AI Blueprints — ready-to-use, preconfigured workflows, built on NIM microservices, for applications like digital humans and content creation.

NIM microservices and AI Blueprints empower enthusiasts and developers to build, iterate and deliver AI-powered experiences to the PC faster than ever. The result is a new wave of compelling, practical capabilities for PC users.

Fast-Track AI With NVIDIA NIM

There are two key challenges to bringing AI advancements to PCs. First, the pace of AI research is breakneck, with new models appearing daily on platforms like Hugging Face, which now hosts over a million models. As a result, breakthroughs quickly become outdated.

Second, adapting these models for PC use is a complex, resource-intensive process. Optimizing them for PC hardware, integrating them with AI software and connecting them to applications requires significant engineering effort.

NVIDIA NIM helps address these challenges by offering prepackaged, state-of-the-art AI models optimized for PCs. These NIM microservices span model domains, can be installed with a single click, feature application programming interfaces (APIs) for easy integration, and harness NVIDIA AI software and RTX GPUs for accelerated performance.

At CES, NVIDIA announced a pipeline of NIM microservices for RTX AI PCs, supporting use cases spanning large language models (LLMs), vision-language models, image generation, speech, retrieval-augmented generation (RAG), PDF extraction and computer vision.

The new Llama Nemotron family of open models provides high accuracy on a wide range of agentic tasks. The Llama Nemotron Nano model, which will be offered as a NIM microservice for RTX AI PCs and workstations, excels at agentic AI tasks like instruction following, function calling, chat, coding and math.

Soon, developers will be able to quickly download and run these microservices on Windows 11 PCs using Windows Subsystem for Linux (WSL).

To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, NVIDIA previewed Project R2X, a vision-enabled PC avatar that can put information at a user’s fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more. Sign up for Project R2X updates.

By using NIM microservices, AI enthusiasts can skip the complexities of model curation, optimization and backend integration and focus on creating and innovating with cutting-edge AI models.

What’s in an API?

An API is the way in which an application communicates with a software library. An API defines a set of “calls” that the application can make to the library and what the application can expect in return. Traditional AI APIs require a lot of setup and configuration, making AI capabilities harder to use and hampering innovation.

NIM microservices expose easy-to-use, intuitive APIs that an application can simply send requests to and get a response. In addition, they’re designed around the input and output media for different model types. For example, LLMs take text as input and produce text as output, image generators convert text to image, speech recognizers turn speech to text and so on.
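To make that concrete, here is a minimal sketch of what calling such a microservice can look like, assuming an LLM NIM microservice is already running locally and exposing an OpenAI-compatible chat endpoint; the host, port and model name below are placeholders for illustration, not documented values:

```python
import requests

# Hypothetical local endpoint: NIM LLM microservices expose an
# OpenAI-compatible HTTP API, but this host, port and model name
# are placeholders for illustration.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize this paragraph in one sentence."}
    ],
    "max_tokens": 128,
}

# The application simply sends a request and reads the response; no model
# loading, optimization or backend setup appears in application code.
response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface is plain HTTP, the same application code works whether the microservice runs on a local RTX GPU or in a data center; only the endpoint URL changes.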

The microservices are designed to integrate seamlessly with leading AI development and agent frameworks such as AI Toolkit for VSCode, AnythingLLM, ComfyUI, Flowise AI, LangChain, Langflow and LM Studio. Developers can easily download and deploy them from build.nvidia.com.

By bringing these APIs to RTX, NVIDIA NIM will accelerate AI innovation on PCs.

Enthusiasts are expected to be able to experience a range of NIM microservices using an upcoming release of the NVIDIA ChatRTX tech demo.

A Blueprint for Innovation

By using state-of-the-art models, prepackaged and optimized for PCs, developers and enthusiasts can quickly create AI-powered projects. Taking things a step further, they can combine multiple AI models and other functionality to build complex applications like digital humans, podcast generators and application assistants.

NVIDIA AI Blueprints, built on NIM microservices, are reference implementations for complex AI workflows. They help developers connect several components, including libraries, software development kits and AI models, together in a single application.

AI Blueprints include everything that a developer needs to build, run, customize and extend the reference workflow, which includes the reference application and source code, sample data, and documentation for customization and orchestration of the different components.

At CES, NVIDIA announced two AI Blueprints for RTX: one for PDF to podcast, which lets users generate a podcast from any PDF, and another for 3D-guided generative AI, which is based on FLUX.1 [dev], expected to be offered as a NIM microservice, and offers artists greater control over text-based image generation.

With AI Blueprints, developers can quickly go from AI experimentation to AI development for cutting-edge workflows on RTX PCs and workstations.

Built for Generative AI

The new GeForce RTX 50 Series GPUs are purpose-built to tackle complex generative AI challenges, featuring fifth-generation Tensor Cores with FP4 support, faster G7 memory and an AI-management processor for efficient multitasking between AI and creative workflows.

The GeForce RTX 50 Series adds FP4 support to help bring better performance and more models to PCs. FP4 is a lower-precision quantization method, similar to file compression, that decreases model sizes. Compared with FP16, the default precision most models use, FP4 requires less than half the memory, and 50 Series GPUs provide over 2x performance compared with the previous generation. Advanced quantization methods offered by NVIDIA TensorRT Model Optimizer make this possible with virtually no loss in quality.

For example, Black Forest Labs’ FLUX.1 [dev] model at FP16 requires over 23GB of VRAM, meaning it can only be supported by the GeForce RTX 4090 and professional GPUs. With FP4, FLUX.1 [dev] requires less than 10GB, so it can run locally on more GeForce RTX GPUs.

With a GeForce RTX 4090 with FP16, the FLUX.1 [dev] model can generate images in 15 seconds with 30 steps. With a GeForce RTX 5090 with FP4, images can be generated in just over five seconds.
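The memory arithmetic behind those figures can be sketched from bytes per weight alone. This back-of-the-envelope estimate assumes the publicly stated size of roughly 12 billion parameters for FLUX.1 [dev] and counts weight storage only; real usage adds overhead for activations, text encoders and framework buffers:

```python
# Back-of-the-envelope VRAM estimate for model weights at different
# precisions. The ~12B parameter count for FLUX.1 [dev] is an assumption
# based on public descriptions of the model; totals exclude overhead.
PARAMS = 12e9  # assumed parameter count

def weight_gb(params: float, bits_per_weight: int) -> float:
    """Raw weight storage in decimal gigabytes at a given precision."""
    return params * bits_per_weight / 8 / 1e9

fp16 = weight_gb(PARAMS, 16)  # 24.0 GB, in line with the "over 23GB" cited
fp4 = weight_gb(PARAMS, 4)    # 6.0 GB, leaving headroom under the 10GB cited

print(f"FP16 weights: {fp16:.1f} GB")
print(f"FP4 weights:  {fp4:.1f} GB ({fp16 / fp4:.0f}x smaller)")
```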

Get Started With the New AI APIs for PCs

NVIDIA NIM microservices and AI Blueprints are expected to be available starting next month, with initial hardware support for GeForce RTX 50 Series, GeForce RTX 4090 and 4080, and NVIDIA RTX 6000 and 5000 professional GPUs. Additional GPUs will be supported in the future.

NIM-ready RTX AI PCs are expected to be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcon, Origin PC, PCS and Scan.

GeForce RTX 50 Series GPUs and laptops deliver game-changing performance, power transformative AI experiences, and enable creators to complete workflows in record time. Rewatch NVIDIA CEO Jensen Huang’s keynote to learn more about NVIDIA’s AI news unveiled at CES.

See notice regarding software product information.

Why World Foundation Models Will Be Key to Advancing Physical AI

In the fast-evolving landscape of AI, it’s becoming increasingly important to develop models that can accurately simulate and predict outcomes in physical, real-world environments to enable the next generation of physical AI systems.

Ming-Yu Liu, vice president of research at NVIDIA and an IEEE Fellow, joined the NVIDIA AI Podcast to discuss the significance of world foundation models (WFMs) — powerful neural networks that can simulate physical environments. WFMs can generate detailed videos from text or image input data and predict how a scene evolves by combining its current state (image or video) with actions (such as prompts or control signals).

“World foundation models are important to physical AI developers,” said Liu. “They can imagine many different environments and can simulate the future, so we can make good decisions based on this simulation.”

This is particularly valuable for physical AI systems, such as robots and self-driving cars, which must interact safely and efficiently with the real world.

Why Are World Foundation Models Important?

Building world models often requires vast amounts of data, which can be difficult and expensive to collect. WFMs can generate synthetic data, providing a rich, varied dataset that enhances the training process.

In addition, training and testing physical AI systems in the real world can be resource-intensive. WFMs provide virtual, 3D environments where developers can simulate and test these systems in a controlled setting without the risks and costs associated with real-world trials.

Open Access to World Foundation Models

At the CES trade show, NVIDIA announced NVIDIA Cosmos, a platform of generative WFMs that accelerate the development of physical AI systems such as robots and self-driving cars.

The platform is designed to be open and accessible, and includes pretrained WFMs based on diffusion and auto-regressive architectures, along with tokenizers that can compress videos into tokens for transformer models.

Liu explained that with these open models, enterprises and developers have all the ingredients they need to build large-scale models. The open platform also provides teams with the flexibility to explore various options for training and fine-tuning models, or build their own based on specific needs.

Enhancing AI Workflows Across Industries

WFMs are expected to enhance AI workflows and development in various industries. Liu sees particularly significant impacts in two areas:

“The self-driving car industry and the humanoid [robot] industry will benefit a lot from world model development,” said Liu. “[WFMs] can simulate different environments that will be difficult to have in the real world, to make sure the agent behaves respectively.”

For self-driving cars, these models can simulate environments that allow for comprehensive testing and optimization. For example, a self-driving car can be tested in various simulated weather conditions and traffic scenarios to help ensure it performs safely and efficiently before deployment on roads.

In robotics, WFMs can simulate and verify the behavior of robotic systems in different environments to make sure they perform tasks safely and efficiently before deployment.

NVIDIA is collaborating with companies like 1X, Hillbot and XPENG to help address challenges in physical AI development and advance their systems.

“We are still in the infancy of world foundation model development — it’s useful, but we need to make it more useful,” Liu said. “We also need to study how to best integrate these world models into the physical AI systems in a way that can really benefit them.”

Listen to the podcast with Ming-Yu Liu, or read the transcript.

Learn more about NVIDIA Cosmos and the latest announcements in generative AI and robotics by watching the CES opening keynote by NVIDIA founder and CEO Jensen Huang, as well as joining NVIDIA sessions at the show.

Why Enterprises Need AI Query Engines to Fuel Agentic AI

Data is the fuel of AI applications, but the magnitude and scale of enterprise data often make it too expensive and time-consuming to use effectively.

According to IDC’s Global DataSphere¹, enterprises will generate 317 zettabytes of data annually by 2028 — including the creation of 29 zettabytes of unique data — of which 78% will be unstructured data, and 44% of that will be audio and video. Because of this extremely high volume and variety of data types, most generative AI applications use only a fraction of the total amount of data being stored and generated.
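A quick sketch makes those proportions concrete, reading “44% of that” as 44% of the unstructured share:

```python
# Working through the IDC projection quoted above (annual figures for 2028).
total_zb = 317                           # zettabytes generated per year
unstructured_zb = total_zb * 0.78        # 78% unstructured -> ~247 ZB
audio_video_zb = unstructured_zb * 0.44  # 44% of unstructured -> ~109 ZB

print(f"Unstructured data: {unstructured_zb:.0f} ZB per year")
print(f"Audio and video:   {audio_video_zb:.0f} ZB per year")
```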

For enterprises to thrive in the AI era, they must find a way to make use of all of their data. This isn’t possible using traditional computing and data processing techniques. Instead, enterprises need an AI query engine.

What Is an AI Query Engine?

Simply put, an AI query engine is a system that connects AI applications, or AI agents, to data. It’s a critical component of agentic AI, as it serves as a bridge between an organization’s knowledge base and AI-powered applications, enabling more accurate, context-aware responses.

AI agents form the basis of an AI query engine: they gather information from many data sources, plan, reason and take action to assist human employees. AI agents can communicate directly with users, or they can work in the background, where human feedback and interaction remain available when needed.

In practice, an AI query engine is a sophisticated system that efficiently processes large amounts of data, extracts and stores knowledge, and performs semantic search on that knowledge so it can be quickly retrieved and used by AI.

An AI query engine processes, stores and retrieves data — connecting AI agents to insights.

AI Query Engines Unlock Intelligence in Unstructured Data

An enterprise’s AI query engine will have access to knowledge stored in many different formats, but being able to extract intelligence from unstructured data is one of the most significant advancements it enables.

To generate insights, traditional query engines rely on structured queries and data sources, such as relational databases. Users must formulate precise queries using languages like SQL, and results are limited to predefined data formats.

In contrast, AI query engines can process structured, semi-structured and unstructured data. Common unstructured data formats include PDFs, log files, images and video, typically stored on object stores, file servers and parallel file systems. AI agents communicate with users and with each other using natural language, which lets them interpret user intent, even when it’s ambiguous, by accessing diverse data sources. These agents can deliver results in a conversational format that users can readily interpret.

This capability makes it possible to derive more insights and intelligence from any type of data — not just data that fits neatly into rows and columns.

For example, companies like DataStax and NetApp are building AI data platforms that enable their customers to have an AI query engine for their next-generation applications.

Key Features of AI Query Engines

AI query engines possess several crucial capabilities:

  • Diverse data handling: AI query engines can access and process structured, semi-structured and unstructured data from multiple sources, including text, PDF, image, video and specialty data types.
  • Scalability: AI query engines can efficiently handle petabyte-scale data, making all enterprise knowledge available to AI applications quickly.
  • Accurate retrieval: AI query engines provide high-accuracy, high-performance embedding, vector search and reranking of knowledge from multiple sources.
  • Continuous learning: AI query engines can store and incorporate feedback from AI-powered applications, creating an AI data flywheel in which the feedback is used to refine models and increase the effectiveness of the applications over time.

Retrieval-augmented generation is a component of AI query engines. RAG uses the power of generative AI models to act as a natural language interface to data, allowing models to access and incorporate relevant information from large datasets during the response generation process.

Using RAG, any business or other organization can turn its technical information, policy manuals, videos and other data into useful knowledge bases. An AI query engine can then rely on these sources to support such areas as customer relations, employee training and developer productivity.
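As a rough illustration of the embed, search and generate pattern that RAG relies on, here is a minimal, self-contained sketch. The bag-of-words “embedding” and in-memory index are toy stand-ins for the trained embedding models and vector databases a production AI query engine would use:

```python
import math
import re
from collections import Counter

# Toy "embedding": a bag-of-words count vector. A production system would
# use a trained embedding model and a vector database instead.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Knowledge base: chunks extracted from manuals, PDFs, transcripts and so on.
chunks = [
    "Returns are accepted within 30 days with a receipt.",
    "Warranty claims require the original proof of purchase.",
    "Store hours are 9am to 9pm on weekdays.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Retrieval: rank the stored chunks against the user's question...
query = "Within how many days are returns accepted?"
q_vec = embed(query)
best_chunk, _ = max(index, key=lambda item: cosine(q_vec, item[1]))

# ...then hand the top-ranked chunks to a generative model as context.
prompt = f"Answer using this context:\n{best_chunk}\n\nQuestion: {query}"
print(prompt)
```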

Additional information-retrieval techniques and ways to store knowledge are in research and development, so the capabilities of an AI query engine are expected to rapidly evolve.

The Impact of AI Query Engines

Using AI query engines, enterprises can fully harness the power of AI agents to connect their workforces to vast amounts of enterprise knowledge, improve the accuracy and relevance of AI-generated responses, process and utilize previously untapped data sources, and create data-driven AI flywheels that continuously improve their AI applications.

Some examples include an AI virtual assistant that provides personalized, 24/7 customer service experiences, an AI agent for searching and summarizing video, an AI agent for analyzing software vulnerabilities or an AI research assistant.

Bridging the gap between raw data and AI-powered applications, AI query engines will grow to play a crucial role in helping organizations extract value from their data.

NVIDIA Blueprints can help enterprises get started connecting AI to their data. Learn more about NVIDIA Blueprints and try them in the NVIDIA API catalog.

  1.  IDC, Global DataSphere Forecast, 2024.

CES 2025: AI Advancing at ‘Incredible Pace,’ NVIDIA CEO Says

NVIDIA founder and CEO Jensen Huang kicked off CES 2025 with a 90-minute keynote that included new products to advance gaming, autonomous vehicles, robotics and agentic AI.

AI has been “advancing at an incredible pace,” he said before an audience of more than 6,000 packed into the Michelob Ultra Arena in Las Vegas.

“It started with perception AI — understanding images, words and sounds. Then generative AI — creating text, images and sound,” Huang said. Now, we’re entering the era of “physical AI, AI that can perceive, reason, plan and act.”

NVIDIA GPUs and platforms are at the heart of this transformation, Huang explained, enabling breakthroughs across industries, including gaming, robotics and autonomous vehicles (AVs).

Huang’s keynote showcased how NVIDIA’s latest innovations are enabling this new era of AI through several groundbreaking announcements.

Huang started off his talk by reflecting on NVIDIA’s three-decade journey. In 1999, NVIDIA invented the programmable GPU. Since then, modern AI has fundamentally changed how computing works, he said. “Every single layer of the technology stack has been transformed, an incredible transformation, in just 12 years.”

Revolutionizing Graphics With GeForce RTX 50 Series

“GeForce enabled AI to reach the masses, and now AI is coming home to GeForce,” Huang said.

With that, he introduced the NVIDIA GeForce RTX 5090 GPU, the most powerful GeForce RTX GPU so far, with 92 billion transistors and delivering 3,352 trillion AI operations per second (TOPS).

“Here it is — our brand-new GeForce RTX 50 series, Blackwell architecture,” Huang said, holding the blacked-out GPU aloft and noting how it’s able to harness advanced AI to enable breakthrough graphics. “The GPU is just a beast.”

“Even the mechanical design is a miracle,” Huang said, noting that the graphics card has two cooling fans.

More variations in the GPU series are coming. The GeForce RTX 5090 and GeForce RTX 5080 desktop GPUs are scheduled to be available Jan. 30. The GeForce RTX 5070 Ti and GeForce RTX 5070 desktop GPUs are slated to be available starting in February. Laptop GPUs are expected in March.

DLSS 4 introduces Multi Frame Generation, working in unison with the complete suite of DLSS technologies to boost performance by up to 8x. NVIDIA also unveiled NVIDIA Reflex 2, which can reduce PC latency by up to 75%.

The latest generation of DLSS can generate three additional frames for every frame we calculate, Huang explained. “As a result, we’re able to render at incredibly high performance, because AI does a lot less computation.”
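The frame-rate arithmetic implied by that statement is simple to sketch; the base frame rate below is an illustrative assumption:

```python
# Frame-pacing arithmetic for DLSS 4 Multi Frame Generation as described
# above: three AI-generated frames for every frame the GPU fully renders.
rendered_fps = 30           # assumed natively rendered frames per second
generated_per_rendered = 3  # extra AI-generated frames per rendered frame

displayed_fps = rendered_fps * (1 + generated_per_rendered)
print(f"{rendered_fps} rendered fps -> ~{displayed_fps} displayed fps (4x)")

# Combined with rendering at a lower internal resolution and upscaling
# (Super Resolution), total performance gains can approach the quoted 8x.
```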

RTX Neural Shaders use small neural networks to improve textures, materials and lighting in real-time gameplay. RTX Neural Faces and RTX Hair advance real-time face and hair rendering, using generative AI to animate the most realistic digital characters ever. RTX Mega Geometry increases the number of ray-traced triangles by up to 100x, providing more detail.

Advancing Physical AI With Cosmos

In addition to advancements in graphics, Huang introduced the NVIDIA Cosmos world foundation model platform, describing it as a game-changer for robotics and industrial AI.

The next frontier of AI is physical AI, Huang explained. He likened this moment to the transformative impact of large language models on generative AI.

“The ChatGPT moment for general robotics is just around the corner,” he explained.

Like large language models, world foundation models are fundamental to advancing robot and AV development, yet not all developers have the expertise and resources to train their own, Huang said.

Cosmos integrates generative models, tokenizers, and a video processing pipeline to power physical AI systems like AVs and robots.

Cosmos aims to bring the power of foresight and multiverse simulation to AI models, enabling them to simulate every possible future and select optimal actions.

Cosmos models ingest text, image or video prompts and generate virtual world states as videos, Huang explained. “Cosmos generations prioritize the unique requirements of AV and robotics use cases like real-world environments, lighting and object permanence.”

Leading robotics and automotive companies, including 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Virtual Incision, Waabi and XPENG, along with ridesharing giant Uber, are among the first to adopt Cosmos.

In addition, Hyundai Motor Group is adopting NVIDIA AI and Omniverse to create safer, smarter vehicles, supercharge manufacturing and deploy cutting-edge robotics.

Cosmos is openly licensed and available on GitHub.

Empowering Developers With AI Foundation Models

Beyond robotics and autonomous vehicles, NVIDIA is empowering developers and creators with AI foundation models.

Huang introduced AI foundation models for RTX PCs that supercharge digital humans, content creation, productivity and development.

“These AI models run in every single cloud because NVIDIA GPUs are now available in every single cloud,” Huang said. “It’s available in every single OEM, so you could literally take these models, integrate them into your software packages, create AI agents and deploy them wherever the customers want to run the software.”

These models — offered as NVIDIA NIM microservices — are accelerated by the new GeForce RTX 50 Series GPUs.

The GPUs have what it takes to run these swiftly, adding support for FP4 computing, boosting AI inference by up to 2x and enabling generative AI models to run locally with a smaller memory footprint compared with previous-generation hardware.

Huang explained the potential of new tools for creators: “We’re creating a whole bunch of blueprints that our ecosystem could take advantage of. All of this is completely open source, so you could take it and modify the blueprints.”

Top PC manufacturers and system builders are launching NIM-ready RTX AI PCs with GeForce RTX 50 Series GPUs. “AI PCs are coming to a home near you,” Huang said.

While these tools bring AI capabilities to personal computing, NVIDIA is also advancing AI-driven solutions in the automotive industry, where safety and intelligence are paramount.

Innovations in Autonomous Vehicles

Huang announced the NVIDIA DRIVE Hyperion AV platform, built on the new NVIDIA AGX Thor system-on-a-chip (SoC), designed for generative AI models and delivering advanced functional safety and autonomous driving capabilities.

“The autonomous vehicle revolution is here,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.”

DRIVE Hyperion, the first end-to-end AV platform, integrates advanced SoCs, a sensor suite, and an active safety and level 2 driving stack for next-gen vehicles, with adoption by automotive safety pioneers such as Mercedes-Benz, JLR and Volvo Cars.

Huang highlighted the critical role of synthetic data in advancing autonomous vehicles. Real-world data is limited, so synthetic data is essential for training the autonomous vehicle data factory, he explained.

Powered by NVIDIA Omniverse AI models and Cosmos, this approach “generates synthetic driving scenarios that enhance training data by orders of magnitude.”

Using Omniverse and Cosmos, NVIDIA’s AI data factory can scale “hundreds of drives into billions of effective miles,” Huang said, dramatically increasing the datasets needed for safe and advanced autonomous driving.

“We are going to have mountains of training data for autonomous vehicles,” he added.

Toyota, the world’s largest automaker, will build its next-generation vehicles on the NVIDIA DRIVE AGX Orin, running the safety-certified NVIDIA DriveOS operating system, Huang said.

“Just as computer graphics was revolutionized at such an incredible pace, you’re going to see the pace of AV development increasing tremendously over the next several years,” Huang said. These vehicles will offer functionally safe, advanced driving assistance capabilities.

Agentic AI and Digital Manufacturing

NVIDIA and its partners have launched AI Blueprints for agentic AI, including PDF-to-podcast for efficient research, and video search and summarization for analyzing large quantities of video and images — enabling developers to build, test and run AI agents anywhere.

AI Blueprints empower developers to deploy custom agents for automating enterprise workflows. This new category of partner blueprints integrates NVIDIA AI Enterprise software, including NVIDIA NIM microservices and NVIDIA NeMo, with platforms from leading providers like CrewAI, Daily, LangChain, LlamaIndex and Weights & Biases.

Additionally, Huang announced the new Llama Nemotron family of open large language models.

Developers can use NVIDIA NIM microservices to build AI agents for tasks like customer support, fraud detection, and supply chain optimization.

Available as NVIDIA NIM microservices, the models can supercharge AI agents on any accelerated system.

NVIDIA NIM microservices streamline video content management, boosting efficiency and audience engagement in the media industry.

Moving beyond digital applications, NVIDIA’s innovations are paving the way for AI to revolutionize the physical world with robotics.

“All of the enabling technologies that I’ve been talking about are going to make it possible for us in the next several years to see very rapid breakthroughs, surprising breakthroughs, in general robotics,” Huang said.

In manufacturing, the NVIDIA Isaac GR00T Blueprint for synthetic motion generation will help developers generate exponentially large synthetic motion data to train their humanoids using imitation learning.

Huang emphasized the importance of training robots efficiently, using NVIDIA’s Omniverse to generate millions of synthetic motions for humanoid training.

The Mega blueprint enables large-scale simulation of robot fleets and has been adopted by leaders like Accenture and KION for warehouse automation.

These AI tools set the stage for NVIDIA’s latest innovation: a personal AI supercomputer called Project DIGITS.

NVIDIA Unveils Project DIGITS

Putting NVIDIA Grace Blackwell on every desk and at every AI developer’s fingertips, Huang unveiled NVIDIA Project DIGITS.

“I have one more thing that I want to show you,” Huang said. “None of this would be possible if not for this incredible project that we started about a decade ago. Inside the company, it was called Project DIGITS — deep learning GPU intelligence training system.”

Huang highlighted the legacy of NVIDIA’s AI supercomputing journey, telling the story of how in 2016 he delivered the first NVIDIA DGX system to OpenAI. “And obviously, it revolutionized artificial intelligence computing.”

The new Project DIGITS takes this mission further. “Every software engineer, every engineer, every creative artist — everybody who uses computers today as a tool — will need an AI supercomputer,” Huang said.

Huang revealed that Project DIGITS, powered by the GB10 Grace Blackwell Superchip, represents NVIDIA’s smallest yet most powerful AI supercomputer. “This is NVIDIA’s latest AI supercomputer,” Huang said, showcasing the device. “It runs the entire NVIDIA AI stack — all of NVIDIA software runs on this. DGX Cloud runs on this.”

The compact yet powerful Project DIGITS is expected to be available in May.

A Year of Breakthroughs

“It’s been an incredible year,” Huang said as he wrapped up the keynote, highlighting NVIDIA’s major achievements: Blackwell systems, physical AI foundation models, and breakthroughs in agentic AI and robotics.

“I want to thank all of you for your partnership,” Huang said.

See notice regarding software product information.

NVIDIA Unveils ‘Mega’ Omniverse Blueprint for Building Industrial Robot Fleet Digital Twins

According to Gartner, worldwide end-user spending on all IT products in 2024 was $5 trillion. This industry is built on a computing fabric of electrons, is fully software-defined, accelerated — and now generative AI-enabled. While huge, it’s a fraction of the larger physical industrial market that relies on the movement of atoms.

Today’s 10 million factories, nearly 200,000 warehouses and 40 million miles of highways form the “computing” fabric of our physical world. But that vast network of production facilities and distribution centers is still laboriously and manually designed, operated and optimized.

In warehousing and distribution, operators face highly complex decision optimization problems — matrices of variables and interdependencies across human workers, robotic and agentic systems and equipment. Unlike the IT industry, the physical industrial market is still waiting for its own software-defined moment.

That moment is coming.

Choreographed integration of human workers, robotic and agentic systems and equipment in a facility digital twin. Image courtesy of Accenture, KION Group.

NVIDIA today at CES announced “Mega,” an Omniverse Blueprint for developing, testing and optimizing physical AI and robot fleets at scale in a digital twin before deployment into real-world facilities.

Advanced warehouses and factories use fleets of hundreds of autonomous mobile robots, robotic arm manipulators and humanoids working alongside people. Implementing increasingly complex systems of sensors and robot autonomy requires coordinated training in simulation to optimize operations, help ensure safety and avoid disruptions.

Mega offers enterprises a reference architecture of NVIDIA accelerated computing, AI, NVIDIA Isaac and NVIDIA Omniverse technologies for developing and testing digital twins where the AI-powered robot brains that drive robots, video analytics AI agents, equipment and more can be validated at enormous complexity and scale. The new framework brings software-defined capabilities to physical facilities, enabling continuous development, testing, optimization and deployment.

Developing AI Brains With World Simulator for Autonomous Orchestration

With Mega-driven digital twins, including a world simulator that coordinates all robot activities and sensor data, enterprises can continuously update facility robot brains with intelligent routes and tasks, driving operational efficiencies.

The blueprint uses Omniverse Cloud Sensor RTX APIs that enable robotics developers to render sensor data from any type of intelligent machine in the factory, simultaneously, for high-fidelity large-scale sensor simulation. This allows robots to be tested in an infinite number of scenarios within the digital twin, using synthetic data in a software-in-the-loop pipeline with NVIDIA Isaac ROS.

Operational efficiency is gained with sensor simulation. Image courtesy of Accenture, KION Group.

Supply chain solutions company KION Group is collaborating with Accenture and NVIDIA as the first to adopt Mega for optimizing operations in retail, consumer packaged goods, parcel services and more.

Jensen Huang, founder and CEO of NVIDIA, offered a glimpse into the future of this collaboration on stage at CES, demonstrating how enterprises can navigate a complex web of decisions using the Mega Omniverse Blueprint.

“At KION, we leverage AI-driven solutions as an integral part of our strategy to optimize our customers’ supply chains and increase their productivity,” said Rob Smith, CEO of KION GROUP AG. “With NVIDIA’s AI leadership and Accenture’s expertise in digital technologies, we are reinventing warehouse automation. Bringing these strong partners together, we are creating a vision for future warehouses that are part of a smart agile system, evolve with the world around them and can handle nearly any supply chain challenge.”

Creating Operational Efficiencies With Mega Omniverse Blueprint

Creating operational efficiencies, KION and Accenture are embracing the Mega Omniverse Blueprint to build next-generation supply chains for KION and its customers. KION can capture and digitalize a warehouse as a digital twin in Omniverse using computer-aided design files, video, lidar, image and AI-generated data.

KION uses the Omniverse digital twin as a virtual training and testing environment for its industrial AI’s robot brains, powered by NVIDIA Isaac, tapping into smart cameras, forklifts, robotic equipment and digital humans. Integrating the Omniverse digital twin, KION’s warehouse management software can create and assign missions for robot brains, like moving a load from one place to another.

Graphical data is easily introduced into the Omniverse viewport, showcasing productivity and throughput among other desired metrics. Image courtesy of Accenture, KION Group.

These simulated robots can carry out tasks by perceiving and reasoning about their environments, planning their next motions and then taking actions that are simulated in the digital twin. The robot brains perceive the results and decide their next actions, and this cycle continues, with Mega precisely tracking the state and position of all the assets in the digital twin.

Delivering Services With Mega for Facilities Everywhere

Accenture, a global leader in professional services, is adopting Mega as part of its AI Refinery for Simulation and Robotics, built on NVIDIA AI and Omniverse, to help organizations use AI simulation to reinvent factory and warehouse design and ongoing operations.

With the blueprint, Accenture is delivering new services — including Custom Robotics and Manufacturing Foundation Model Training and Finetuning; Intelligent Humanoid Robotics; and AI-Powered Industrial Manufacturing and Logistics Simulation and Optimization — to expand the power of physical AI and simulation to the world’s factories and warehouse operators. Now, for example, an organization can explore numerous options for its warehouse before choosing and implementing the best one.

“As organizations enter the age of industrial AI, we are helping them use AI-powered simulation and autonomous robots to reinvent the process of designing new facilities and optimizing existing operations,” said Julie Sweet, chair and CEO of Accenture. “Our collaboration with NVIDIA and KION will help our clients plan their operations in digital twins, where they can run hundreds of options and quickly select the best for current or changing market conditions, such as seasonal market demand or workforce availability. This represents a new frontier of value for our clients to achieve using technology, data and AI.”

Join NVIDIA at CES

See notice regarding software product information.

Building Smarter Autonomous Machines: NVIDIA Announces Early Access for Omniverse Sensor RTX

Generative AI and foundation models let autonomous machines generalize beyond the operational design domains on which they’ve been trained. Using new AI techniques such as tokenization and large language and diffusion models, developers and researchers can now address longstanding hurdles to autonomy.

These larger models require massive amounts of diverse data for training, fine-tuning and validation. But collecting such data — including from rare edge cases and potentially hazardous scenarios, like a pedestrian crossing in front of an autonomous vehicle (AV) at night or a human entering a welding robot work cell — can be incredibly difficult and resource-intensive.

To help developers fill this gap, NVIDIA Omniverse Cloud Sensor RTX APIs enable physically accurate sensor simulation for generating datasets at scale. The application programming interfaces (APIs) are designed to support sensors commonly used for autonomy — including cameras, radar and lidar — and can integrate seamlessly into existing workflows to accelerate the development of autonomous vehicles and robots of every kind.

Omniverse Sensor RTX APIs are now available to select developers in early access. Organizations such as Accenture, Foretellix, MITRE and Mcity are integrating these APIs via domain-specific blueprints to provide end customers with the tools they need to deploy the next generation of industrial manufacturing robots and self-driving cars.

Powering Industrial AI With Omniverse Blueprints

In complex environments like factories and warehouses, robots must be orchestrated to safely and efficiently work alongside machinery and human workers. All those moving parts present a massive challenge when designing, testing or validating operations while avoiding disruptions.

Mega is an Omniverse Blueprint that offers enterprises a reference architecture of NVIDIA accelerated computing, AI, NVIDIA Isaac and NVIDIA Omniverse technologies. Enterprises can use it to develop digital twins and test AI-powered robot brains that drive robots, cameras, equipment and more to handle enormous complexity and scale.

Integrating Omniverse Sensor RTX, the blueprint lets robotics developers simultaneously render sensor data from any type of intelligent machine in a factory for high-fidelity, large-scale sensor simulation.

With the ability to test operations and workflows in simulation, manufacturers can save considerable time and investment, and improve efficiency in entirely new ways.

International supply chain solutions company KION Group and Accenture are using the Mega blueprint to build Omniverse digital twins that serve as virtual training and testing environments for industrial AI’s robot brains, tapping into data from smart cameras, forklifts, robotic equipment and digital humans.

The robot brains perceive the simulated environment with physically accurate sensor data rendered by the Omniverse Sensor RTX APIs. They use this data to plan and act, with each action precisely tracked with Mega, alongside the state and position of all the assets in the digital twin. With these capabilities, developers can continuously build and test new layouts before they’re implemented in the physical world.

Driving AV Development and Validation

Autonomous vehicles have been under development for over a decade, but barriers in acquiring the right training and validation data and slow iteration cycles have hindered large-scale deployment.

To address this need for sensor data, companies are harnessing the NVIDIA Omniverse Blueprint for AV simulation, a reference workflow that enables physically accurate sensor simulation. The workflow uses Omniverse Sensor RTX APIs to render the camera, radar and lidar data necessary for AV development and validation.

AV toolchain provider Foretellix has integrated the blueprint into its Foretify AV development toolchain to transform object-level simulation into physically accurate sensor simulation.

The Foretify toolchain can generate any number of testing scenarios simultaneously. By adding sensor simulation capabilities to these scenarios, Foretify can now enable developers to evaluate the completeness of their AV development, as well as train and test at the levels of fidelity and scale needed to achieve large-scale and safe deployment. In addition, Foretellix will use the newly announced NVIDIA Cosmos platform to generate an even greater diversity of scenarios for verification and validation.

Nuro, an autonomous driving technology provider with one of the largest level 4 deployments in the U.S., is using the Foretify toolchain to train, test and validate its self-driving vehicles before deployment.

In addition, research organization MITRE is collaborating with the University of Michigan’s Mcity testing facility to build a digital AV validation framework for regulatory use, including a digital twin of Mcity’s 32-acre proving ground for autonomous vehicles. The project uses the AV simulation blueprint to render physically accurate sensor data at scale in the virtual environment, boosting training effectiveness.

The future of robotics and autonomy is coming into sharp focus, thanks to the power of high-fidelity sensor simulation. Learn more about these solutions at CES by visiting Accenture at Ballroom F at the Venetian and Foretellix at booth 4016 in the West Hall of the Las Vegas Convention Center.

Learn more about the latest in automotive and generative AI technologies by joining NVIDIA at CES.

See notice regarding software product information.

Now See This: NVIDIA Launches Blueprint for AI Agents That Can Analyze Video

The next big moment in AI is in sight — literally.

Today, more than 1.5 billion enterprise-level cameras deployed worldwide are generating roughly 7 trillion hours of video per year. Yet only a fraction of it gets analyzed.

It’s estimated that less than 1% of video from industrial cameras is watched live by humans, meaning critical operational incidents can go largely unnoticed.

This comes at a high cost. For example, manufacturers are losing trillions of dollars annually to poor product quality or defects that they could’ve spotted earlier, or even predicted, by using AI agents that can perceive, analyze and help humans take action.

Interactive AI agents with built-in visual perception capabilities can serve as always-on video analysts, helping factories run more efficiently, bolster worker safety, keep traffic running smoothly and even up an athlete’s game.

To accelerate the creation of such agents, NVIDIA today announced early access to a new version of the NVIDIA AI Blueprint for video search and summarization. Built on top of the NVIDIA Metropolis platform — and now supercharged by NVIDIA Cosmos Nemotron vision language models (VLMs), NVIDIA Llama Nemotron large language models (LLMs) and NVIDIA NeMo Retriever — the blueprint provides developers with the tools to build and deploy AI agents that can analyze large quantities of video and image content.

The blueprint integrates the NVIDIA AI Enterprise software platform — which includes NVIDIA NIM microservices for VLMs, LLMs and advanced AI frameworks for retrieval-augmented generation — to enable batch video processing that’s 30x faster than watching it in real time.
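To put that multiplier in perspective, here is a quick sketch of the throughput arithmetic; the camera counts and footage lengths are illustrative assumptions:

```python
# Throughput sketch for the quoted 30x-faster-than-real-time batch
# processing. Camera counts and footage lengths are illustrative.
SPEEDUP = 30  # processing runs ~30x faster than real-time playback

def processing_hours(footage_hours: float) -> float:
    """Hours needed to process a given amount of footage at 30x."""
    return footage_hours / SPEEDUP

print(f"One camera-day (24 h): {processing_hours(24):.1f} h to process")
print(f"100 camera-days: {processing_hours(24 * 100):.0f} h to process")
```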

The blueprint contains several agentic AI features — such as chain-of-thought reasoning, task planning and tool calling — that can help developers streamline the creation of powerful and diverse visual agents to solve a range of problems.

AI agents with video analysis abilities can be combined with other agents with different skill sets to enable even more sophisticated agentic AI services. Enterprises have the flexibility to build and deploy their AI agents from the edge to the cloud.

How Video Analyst AI Agents Can Help Industrial Businesses 

AI agents with visual perception and analysis skills can be fine-tuned to help businesses with industrial operations by:

  • Increasing productivity and reducing waste: Agents can help ensure standard operating procedures are followed during complex industrial processes like product assembly. They can also be fine-tuned to carefully watch and understand nuanced actions, and the sequence in which they’re implemented.
  • Boosting asset management efficiency through better space utilization: Agents can help optimize inventory storage in warehouses by performing 3D volume estimation and centralizing understanding across various camera streams.
  • Improving safety through auto-generation of incident reports and summaries: Agents can process huge volumes of video and summarize it into contextually informative reports of accidents. They can also help ensure personal protective equipment compliance in factories, improving worker safety in industrial settings.
  • Preventing accidents and production problems: AI agents can identify atypical activity to quickly mitigate operational and safety risks, whether in a warehouse, factory or airport, or at a traffic intersection or other municipal setting.
  • Learning from the past: Agents can search through operations video archives, find relevant information from the past and use it to solve problems or create new processes.

Video Analysts for Sports, Entertainment and More

Another industry where video analysis AI agents stand to make a mark is sports — a $500 billion market worldwide, with hundreds of billions in projected growth over the next several years.

Coaches, teams and leagues — whether professional or amateur — rely on video analytics to evaluate and enhance player performance, prioritize safety and boost fan engagement through player analytics platforms and data visualization. With visually perceptive AI agents, athletes now have unprecedented access to deeper insights and opportunities for improvement.

During his CES opening keynote, NVIDIA founder and CEO Jensen Huang demonstrated an AI video analytics agent that assessed the fastball pitching skills of an amateur baseball player compared with a professional’s. Using video captured from the ceremonial first pitch that Huang threw for the San Francisco Giants baseball team, the video analytics AI agent was able to suggest areas for improvement.

The $3 trillion media and entertainment industry is also poised to benefit from video analyst AI agents. Through the NVIDIA Media2 initiative, these agents will help drive the creation of smarter, more tailored and more impactful content that can adapt to individual viewer preferences.

Worldwide Adoption and Availability 

Partners from around the world are integrating the blueprint for building AI agents for video analysis into their own developer workflows, including Accenture, Centific, Deloitte, EY, Infosys, Linker Vision, Pegatron, TATA Consultancy Services (TCS), Telit Cinterion and VAST.

Apply for early access to the NVIDIA Blueprint for video search and summarization.

See notice regarding software product information.

Editor’s note: Omdia is the source for the figure of 1.5 billion enterprise-level cameras deployed.
