What Is Agentic AI?

AI chatbots use generative AI to provide responses based on a single interaction. A person makes a query and the chatbot uses natural language processing to reply.

The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.

Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like optimizing supply chains, analyzing cybersecurity vulnerabilities and assisting doctors with time-consuming work.

How Does Agentic AI Work?

Agentic AI uses a four-step process for problem-solving:

  1. Perceive: AI agents gather and process data from various sources, such as sensors, databases and digital interfaces. This involves extracting meaningful features, recognizing objects or identifying relevant entities in the environment.
  2. Reason: A large language model acts as the orchestrator, or reasoning engine, that understands tasks, generates solutions and coordinates specialized models for specific functions like content creation, vision processing or recommendation systems. This step uses techniques like retrieval-augmented generation (RAG) to access proprietary data sources and deliver accurate, relevant outputs.
  3. Act: By integrating with external tools and software via application programming interfaces, agentic AI can quickly execute tasks based on the plans it has formulated. Guardrails can be built into AI agents to help ensure they execute tasks correctly. For example, a customer service AI agent may be able to process claims up to a certain amount, while claims above the amount would have to be approved by a human.
  4. Learn: Agentic AI continuously improves through a feedback loop, or “data flywheel,” where the data generated from its interactions is fed into the system to enhance models. This ability to adapt and become more effective over time offers businesses a powerful tool for driving better decision-making and operational efficiency.
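The four steps above can be sketched as a simple loop. This is an illustrative sketch only, with stub functions standing in for real perception models, an LLM orchestrator and tool integrations; the function names and toy data are hypothetical, not an actual framework API.

```python
def perceive(sources):
    """Gather raw observations from data sources (sensors, databases, APIs)."""
    return [source() for source in sources]

def reason(observations, memory):
    """Stand-in for the LLM orchestrator: turn observations into a plan."""
    return {"action": "summarize", "input": observations, "context": memory}

def act(plan, tools):
    """Execute the plan by calling an external tool from a registry."""
    return tools[plan["action"]](plan["input"])

def learn(memory, observations, result):
    """Feed interaction data back into memory (the 'data flywheel')."""
    memory.append({"observations": observations, "result": result})
    return memory

def agent_step(sources, tools, memory):
    """One perceive-reason-act-learn cycle."""
    observations = perceive(sources)
    plan = reason(observations, memory)
    result = act(plan, tools)
    memory = learn(memory, observations, result)
    return result, memory

# Toy run: two data sources and one tool.
sources = [lambda: "inventory low on SKU-42", lambda: "3 orders pending"]
tools = {"summarize": lambda items: "; ".join(items)}
result, memory = agent_step(sources, tools, memory=[])
print(result)  # prints "inventory low on SKU-42; 3 orders pending"
```

In a real system, `reason` would be an LLM call, `tools` would wrap APIs behind guardrails, and `memory` would feed a training or fine-tuning pipeline rather than a plain list.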

Fueling Agentic AI With Enterprise Data

Across industries and job functions, generative AI is transforming organizations by turning vast amounts of data into actionable knowledge, helping employees work more efficiently.

AI agents build on this potential by accessing diverse data through accelerated AI query engines, which process, store and retrieve information to enhance generative AI models. A key technique for achieving this is RAG, which allows AI to tap into a broader range of data sources.
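The core RAG pattern is: retrieve the most relevant documents for a query, then prepend them to the model prompt as grounding context. The sketch below is a deliberately minimal illustration; production systems use vector embeddings and a similarity index rather than the naive word-overlap scoring shown here, and the corpus is invented for the example.

```python
# Toy document store standing in for an enterprise knowledge base.
documents = [
    "Warranty claims under $500 are auto-approved.",
    "Shipping takes 3-5 business days within the EU.",
    "Returns are accepted within 30 days of delivery.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive word overlap with the query (embeddings in practice)."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Augment the user query with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

prompt = build_prompt("How long does shipping take?", documents)
```

The resulting prompt grounds the model's answer in the shipping document, which is what lets RAG deliver accurate, organization-specific responses without retraining the model.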

Over time, AI agents learn and improve by creating a data flywheel, where data generated through interactions is fed back into the system, refining models and increasing their effectiveness.

The end-to-end NVIDIA AI platform, including NVIDIA NeMo microservices, provides the ability to manage and access data efficiently, which is crucial for building responsive agentic AI applications.

Agentic AI in Action

The potential applications of agentic AI are vast, limited only by creativity and expertise. From simple tasks like generating and distributing content to more complex use cases such as orchestrating enterprise software, AI agents are transforming industries.

Agentic AI Use Cases

Customer Service: AI agents are improving customer support by enhancing self-service capabilities and automating routine communications. Over half of service professionals report significant improvements in customer interactions, reducing response times and boosting satisfaction.

There’s also growing interest in digital humans — AI-powered agents that embody a company’s brand and offer lifelike, real-time interactions to help sales representatives answer customer queries or solve issues directly when call volumes are high.
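The claims-threshold guardrail described in the "Act" step earlier can be expressed as a simple routing rule around the agent's tool call. The threshold value and field names below are hypothetical, chosen only to illustrate the human-in-the-loop pattern.

```python
APPROVAL_LIMIT = 1000  # illustrative threshold, in dollars

def handle_claim(amount):
    """Guardrail: the agent auto-processes small claims and escalates large ones."""
    if amount <= APPROVAL_LIMIT:
        return {"status": "processed", "by": "agent", "amount": amount}
    return {"status": "escalated", "by": "human_review", "amount": amount}

print(handle_claim(250)["status"])   # prints "processed"
print(handle_claim(5000)["status"])  # prints "escalated"
```

Keeping the guardrail outside the model, as ordinary deterministic code, means the limit is enforced even if the model's reasoning goes wrong.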

Content Creation: Agentic AI can help quickly create high-quality, personalized marketing content. Generative AI agents can save marketers an average of three hours per content piece, allowing them to focus on strategy and innovation. By streamlining content creation, businesses can stay competitive while improving customer engagement.

Software Engineering: AI agents are boosting developer productivity by automating repetitive coding tasks. It’s projected that by 2030 AI could automate up to 30% of work hours, freeing developers to focus on more complex challenges and drive innovation.

Healthcare: For doctors analyzing vast amounts of medical and patient data, AI agents can distill critical information to help them make better-informed care decisions. Automating administrative tasks and capturing clinical notes in patient appointments reduces the burden of time-consuming tasks, allowing doctors to focus on developing a doctor-patient connection.

AI agents can also provide 24/7 support, offering information on prescribed medication usage, appointment scheduling and reminders, and more to help patients adhere to treatment plans.

How to Get Started

With its ability to plan and interact with a wide variety of tools and software, agentic AI marks the next chapter of artificial intelligence, offering the potential to enhance productivity and revolutionize the way organizations operate.

To accelerate the adoption of generative AI-powered applications and agents, NVIDIA NIM Agent Blueprints provide sample applications, reference code, sample data, tools and comprehensive documentation.

NVIDIA partners, including Accenture, are helping enterprises use agentic AI with solutions built with NIM Agent Blueprints.

Visit ai.nvidia.com to learn more about the tools and software NVIDIA offers to help enterprises build their own AI agents.

NVIDIA Brings Generative AI Tools, Simulation and Perception Workflows to ROS Developer Ecosystem

At ROSCon in Odense, one of Denmark’s oldest cities and a hub of automation, NVIDIA and its robotics ecosystem partners announced generative AI tools, simulation and perception workflows for Robot Operating System (ROS) developers.

Among the reveals were new generative AI nodes and workflows for ROS developers deploying to the NVIDIA Jetson platform for edge AI and robotics. Generative AI enables robots to perceive and understand the context of their surroundings, communicate naturally with humans and make adaptive decisions autonomously.

Generative AI Comes to ROS Community

ReMEmbR, built on ROS 2, uses generative AI to enhance robotic reasoning and action. It combines large language models (LLMs), vision language models (VLMs) and retrieval-augmented generation to allow robots to build and query long-term semantic memories and improve their ability to navigate and interact with their environments.

The speech recognition capability is powered by the WhisperTRT ROS 2 node. This node uses NVIDIA TensorRT to optimize OpenAI’s Whisper model to enable low-latency inference on NVIDIA Jetson, resulting in responsive human-robot interaction.

The ROS 2 robots with voice control project uses the NVIDIA Riva ASR-TTS service to make robots understand and respond to spoken commands. The NASA Jet Propulsion Laboratory independently demonstrated ROSA, an AI-powered agent for ROS, operating on its Nebula-SPOT robot and the NVIDIA Nova Carter robot in NVIDIA Isaac Sim.

At ROSCon, Canonical is demonstrating NanoOWL, a zero-shot object detection model running on the NVIDIA Jetson Orin Nano system-on-module. It allows robots to identify a broad range of objects in real time, without relying on predefined categories.

Developers can get started today with ROS 2 Nodes for Generative AI, which brings NVIDIA Jetson-optimized LLMs and VLMs to enhance robot capabilities.

Enhancing ROS Workflows With a ‘Sim-First’ Approach

Simulation is critical to safely test and validate AI-enabled robots before deployment. NVIDIA Isaac Sim, a robotics simulation platform built on OpenUSD, provides ROS developers a virtual environment to test robots by easily connecting them to their ROS packages. A new Beginner’s Guide to ROS 2 Workflows With Isaac Sim, which illustrates the end-to-end workflow for robot simulation and testing, is now available.

Foxglove, a member of the NVIDIA Inception program for startups, demonstrated an integration that helps developers visualize and debug simulation data in real time using Foxglove’s custom extension, built on Isaac Sim.

New Capabilities for Isaac ROS 3.2

NVIDIA Isaac ROS, built on the open-source ROS 2 software framework, is a suite of accelerated computing packages and AI models for robotics development. The upcoming 3.2 release enhances robot perception, manipulation and environment mapping.

Key improvements to NVIDIA Isaac Manipulator include new reference workflows that integrate FoundationPose and cuMotion to accelerate development of pick-and-place and object-following pipelines in robotics.
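A pick-and-place reference workflow of this kind composes perception and planning stages. The sketch below shows only the pipeline shape, with stub functions standing in for FoundationPose (object pose estimation) and cuMotion (motion planning); none of it reflects the actual Isaac APIs.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def estimate_pose(object_name, scene):
    """Stub pose estimator: look the object up in a known scene dict."""
    return scene.get(object_name)

def plan_motion(start, target, step=0.5):
    """Stub planner: straight-line waypoints from start to target."""
    span = max(abs(target.x - start.x), abs(target.y - start.y),
               abs(target.z - start.z))
    n = max(1, int(span / step))
    return [Pose(start.x + (target.x - start.x) * i / n,
                 start.y + (target.y - start.y) * i / n,
                 start.z + (target.z - start.z) * i / n)
            for i in range(1, n + 1)]

def pick(object_name, scene, gripper_pose):
    """Perceive -> plan -> (pretend to) execute."""
    target = estimate_pose(object_name, scene)
    if target is None:
        return None
    return plan_motion(gripper_pose, target)

scene = {"widget": Pose(1.0, 0.0, 0.5)}
path = pick("widget", scene, Pose(0.0, 0.0, 0.5))
print(len(path), path[-1])
```

The value of a reference workflow is exactly this wiring: once the perceive/plan/execute seams exist, the stubs can be replaced by accelerated models without restructuring the application.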

NVIDIA Isaac Perceptor also gains a new visual SLAM reference workflow, along with enhanced multi-camera detection and 3D reconstruction, to improve an autonomous mobile robot’s (AMR) environmental awareness and performance in dynamic settings like warehouses.

Partners Adopting NVIDIA Isaac 

Robotics companies are integrating NVIDIA Isaac accelerated libraries and AI models into their platforms.

  • Universal Robots, a Teradyne Robotics company, launched a new AI Accelerator toolkit to enable the development of AI-powered cobot applications.
  • Miso Robotics is using Isaac ROS to speed up its AI-powered robotic french fry-making Flippy Fry Station and drive advances in efficiency and accuracy in food service automation.
  • Wheel.me is partnering with RGo Robotics and NVIDIA to create a production-ready AMR using Isaac Perceptor.
  • Main Street Autonomy is using Isaac Perceptor to streamline sensor calibration.
  • Orbbec announced its Perceptor Developer Kit, an out-of-the-box AMR solution for Isaac Perceptor.
  • LIPS Corporation has introduced a multi-camera perception devkit for improved AMR navigation.
  • Canonical highlighted a fully certified Ubuntu environment for ROS developers, offering long-term support out of the box.

Connecting With Partners at ROSCon

ROS community members and partners, including Canonical, Ekumen, Foxglove, Intrinsic, Open Navigation, Siemens and Teradyne Robotics, will be in Denmark presenting workshops, talks, booth demos and sessions. Highlights include:

  • “Nav2 User Meetup” Birds of a Feather session with Steve Macenski from Open Navigation LLC
  • “ROS in Large-Scale Factory Automation” with Michael Gentner from BMW AG and Carsten Braunroth from Siemens AG
  • “Integrating AI in Robot Manipulation Workflows” Birds of a Feather session with Kalyan Vadrevu from NVIDIA
  • “Accelerating Robot Learning at Scale in Simulation” Birds of a Feather session with Markus Wuensch from NVIDIA
  • “On Use of Nav2 Docking” with Open Navigation’s Macenski

Additionally, Teradyne Robotics and NVIDIA are co-hosting a lunch and evening reception on Tuesday, Oct. 22, in Odense, Denmark.

The Open Source Robotics Foundation (OSRF) organizes ROSCon. NVIDIA is a supporter of Open Robotics, the umbrella organization for OSRF and all its initiatives.

For the latest updates, visit the ROSCon page.

