Imbue’s Kanjun Qiu Shares Insights on How to Build Smarter AI Agents

Imagine a future in which everyone is empowered to build and use their own AI agents. That future may not be far off, as new software is infused with intelligence through collaborative AI systems that work alongside users rather than merely automating tasks.

In this episode of the NVIDIA AI Podcast, Kanjun Qiu, CEO of Imbue, discusses the rise of AI agents, drawing parallels between the personal computer revolution of the late 1970s and 80s and today’s AI agent transformation. She details Imbue’s approach to building reasoning capabilities into its products, the challenges of verifying the correctness of AI outputs and how Imbue is focusing on post-training and fine-tuning to improve verification capabilities.

Learn more about Imbue, and read more about AI agents, including how virtual assistants can enhance customer service experiences.

And hear more about the future of AI and graphics by tuning in to the CES keynote, delivered by NVIDIA founder and CEO Jensen Huang live in Las Vegas on Monday, Jan. 6, at 6:30 p.m. PT.

Time Stamps

1:21 – What AI agents are, and Imbue’s approach to them.

9:00 – Where are AI agents being used the most today?

17:05 – Why building a good user experience around agents requires invention.

26:28 – How reasoning and verification capabilities factor into Imbue’s products.

You Might Also Like… 

Zoom CTO Xuedong “XD” Huang on How AI Revolutionizes Productivity 

Zoom is now transforming into an AI-first platform. CTO Xuedong Huang discusses Zoom’s AI Companion 2.0 and the company’s “federated AI” strategy, which aims to integrate multiple large language models to enhance productivity and collaboration.

How Roblox Uses Generative AI to Enhance User Experiences

Roblox is enhancing its colorful online platform with generative AI to improve user safety and inclusivity through features like automated chat filters and real-time text translation. Anupam Singh, VP of AI and growth engineering at Roblox, explores how AI coding assistants are helping creators focus more on creative expression.

Rendered.ai CEO Nathan Kundtz on Using AI to Build Better AI

Data is crucial for training AI and machine learning systems, and synthetic data offers a solution to the challenges of compiling real-world data. Nathan Kundtz, founder and CEO of Rendered.ai, discusses how his company’s platform generates synthetic data to enhance AI models.

Subscribe to the AI Podcast

Get the AI Podcast through Apple Podcasts, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Read More

AI at Your Service: Digital Avatars With Speech Capabilities Offer Interactive Customer Experiences

Editor’s note: This post is part of the AI On blog series, which explores the latest techniques and real-world applications of agentic AI, chatbots and copilots. The series will also highlight the NVIDIA software and hardware powering advanced AI agents, which form the foundation of AI query engines that gather insights and perform tasks to transform everyday experiences and reshape industries.

To enhance productivity and upskill workers, organizations worldwide are seeking ways to provide consistent, around-the-clock customer service with greater speed, accuracy and scale.

Intelligent AI agents offer one such solution. They deliver advanced problem-solving capabilities and integrate vast and disparate sources of data to understand and respond to natural language.

Powered by generative AI and agentic AI, digital avatars are boosting efficiency across industries like healthcare, telecom, manufacturing, retail and more. According to Gartner, by 2028, 45% of organizations with more than 500 employees will use employee AI avatars to expand the capacity of human capital.1

From educating prospects on policies to giving customers personalized solutions, AI is helping organizations optimize revenue streams and elevate employee knowledge and productivity.

Where Context-Aware AI Avatars Are Most Impactful

Staying ahead in a competitive, evolving market requires continuous learning and analysis. AI avatars — also referred to as digital humans — are addressing key concerns and enhancing operations across industries.

One key benefit of agentic digital human technology is the ability to offer consistent, multilingual support and personalized guidance for a variety of use cases.

For instance, a medical AI agent can provide 24/7 virtual intake and support telehealth services. Or a virtual financial advisor can help enhance client security and financial literacy by alerting bank customers to potential fraud, or by providing personalized offers and investment tips based on their unique portfolios.

These digital humans boost efficiency, cut costs and enhance customer loyalty. Some key ways digital humans can be applied include:

  • Personalized, On-Brand Customer Assistance: Digital human interfaces can provide a personal touch when educating new customers on a company’s products and service portfolios. They can also offer ongoing customer support, delivering immediate responses and solving problems without the need for a live operator.
  • Enhanced Employee Onboarding: Intelligent AI assistants can offer streamlined, adaptable, personalized employee onboarding, whether in hospitals or offices, by providing consistent access to updated institutional knowledge at scale. With pluggable, customizable retrieval-augmented generation (RAG), these assistants can deliver real-time answers to queries while maintaining a deep understanding of company-specific data.
  • Seamless Communication Across Languages: In global enterprises, communication barriers can slow down operations. AI-powered avatars with natural language processing capabilities can communicate effortlessly across languages. This is especially useful in customer service or employee training environments where multilingual support is crucial.
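The retrieval-augmented generation (RAG) pattern mentioned above can be sketched as: embed the user’s question and the company’s documents, retrieve the closest document, and hand it to a language model as grounding context. The toy bag-of-words “embedding” below is purely illustrative; a production assistant would use a trained embedding model and a real LLM.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding" -- a stand-in for a trained
    # embedding model served behind an API.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for company-specific institutional knowledge.
docs = [
    "New hires must complete security training within 30 days.",
    "The cafeteria is open from 8 a.m. to 3 p.m. on weekdays.",
]

def retrieve(query):
    # Return the document most similar to the query.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

context = retrieve("When is security training due for new employees?")
# The retrieved context would be prepended to the LLM prompt:
prompt = f"Answer using this context: {context}"
```

The key design point is that the assistant’s answers stay grounded in retrieved company data rather than the model’s parametric memory, which is what keeps responses current as institutional knowledge changes.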

Learn more by listening to the NVIDIA AI Podcast episode with Kanjun Qiu, CEO of Imbue, who shares insights on how to build smarter AI agents.

Interactive AI Agents With Text-to-Speech and Speech-to-Text

With text-to-speech and speech-to-text capabilities, AI agents can offer enhanced interactivity and engagement in customer service interactions.

SoftServe, an IT consulting and digital services provider, has built several digital humans for a variety of use cases, highlighting the technology’s potential to enhance user experiences.

SoftServe’s Digital Concierge is accelerated by NVIDIA AI Blueprints and NVIDIA ACE technologies to rapidly deploy scalable, customizable digital humans across diverse infrastructures.

GEN, SoftServe’s virtual customer service assistant and digital concierge, makes customer service more engaging by providing lifelike interactions, continuous availability, personalized responses and simultaneous access to all necessary knowledge bases.

SoftServe also developed FINNA, an AI-powered virtual financial advisor that can provide financial guidance tailored to a client’s profile and simplify complex financial terminology. It helps streamline onboarding and due diligence, supporting goal-oriented financial planning and risk assessment.

AISHA is another AI-powered digital human developed by SoftServe with NVIDIA technology. Created for the UAE Ministry of Justice, the digital human significantly improves judicial processes by reducing case review times, enhancing the accuracy of rulings and providing rapid access to legal databases. It demonstrates how generative AI can bridge the gap between technology and meaningful user interaction to enhance customer service and operational efficiency in the judicial sector.

How to Design AI Agents With Avatar and Speech Features

Designing AI agents with avatar and speech features involves several key steps:

  1. Determine the use case: Choose between 2D or 3D avatars based on the required level of immersion and interaction.
  2. Develop the avatar:
    • For 3D avatars, use specialized software and technical expertise to create lifelike movements and photorealism.
    • For 2D avatars, opt for quicker development suitable for web-embedded solutions.
  3. Integrate speech technologies: Use NVIDIA Riva for world-class automatic speech recognition, along with text-to-speech, to enable verbal interactions.
  4. Choose rendering options: Use NVIDIA Omniverse RTX Renderer technology or Unreal Engine tools for 3D avatars to achieve high-quality output and compute efficiency.
  5. Deploy: Tap cloud-native deployment for real-time output and scalability, particularly for interactive web or mobile applications.
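At runtime, the steps above compose into a single interaction loop: speech in, reasoning in the middle, speech and animation out. The sketch below models that loop with placeholder functions; every stage here is a stub standing in for a real service (e.g., Riva ASR/TTS, an LLM, an animation/rendering pipeline), and all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class AvatarResponse:
    text: str       # the reply shown or spoken to the user
    audio: bytes    # synthesized speech
    animation: str  # handle to audio-driven facial animation

# Placeholder stages; a real deployment would call speech,
# language-model and rendering services here.
def speech_to_text(audio: bytes) -> str:
    return "What are your opening hours?"  # stand-in for an ASR result

def generate_reply(transcript: str) -> str:
    # Stand-in for an LLM (optionally RAG-grounded) response.
    return "We are open from 9 a.m. to 5 p.m., Monday through Friday."

def text_to_speech(text: str) -> bytes:
    return text.encode()  # stand-in for synthesized audio

def animate(audio: bytes) -> str:
    return "lipsync-frames"  # stand-in for audio-driven animation

def handle_turn(user_audio: bytes) -> AvatarResponse:
    # One full conversational turn through the pipeline.
    transcript = speech_to_text(user_audio)
    reply = generate_reply(transcript)
    audio = text_to_speech(reply)
    return AvatarResponse(reply, audio, animate(audio))

resp = handle_turn(b"...")
```

Keeping each stage behind its own function boundary mirrors the microservice decomposition described above, so a 2D renderer can be swapped for a 3D one without touching the speech or reasoning stages.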

For an overview on how to design interactive customer service tools, read the technical blogs on how to “Build a Digital Human Interface for AI Apps With an NVIDIA AI Blueprint” and “Expanding AI Agent Interface Options With 2D and 3D Digital Human Avatars.”

NVIDIA AI Blueprint for Digital Humans

The latest release of the NVIDIA AI Blueprint for digital humans introduces several updates that enhance the interactivity and responsiveness of digital avatars, including dynamic switching between RAG models. Users can experience this directly in preview.

The integration of the Audio2Face-2D microservice in the blueprint means developers can create 2D digital humans, which require significantly less processing power compared with 3D models, for web- and mobile-based applications.

2D avatars are better suited for simpler interactions and platforms where photorealism isn’t necessary. This makes them ideal for scenarios like telemedicine, where quick loading times with lower bandwidth requirements are crucial.

Another significant update is the introduction of user attention detection through vision AI. This feature enables digital humans to detect when a user is present — even if they are idle or on mute — and initiate interaction, such as greeting the user. This capability is particularly beneficial in kiosk scenarios, where engaging users proactively can enhance the service experience.

Getting Started

NVIDIA AI Blueprints make it easy to start building and setting up virtual assistants by offering ready-made workflows and tools to accelerate deployment. Whether for a simple AI-powered chatbot or a fully animated digital human interface, the blueprints offer resources to create AI assistants that are scalable, aligned with an organization’s brand and deliver a responsive, efficient customer support experience.

 

1. Gartner®, Hype Cycle™ for the Future of Work, 2024, Tori Paulman, Emily Rose, et al., July 2024

GARTNER is a registered trademark and service mark and Hype Cycle is a trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Read More

NVIDIA Awards up to $60,000 Research Fellowships to PhD Students

For more than two decades, the NVIDIA Graduate Fellowship Program has supported graduate students doing outstanding work relevant to NVIDIA technologies. Today, the program announced the latest awards of up to $60,000 each to 10 Ph.D. students involved in research that spans all areas of computing innovation.

Selected from a highly competitive applicant pool, the awardees will participate in a summer internship preceding the fellowship year. Their work puts them at the forefront of accelerated computing — tackling projects in autonomous systems, computer architecture, computer graphics, deep learning, programming systems, robotics and security.

The NVIDIA Graduate Fellowship Program is open to applicants worldwide.

The 2025-2026 fellowship recipients are:

  • Anish Saxena, Georgia Institute of Technology — Rethinking data movement across the stack — spanning large language model architectures, system software and memory systems — to improve the efficiency of LLM training and inference.
  • Jiawei Yang, University of Southern California — Creating scalable, generalizable foundation models for autonomous systems through self-supervised learning, leveraging neural reconstruction to capture detailed environmental geometry and dynamic scene behaviors, and enhancing adaptability in robotics, digital twin technologies and autonomous driving.
  • Jiayi (Eris) Zhang, Stanford University — Developing intelligent algorithms, models and tools for enhancing user creativity and productivity in design, animation and simulation.
  • Ruisi Cai, University of Texas at Austin — Working on efficient training and inference for large foundation models as well as AI security and privacy.
  • Seul Lee, Korea Advanced Institute of Science and Technology — Developing generative models for molecules and exploration strategies in chemical space for drug discovery applications.
  • Sreyan Ghosh, University of Maryland, College Park — Advancing audio processing and reasoning by designing resource-efficient models and training techniques, improving audio representation learning and enhancing audio perception for AI systems.
  • Tairan He, Carnegie Mellon University — Researching the development of humanoid robots, with a focus on advancing whole-body loco-manipulation through large-scale simulation-to-real learning.
  • Xiaogeng Liu, University of Wisconsin–Madison — Developing robust and trustworthy AI systems, with an emphasis on evaluating and enhancing machine learning models to ensure consistent performance and resilience against diverse attacks and unforeseen inputs.
  • Yunze Man, University of Illinois Urbana-Champaign — Developing vision-centric reasoning models for multimodal and embodied AI agents, with a focus on object-centric perception systems in dynamic scenes, vision foundation models for open-world scene understanding and generation, and large multimodal models for embodied reasoning and robotics planning.
  • Zhiqiang Xie, Stanford University — Building infrastructures to enable more efficient, scalable and complex compound AI systems while enhancing the observability and reliability of such systems.

We also acknowledge the 2025-2026 fellowship finalists:

  • Bo Zhao, University of California, San Diego
  • Chenning Li, Massachusetts Institute of Technology
  • Dacheng Li, University of California, Berkeley
  • Jiankai Sun, Stanford University
  • Wenlong Huang, Stanford University

Read More

AI in Your Own Words: NVIDIA Debuts NeMo Retriever Microservices for Multilingual Generative AI Fueled by Data

In enterprise AI, understanding and working across multiple languages is no longer optional — it’s essential for meeting the needs of employees, customers and users worldwide.

Multilingual information retrieval — the ability to search, process and retrieve knowledge across languages — plays a key role in enabling AI to deliver more accurate and globally relevant outputs.

Enterprises can expand their generative AI efforts into accurate, multilingual systems using NVIDIA NeMo Retriever embedding and reranking NVIDIA NIM microservices, which are now available on the NVIDIA API catalog. These models can understand information across a wide range of languages and formats, such as documents, to deliver accurate, context-aware results at massive scale.

With NeMo Retriever, businesses can now:

  • Extract knowledge from large and diverse datasets for additional context to deliver more accurate responses.
  • Seamlessly connect generative AI to enterprise data in most major global languages to expand user audiences.
  • Deliver actionable intelligence at greater scale with 35x improved data storage efficiency through new techniques such as long context support and dynamic embedding sizing.

New NeMo Retriever microservices reduce storage volume needs by 35x, enabling enterprises to process more information at once and fit large knowledge bases on a single server. This makes AI solutions more accessible, cost-effective and easier to scale across organizations.

Leading NVIDIA partners like DataStax, Cohesity, Cloudera, Nutanix, SAP, VAST Data and WEKA are already adopting these microservices to help organizations across industries securely connect custom models to diverse and large data sources. By using retrieval-augmented generation (RAG) techniques, NeMo Retriever enables AI systems to access richer, more relevant information and effectively bridge linguistic and contextual divides.
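The embed-then-rerank pattern behind these microservices can be sketched as a two-stage pipeline: a fast, coarse vector search narrows the candidate set, then a finer-grained reranker reorders the survivors by relevance. Both scoring functions below are toy stand-ins for the actual embedding and reranking models, and the documents are invented for illustration.

```python
def recall_score(query: str, doc: str) -> float:
    # Stage 1: coarse word-set overlap (stand-in for vector search).
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def rerank_score(query: str, doc: str) -> float:
    # Stage 2: order-aware scoring (stand-in for a cross-encoder
    # reranker), counting query bigrams that appear in the document.
    q_words = query.lower().split()
    d_text = doc.lower()
    return sum(1.0 for i in range(len(q_words) - 1)
               if f"{q_words[i]} {q_words[i + 1]}" in d_text)

docs = [
    "Refund policy: purchases may be returned within 30 days.",
    "Shipping policy: orders ship within 2 business days.",
    "Privacy policy: we never sell customer data.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Recall the top-k candidates cheaply, then rerank them precisely.
    candidates = sorted(docs, key=lambda d: recall_score(query, d),
                        reverse=True)[:k]
    return sorted(candidates, key=lambda d: rerank_score(query, d),
                  reverse=True)

best = retrieve("returned within 30 days")[0]
```

The two-stage split is the point: the cheap first stage keeps latency low over large corpora, while the expensive second stage is only paid on a handful of candidates.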

Wikidata Speeds Data Processing From 30 Days to Under Three Days 

In partnership with DataStax, Wikimedia has implemented NeMo Retriever to vector-embed the content of Wikipedia, serving billions of users. Vector embedding — or “vectorizing” — is a process that transforms data into a format that AI can process and understand to extract insights and drive intelligent decision-making.

Wikimedia used the NeMo Retriever embedding and reranking NIM microservices to vectorize over 10 million Wikidata entries into AI-ready formats in under three days, a process that used to take 30 days. That 10x speedup enables scalable, multilingual access to one of the world’s largest open-source knowledge graphs.

This groundbreaking project ensures real-time updates for hundreds of thousands of entries that are being edited daily by thousands of contributors, enhancing global accessibility for developers and users alike. With Astra DB’s serverless model and NVIDIA AI technologies, the DataStax offering delivers near-zero latency and exceptional scalability to support the dynamic demands of the Wikimedia community.

DataStax is using NVIDIA AI Blueprints and integrating the NVIDIA NeMo Customizer, Curator, Evaluator and Guardrails microservices into the LangFlow AI code builder to enable the developer ecosystem to optimize AI models and pipelines for their unique use cases and help enterprises scale their AI applications.

Language-Inclusive AI Drives Global Business Impact

NeMo Retriever helps global enterprises overcome linguistic and contextual barriers and unlock the potential of their data. By deploying robust AI solutions, businesses can achieve accurate, scalable and high-impact results.

NVIDIA’s platform and consulting partners play a critical role in ensuring enterprises can efficiently adopt and integrate generative AI capabilities, such as the new multilingual NeMo Retriever microservices. These partners help align AI solutions to an organization’s unique needs and resources, making generative AI more accessible and effective. They include:

  • Cloudera plans to expand the integration of NVIDIA AI in the Cloudera AI Inference Service. Currently embedded with NVIDIA NIM, Cloudera AI Inference will include NVIDIA NeMo Retriever to improve the speed and quality of insights for multilingual use cases.
  • Cohesity introduced the industry’s first generative AI-powered conversational search assistant that uses backup data to deliver insightful responses. It uses the NVIDIA NeMo Retriever reranking microservice to improve retrieval accuracy and significantly enhance the speed and quality of insights for various applications.
  • SAP is using the grounding capabilities of NeMo Retriever to add context to its Joule copilot Q&A feature and information retrieved from custom documents.
  • VAST Data is deploying NeMo Retriever microservices on the VAST Data InsightEngine with NVIDIA to make new data instantly available for analysis. This accelerates the identification of business insights by capturing and organizing real-time information for AI-powered decisions.
  • WEKA is integrating its WEKA AI RAG Reference Platform (WARRP) architecture with NVIDIA NIM and NeMo Retriever into its low-latency data platform to deliver scalable, multimodal AI solutions, processing hundreds of thousands of tokens per second.

Breaking Language Barriers With Multilingual Information Retrieval

Multilingual information retrieval is vital for enterprise AI to meet real-world demands. NeMo Retriever supports efficient and accurate text retrieval across multiple languages and cross-lingual datasets. It’s designed for enterprise use cases such as search, question-answering, summarization and recommendation systems.

Additionally, it addresses a significant challenge in enterprise AI — handling large volumes of large documents. With long-context support, the new microservices can process lengthy contracts or detailed medical records while maintaining accuracy and consistency over extended interactions.
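Even with long-context support, very large documents are typically split into overlapping chunks before embedding, so passages that straddle a boundary remain retrievable. A minimal word-based chunker might look like the following; the chunk size and overlap values are illustrative, and in practice would be tuned to the embedding model’s context window.

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64):
    # Split text into overlapping word-based chunks. The overlap
    # ensures content spanning a chunk boundary appears whole in
    # at least one chunk.
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = ("clause " * 1200).strip()  # stand-in for a lengthy contract
chunks = chunk_text(doc)
```

Each chunk is then embedded and indexed independently, and retrieval returns the most relevant chunks rather than the whole document.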

These capabilities help enterprises use their data more effectively, providing precise, reliable results for employees, customers and users while optimizing resources for scalability. Advanced multilingual retrieval tools like NeMo Retriever can make AI systems more adaptable, accessible and impactful in a globalized world.

Availability

Developers can access the multilingual NeMo Retriever microservices, and other NIM microservices for information retrieval, through the NVIDIA API catalog, or a no-cost, 90-day NVIDIA AI Enterprise developer license.

Learn more about the new NeMo Retriever microservices and how to use them to build efficient information retrieval systems.

Read More

NVIDIA Unveils Its Most Affordable Generative AI Supercomputer

NVIDIA is taking the wraps off a new compact generative AI supercomputer, offering increased performance at a lower price with a software upgrade.

The new NVIDIA Jetson Orin Nano Super Developer Kit, which fits in the palm of a hand, gives everyone from commercial AI developers to hobbyists and students gains in generative AI capabilities and performance. And the price is now $249, down from $499.

Available today, it delivers as much as a 1.7x leap in generative AI inference performance, a 70% increase in performance to 67 INT8 TOPS, and a 50% increase in memory bandwidth to 102GB/s compared with its predecessor.

Whether creating LLM chatbots based on retrieval-augmented generation, building a visual AI agent, or deploying AI-based robots, the Jetson Orin Nano Super is an ideal solution to fetch.

The Gift That Keeps on Giving

The software updates available to the new Jetson Orin Nano Super will also boost generative AI performance for those who already own the Jetson Orin Nano Developer Kit.

Jetson Orin Nano Super is suited for those interested in developing skills in generative AI, robotics or computer vision. As the AI world moves from task-specific models to foundation models, the kit also provides an accessible platform for turning ideas into reality.

Powerful Performance With Super for Generative AI

The enhanced performance of the Jetson Orin Nano Super delivers gains for all popular generative AI models and transformer-based computer vision.

The developer kit consists of a Jetson Orin Nano 8GB system-on-module (SoM) and a reference carrier board, providing an ideal platform for prototyping edge AI applications.

The SoM features an NVIDIA Ampere architecture GPU with tensor cores and a 6-core Arm CPU, facilitating multiple concurrent AI application pipelines and high-performance inference. It can support up to four cameras, offering higher resolution and frame rates than previous versions.

Extensive Generative AI Software Ecosystem and Community

Generative AI is evolving quickly. The NVIDIA Jetson AI lab offers immediate support for cutting-edge models from the open-source community and provides easy-to-use tutorials. Developers can also get extensive support from the broader Jetson community and inspiration from projects created by developers.

Jetson runs NVIDIA AI software including NVIDIA Isaac for robotics, NVIDIA Metropolis for vision AI and NVIDIA Holoscan for sensor processing. Development time can be reduced with NVIDIA Omniverse Replicator for synthetic data generation and NVIDIA TAO Toolkit for fine-tuning pretrained AI models from the NGC catalog.

Jetson ecosystem partners offer additional AI and system software, developer tools and custom software development. They can also help with cameras and other sensors, as well as carrier boards and design services for product solutions.

Boosting Jetson Orin Performance for All With Super Mode

The software updates that boost generative AI performance by up to 1.7x will also be available to the Jetson Orin NX and Orin Nano series of systems on modules.

Existing Jetson Orin Nano Developer Kit owners can upgrade the JetPack SDK to unlock boosted performance today.

Learn more about Jetson Orin Nano Super Developer Kit.

See notice regarding software product information.

Read More

Tech Leader, AI Visionary, Endlessly Curious Jensen Huang to Keynote CES 2025

On Jan. 6 at 6:30 p.m. PT, NVIDIA founder and CEO Jensen Huang — with his trademark leather jacket and an unwavering vision — will step onto the CES 2025 stage.

From humble beginnings as a busboy at a Denny’s to founding NVIDIA, Huang’s story embodies innovation and perseverance.

Huang has been named the world’s best CEO by Fortune and The Economist, as well as one of TIME magazine’s 100 most influential people in the world.

Today, NVIDIA is a driving force behind breakthroughs in AI and accelerated computing, technologies transforming industries ranging from healthcare to automotive and entertainment.

Across the globe, NVIDIA’s innovations enable advanced chatbots, robots, software-defined vehicles, sprawling virtual worlds, hypersynchronized factory floors and much more.

NVIDIA’s accelerated computing and AI platforms power hundreds of millions of computers, available from major cloud providers and server manufacturers.

They fuel 76% of the world’s fastest supercomputers on the TOP500 list and are supported by a thriving community of more than 5 million developers.

For decades, Huang has led NVIDIA through revolutions that ripple across industries.

GPUs redefined gaming as an art form, and NVIDIA’s AI tools empower labs, factory floors and Hollywood sets. From self-driving cars to automated industrial processes, these tools are foundational to the next generation of technological breakthroughs.

CES has long been the stage for the unveiling of technological advancements, and Huang’s keynote is no exception.

Since its inception in 1967, CES has unveiled iconic innovations, including transistor radios, VCRs and HDTVs.

Over the decades, CES has launched numerous NVIDIA flagship innovations, from a first look at NVIDIA SHIELD to NVIDIA DRIVE for autonomous vehicles.

NVIDIA at CES 2025

The keynote is just the beginning.

From Jan. 7-10, NVIDIA will host press, analysts, customers and partners at the Fontainebleau Resort Las Vegas.

The space will feature hands-on demos showcasing innovations in AI, robotics and accelerated computing across NVIDIA’s automotive, consumer, enterprise, Omniverse and robotics portfolios.

Meanwhile, NVIDIA’s technologies will take center stage on the CES show floor at the Las Vegas Convention Center, where partners will highlight AI-powered technologies, immersive gaming experiences and groundbreaking automotive advancements.

Attendees can also participate in NVIDIA’s “Explore to Win” program, an interactive scavenger hunt featuring missions, points and prizes.

Curious about the future? Tune in live on NVIDIA’s website or the company’s YouTube channels to witness how NVIDIA is shaping the future of technology.

Read More

Ready Player Fun: GFN Thursday Brings Six New Adventures to the Cloud

From heart-pounding action games to remastered classics, there’s something for everyone this GFN Thursday.

Six new titles join the cloud this week, starting with The Thing: Remastered. Face the horrors of the Antarctic as the game oozes onto GeForce NOW. Nightdive Studios’ revival of the cult-classic 2002 survival-horror game came to the cloud as a surprise at the PC Gaming Show last week. Since then, GeForce NOW members have been able to experience all the bone-chilling action in the sequel to the title based on Universal Pictures’ genre-defining 1982 film.

And don’t miss out on the limited-time GeForce NOW holiday sale, which offers 50% off the first month of a new Ultimate or Performance membership. The 25% off Day Pass sale ends today — take advantage of the offer to experience 24 hours of cloud gaming with all the benefits of Ultimate or Performance membership.

It’s Alive!

The Thing: Remastered on GeForce NOW
Freeze enemies, not frame rates.

The Thing: Remastered brings the 2002 third-person shooter into the modern era with stunning visual upgrades, including improved character models, textures and animations, all meticulously crafted to enhance the game’s already-tense atmosphere.

Playing as Captain J.F. Blake, leader of a U.S. governmental rescue team, navigate the blood-curdling aftermath of the events depicted in the original film. Trust is a precious commodity as members command their squad through 11 terrifying levels, never knowing who might harbor the alien within. The remaster introduces enhanced lighting and atmospheric effects that make the desolate research facility more immersive and frightening than ever.

With an Ultimate or Performance membership, stream this blood-curdling experience in all its remastered glory without the need for high-end hardware. GeForce NOW streams from powerful GeForce RTX-powered servers in the cloud, rendering every shadow, every flicker of doubt in teammates’ eyes and every grotesque transformation with crystal-clear fidelity.

The Performance tier now offers up to 1440p resolution, allowing members to immerse themselves in the game’s oppressive atmosphere with even greater clarity. Ultimate members can experience the paranoia-inducing gameplay at up to 4K resolution and 120 frames per second, making every heart-pounding moment feel more real than ever.

Feast on This

Dive into the depths of a gothic vampire saga, slide through feudal Japan and flip burgers at breakneck speed with GeForce NOW and the power of the cloud. Grab a controller and rally the gaming squad to stream these mouth-watering additions.

Legacy of Kain Soul Reaver 1&2 Remastered on GeForce NOW
Time to rise again.

The highly anticipated Legacy of Kain Soul Reaver 1&2 Remastered from Aspyr and Crystal Dynamics breathes new life into the classic vampire saga genre. These beloved titles have been meticulously overhauled to offer stunning visuals and improved controls. Join the epic conflict of Kain and Raziel in the gothic world of Nosgoth and traverse between the Spectral and Material Realms to solve puzzles, reveal new paths and defeat foes.

The Spirit of the Samurai on GeForce NOW
Defend the forbidden village.

The Spirit of the Samurai from Digital Mind Games and Kwalee brings a blend of Souls and Metroidvania elements to feudal Japan. This stop-motion inspired 2D action-adventure game offers three playable characters and intense combat with legendary Japanese weapons, all set against a backdrop of mythological landscapes.

Fast Food Simulator on GeForce NOW
The ice cream machine actually works.

Or take on the chaotic world of fast-food management with Fast Food Simulator, a multiplayer simulation game from No Ceiling Games. Take orders, make burgers and increase earnings by dealing with customers. Play solo or co-op with up to four players and take on unexpected and bizarre events that can occur at any moment.

Shift between realms in Legacy of Kain at up to 4K 120 fps with an Ultimate membership, slice through The Spirit of the Samurai’s mythical landscapes in stunning 1440p with RTX ON with a Performance membership or manage a fast-food empire with silky-smooth gameplay. With extended sessions and priority access, members will have plenty of time to master these diverse worlds.

Play On

Diablo Immortal on GeForce NOW
Evil never sleeps.

Diablo Immortal — the action-packed role-playing game from Blizzard Entertainment, set in the dark fantasy world of Sanctuary — bridges the stories of Diablo II and Diablo III. Choose from a variety of classes, each offering unique playstyles and devastating abilities, to battle through diverse zones and randomly generated rifts, and uncover the mystery of the shattered Worldstone while facing off against hordes of demonic enemies.

Since its launch, the game has offered frequent updates, including two new character classes, new zones, gear, competitive events and more demonic stories to experience. With its immersive storytelling, intricate character customization and endless replayability, Diablo Immortal provides members with a rich, hellish adventure to stream from the cloud across devices.

Look for the following games available to stream in the cloud this week:

  • Indiana Jones and the Great Circle (New release on Steam and Xbox, available on the Microsoft Store and PC Game Pass, Dec. 8)
  • Fast Food Simulator (New release on Steam, Dec. 10)
  • Legacy of Kain Soul Reaver 1&2 Remastered (New release on Steam, Dec. 10)
  • The Spirit of the Samurai (New release on Steam, Dec. 12)
  • Diablo Immortal (Battle.net)
  • The Lord of the Rings: Return to Moria (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Driving Mobility Forward, Vay Brings Advanced Automotive Solutions to Roads With NVIDIA DRIVE AGX

Vay, a Berlin-based provider of automotive-grade remote driving (teledriving) technology, is offering an alternative approach to autonomous driving.

Through the company’s app, a user can hail a car, and a professionally trained teledriver will remotely drive the vehicle to the customer’s location. Once the car arrives, the user manually drives it.

After completing their trip, the user can end the rental in the app and pull over to a safe location to exit the car, away from traffic flow. There’s no need to park the vehicle, as the teledriver will handle the parking or drive the car to the next customer.

This system offers sustainable, door-to-door mobility, with the unique advantage of having a human driver remotely controlling the vehicle in real time.

Vay’s technology is built on the NVIDIA DRIVE AGX centralized compute platform, running the NVIDIA DriveOS operating system for safe, AI-defined autonomous vehicles.

These technologies enable Vay’s fleets to process large volumes of camera and other vehicle data over the air. DRIVE AGX’s real-time, low-latency video streaming capabilities provide enhanced situational awareness for teledrivers, while its automotive-grade design ensures reliability in any driving condition.

“By combining Vay’s innovative remote driving capabilities with the advanced AI and computing power of NVIDIA DRIVE AGX, we’re setting a new standard for remotely driven vehicles,” said Justin Spratt, chief business officer at Vay. “This collaboration helps us bring safe, reliable and accessible driverless options to the market and provides an adaptable solution that can be deployed in real-world environments now — not years from now.”

High-Quality Video Stream

Vay’s advanced technology stack includes NVIDIA DRIVE AGX software that’s optimized for latency and processing power. By harnessing NVIDIA GPUs specifically designed for autonomous driving, the company’s teledriving system can process and transmit high-definition video feeds in real time, delivering critical situational awareness to the teledriver, even in complex environments. In the event of an emergency, the vehicle can safely bring itself to a complete stop.

“Working with NVIDIA, Vay is setting a new standard in driverless technology,” said Bogdan Djukic, cofounder and vice president of engineering, teledrive experience and autonomy at Vay. “We are proud to not only accelerate the deployment of remotely driven and autonomous vehicles but also to expand the boundaries of what’s possible in urban transportation, logistics and beyond — transforming mobility for both businesses and communities.”

Reshaping Mobility With Teledriving

Vay’s technology enables professionally trained teledrivers to remotely drive vehicles from specialized teledrive stations equipped with industry-standard controls, such as a steering wheel and pedals.

The company’s teledrivers are fully immersed in the drive: road traffic sounds, such as sirens from emergency vehicles and other warning signals, are transmitted via microphones to the operator’s headphones. Camera sensors capture the car’s surroundings and relay them to the teledrive station’s screens with minimal latency. The vehicles can operate at speeds of up to 26 mph.
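Latency matters because the car keeps moving while a frame is in transit. A quick back-of-envelope check shows how far a vehicle at the 26 mph cap travels during one glass-to-glass latency interval; the latency figures below are illustrative assumptions, not Vay’s published numbers.

```python
# How far does a teledriven car travel while a video frame is in flight?
# Latency values here are assumed for illustration only.

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Meters traveled during one glass-to-glass latency interval."""
    return speed_mph * MPH_TO_MPS * (latency_ms / 1000.0)

if __name__ == "__main__":
    for latency in (100, 200, 300):  # assumed end-to-end latencies in ms
        d = distance_during_latency(26, latency)
        print(f"{latency} ms at 26 mph -> {d:.2f} m")
```

At an assumed 100 ms of end-to-end latency, the car covers a bit over a meter before the teledriver sees the frame, which is why the stack is optimized for low-latency streaming and why the vehicle can stop itself in an emergency.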

Vay’s technology effectively addresses complex edge cases with human supervision, enhancing safety while significantly reducing costs and development challenges.

Vay is a member of NVIDIA Inception, a program that nurtures AI startups with go-to-market support, expertise and technology. Last year, Vay became the first and only company in Europe to teledrive a vehicle on public streets without a safety driver.

Since January, Vay has been operating its commercial services in Las Vegas. The startup recently secured a partnership with Bayanat, a provider of AI-powered geospatial solutions, and is working with Ush and Poppy, Belgium-based car-sharing companies, as well as Peugeot, a French automaker.

In October, Vay announced a $35 million investment from the European Investment Bank, which will help it roll out its technology across Europe and expand its development team.

Learn more about the NVIDIA DRIVE platform.

Built for the Era of AI, NVIDIA RTX AI PCs Enhance Content Creation, Gaming, Entertainment and More

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.

NVIDIA and GeForce RTX GPUs are built for the era of AI.

RTX GPUs feature specialized AI Tensor Cores that can deliver more than 1,300 trillion operations per second (TOPS) of processing power for cutting-edge performance in gaming, creating, everyday productivity and more. Today there are more than 600 deployed AI-powered games and apps that are accelerated by RTX.

RTX AI PCs can help anyone start their AI journey and supercharge their work.

Every RTX AI PC comes with regularly updated NVIDIA Studio Drivers — fine-tuned in collaboration with developers — that enhance performance in top creative apps and are tested extensively to deliver maximum stability. Download the December Studio Driver today.

The importance of large language models (LLMs) continues to grow. Two benchmarks were introduced this week to spotlight LLM performance on various hardware: MLPerf Client v0.5 and Procyon AI Text Generation. These LLM-based benchmarks, which internal tests have shown accurately replicate real-world performance, are easy to run.

This holiday season, content creators can participate in the #WinterArtChallenge, running through February. Share winter-themed art on Facebook, Instagram or X with #WinterArtChallenge for a chance to be featured on NVIDIA Studio social media channels.

Advanced AI

With NVIDIA and GeForce RTX GPUs, AI elevates everyday tasks and activities, as covered in our AI Decoded blog series. For example, AI can enable:

Faster creativity: With Stable Diffusion, users can quickly create and refine images from text prompts to achieve their desired output. When using an RTX GPU, these results can be generated up to 2.2x faster than on an NPU. And thanks to software optimizations using the NVIDIA TensorRT SDK, the applications used to run these models, like ComfyUI, get an additional 60% boost.

Greater gaming: NVIDIA DLSS technology boosts frame rates and improves image quality, using AI to automatically generate pixels in video games. With ongoing improvements, including to Ray Reconstruction, DLSS enables richer visual quality for more immersive gameplay.

Enhanced entertainment: RTX Video Super Resolution uses AI to enhance video by removing compression artifacts and sharpening edges while upscaling video quality. RTX Video HDR converts any standard dynamic range video into vibrant high dynamic range, enabling more vivid, dynamic colors when streamed in Google Chrome, Microsoft Edge, Mozilla Firefox or VLC media player.

Improved productivity: The NVIDIA ChatRTX tech demo app connects a large language model, like Meta’s Llama, to a user’s data for quickly querying notes, documents or images. Free for RTX GPU owners, the custom chatbot provides quick, contextually relevant answers. Since it runs locally on Windows RTX PCs and workstations, results are fast and private.
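The core idea behind chat-with-your-files apps like ChatRTX is retrieval: find the piece of local data most relevant to a query, then hand it to the LLM as context. The toy sketch below illustrates only that retrieval step with simple word overlap; ChatRTX’s actual pipeline uses embeddings and vector search, and the sample notes are invented.

```python
# Toy sketch of the retrieval step behind local "chat with your data" apps:
# rank a user's notes by word overlap with the query and return the best
# match to use as LLM context. Illustrative only, not ChatRTX's pipeline.

def tokenize(text: str) -> set:
    """Lowercase words with trailing punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def best_context(query: str, docs: list) -> str:
    """Return the document sharing the most words with the query."""
    q = tokenize(query)
    return max(docs, key=lambda d: len(q & tokenize(d)))

notes = [
    "Meeting moved to Thursday at 3pm in room 204.",
    "GPU driver 566.14 fixed the flicker in Blender viewport.",
    "Grocery list: eggs, milk, coffee beans.",
]
print(best_context("when was the meeting moved to?", notes))
```

Because retrieval and generation both run locally on the RTX GPU, no document or query ever leaves the machine, which is where the speed and privacy benefits come from.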

This snapshot of AI capabilities barely scratches the surface of the technology’s possibilities. With an NVIDIA or GeForce RTX GPU-powered system, users can also supercharge their STEM studies and research, and tap into the NVIDIA Studio suite of AI-powered tools.

Decisions, Decisions

More than 200 powerful RTX AI PCs are capable of running advanced AI.

ASUS’ Vivobook Pro 16X comes with up to a GeForce RTX 4070 Laptop GPU, as well as a superbright 550-nit panel, ultrahigh contrast ratio and ultrawide 100% DCI-P3 color gamut. It’s available on Amazon and ASUS.com.

Dell’s Inspiron 16 Plus 7640 comes with up to a GeForce RTX 4060 Laptop GPU and a 16:10 aspect ratio display, ideal for users working on multiple projects. It boasts military-grade testing for added reliability and an easy-to-use, built-in Trusted Platform Module to protect sensitive data. It’s available on Amazon and Dell.com.

GIGABYTE’s AERO 16 OLED, equipped with up to a GeForce RTX 4070 Laptop GPU, is designed for professionals, designers and creators. The 16:10 thin-bezel 4K+ OLED screen is certified by multiple third parties to provide the best visual experience with X-Rite 2.0 factory-by-unit color calibration and Pantone Validated color calibration. It’s available on Amazon and GIGABYTE.com.

MSI’s Creator M14 comes with up to a GeForce RTX 4070 Laptop GPU, delivering a quantum leap in performance with DLSS 3 to enable lifelike virtual worlds with full ray tracing. Plus, its Max-Q suite of technologies optimizes system performance, power, battery life and acoustics for peak efficiency. Purchase one on Amazon or MSI.com.

These are just a few of the many RTX AI PCs available, with some on sale, including the Acer Nitro V, ASUS TUF 16″, HP Envy 16″ and Lenovo Yoga Pro 9i.

Follow NVIDIA Studio on Facebook, Instagram and X. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Into the Omniverse: How OpenUSD-Based Simulation and Synthetic Data Generation Advance Robot Learning

Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Scalable simulation technologies are driving the future of autonomous robotics by reducing development time and costs.

Universal Scene Description (OpenUSD) provides a scalable and interoperable data framework for developing virtual worlds where robots can learn how to be robots. With SimReady OpenUSD-based simulations, developers can create limitless scenarios based on the physical world.

And NVIDIA Isaac Sim is advancing perception AI-based robotics simulation. Isaac Sim is a reference application built on the NVIDIA Omniverse platform for developers to simulate and test AI-driven robots in physically based virtual environments.

At AWS re:Invent, NVIDIA announced that Isaac Sim is now available on Amazon EC2 G6e instances powered by NVIDIA L40S GPUs. These powerful instances enhance the performance and accessibility of Isaac Sim, making high-quality robotics simulations more scalable and efficient.

These advancements in Isaac Sim mark a significant leap for robotics development. By enabling realistic testing and AI model training in virtual environments, companies can reduce time to deployment and improve robot performance across a variety of use cases.

Advancing Robotics Simulation With Synthetic Data Generation

Robotics companies like Cobot, Field AI and Vention are using Isaac Sim to simulate and validate robot performance while others, such as SoftServe and Tata Consultancy Services, use synthetic data to bootstrap AI models for diverse robotics applications.

The evolution of robot learning has been deeply intertwined with simulation technology. Early experiments in robotics relied heavily on labor-intensive, resource-heavy trials. Simulation is a crucial tool for the creation of physically accurate environments where robots can learn through trial and error, refine algorithms and even train AI models using synthetic data.

Physical AI describes AI models that can understand and interact with the physical world. It embodies the next wave of autonomous machines and robots, such as self-driving cars, industrial manipulators, mobile robots, humanoids and even robot-run infrastructure like factories and warehouses.

Robotics simulation, which forms the second computer in the three-computer solution, is a cornerstone of physical AI development, letting engineers and researchers design, test and refine systems in a controlled virtual environment.

A simulation-first approach significantly reduces the cost and time associated with physical prototyping while enhancing safety by allowing robots to be tested in scenarios that might otherwise be impractical or hazardous in real life.

With a new reference workflow, developers can accelerate the generation of synthetic 3D datasets with generative AI using OpenUSD NIM microservices. This integration streamlines the pipeline from scene creation to data augmentation, enabling faster and more accurate training of perception AI models.

Synthetic data can help address the challenge of limited, restricted or unavailable data needed to train various types of AI models, especially in computer vision. Developing action recognition models is a common use case that can benefit from synthetic data generation.

To learn how to create a human action recognition video dataset with Isaac Sim, check out the technical blog on Scaling Action Recognition Models With Synthetic Data. Because 3D simulations give developers precise control over image generation, synthetic datasets avoid the hallucinations that purely generative approaches can introduce.
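The appeal of synthetic data is that labels come for free: because the simulator generates the motion, every sample is perfectly annotated by construction. The toy sketch below shows that principle with procedurally generated 2D motion trajectories standing in for rendered clips; it is a framework-agnostic illustration, not the Isaac Sim workflow itself, and the two "actions" are invented for the example.

```python
# Framework-agnostic sketch of synthetic data generation for action
# recognition: procedurally generate labeled motion clips instead of
# collecting and annotating real footage. Illustrative toy only.
import math
import random

def synth_clip(action: str, frames: int = 30, noise: float = 0.05):
    """Return a list of (x, y) positions simulating one labeled clip."""
    traj = []
    for t in range(frames):
        if action == "wave":           # oscillating, hand-like motion
            x, y = 0.5, 0.5 + 0.3 * math.sin(t * 0.5)
        else:                          # "walk": steady lateral motion
            x, y = t / frames, 0.5
        # Sensor-style noise so models don't overfit to perfect geometry
        traj.append((x + random.gauss(0, noise), y + random.gauss(0, noise)))
    return traj

def make_dataset(n_per_class: int = 100):
    """Balanced, shuffled list of (trajectory, label) pairs."""
    data = [(synth_clip(a), a)
            for a in ("wave", "walk")
            for _ in range(n_per_class)]
    random.shuffle(data)
    return data

dataset = make_dataset()
print(len(dataset), "clips,", len(dataset[0][0]), "frames each")
```

Scaling the same principle up, a simulator like Isaac Sim renders photorealistic video with randomized lighting, camera angles and actors, while the ground-truth action label is always known exactly.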

Robotic Simulation for Humanoids

Humanoid robots are the next wave of embodied AI, but they present a challenge at the intersection of mechatronics, control theory and AI. Simulation is crucial to solving this challenge by providing a safe, cost-effective and versatile platform for training and testing humanoids.

With NVIDIA Isaac Lab, an open-source unified framework for robot learning built on top of Isaac Sim, developers can train humanoid robot policies at scale via simulations. Leading commercial robot makers are adopting Isaac Lab to handle increasingly complex movements and interactions.

NVIDIA Project GR00T, an active research initiative to enable the humanoid robot ecosystem of builders, is pioneering workflows such as GR00T-Gen to generate robot tasks and simulation-ready environments in OpenUSD. These can be used for training generalist robots to perform manipulation, locomotion and navigation.

Recently published research from Project GR00T also shows how advanced simulation can be used to train interactive humanoids. Using Isaac Sim, the researchers developed a single unified controller for physically simulated humanoids called MaskedMimic. The system is capable of generating a wide range of motions across diverse terrains from intuitive user-defined intents.

Physics-Based Digital Twins Simplify AI Training

Partners across industries are using Isaac Sim, Isaac Lab, Omniverse, and OpenUSD to design, simulate and deploy smarter, more capable autonomous machines:

  • Agility uses Isaac Lab to create simulations that let simulated robot behaviors transfer directly to the robot, making it more intelligent, agile and robust when deployed in the real world.
  • Cobot uses Isaac Sim with its AI-powered cobot, Proxie, to optimize logistics in warehouses, hospitals, manufacturing sites and more.
  • Cohesive Robotics has integrated Isaac Sim into its software framework called Argus OS for developing and deploying robotic workcells used in high-mix manufacturing environments.
  • Field AI, a builder of robot foundation models, uses Isaac Sim and Isaac Lab to evaluate the performance of its models in complex, unstructured environments across industries such as construction, manufacturing, oil and gas, mining, and more.
  • Fourier uses NVIDIA Isaac Gym and Isaac Lab to train its GR-2 humanoid robot, using reinforcement learning and advanced simulations to accelerate development, enhance adaptability and improve real-world performance.
  • Foxglove integrates Isaac Sim and Omniverse to enable efficient robot testing, training and sensor data analysis in realistic 3D environments.
  • Galbot used Isaac Sim to verify the data generation of DexGraspNet, a large-scale dataset of 1.32 million ShadowHand grasps, advancing robotic hand functionality by enabling scalable validation of diverse object interactions across 5,355 objects and 133 categories.
  • Standard Bots is simulating and validating the performance of its R01 robot used in manufacturing and machining setups.
  • Wandelbots integrates its NOVA platform with Isaac Sim to create physics-based digital twins and intuitive training environments, simplifying robot interaction and enabling seamless testing, validation and deployment of robotic systems in real-world scenarios.

Learn more about how Wandelbots is advancing robot learning with NVIDIA technology in this livestream recording:

Get Plugged Into the World of OpenUSD

NVIDIA experts and Omniverse Ambassadors are hosting livestream office hours and study groups to provide robotics developers with technical guidance and troubleshooting support for Isaac Sim and Isaac Lab. Learn how to get started simulating robots in Isaac Sim with this new, free course on NVIDIA Deep Learning Institute (DLI).

For more on optimizing OpenUSD workflows, explore the new self-paced Learn OpenUSD training curriculum that includes free DLI courses for 3D practitioners and developers. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and the AOUSD website.

Don’t miss the CES keynote delivered by NVIDIA founder and CEO Jensen Huang live in Las Vegas on Monday, Jan. 6, at 6:30 p.m. PT for more on the future of AI and graphics.

Stay up to date by subscribing to NVIDIA news, joining the community, and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.

Featured image courtesy of Fourier.
