Explainable AI: Insights from Arthur’s Adam Wenchel

Arthur.ai enhances the performance of AI systems across various metrics like accuracy, explainability and fairness. In this episode of the NVIDIA AI Podcast, recorded live at GTC 2024, host Noah Kravitz sits down with Adam Wenchel, cofounder and CEO of Arthur, to discuss the challenges and opportunities of deploying generative AI. Their conversation spans a range of topics, including AI bias, the observability of AI systems and the practical implications of AI in business. For more on Arthur, visit arthur.ai.

Time Stamps:

  • 00:11: Introduction and background on Adam Wenchel and Arthur.ai.
  • 01:31: Discussion on the mission and services of Arthur.
  • 02:31: Real-world use cases of LLMs and generative AI in enterprises.
  • 06:22: Challenges in deploying AI systems internally within companies.
  • 08:23: The process of adapting AI models for specific business needs.
  • 09:26: Exploring AI observability and the importance of real-time monitoring.
  • 11:36: Addressing bias in AI systems and its implications.
  • 15:21: Wenchel’s journey from cybersecurity to AI and founding Arthur.
  • 20:38: Cybersecurity concerns with generative AI and large language models.
  • 21:37: Future of work and AI’s role in enhancing job performance.
  • 24:27: Future directions for Arthur and ongoing projects.

You Might Also Like…

ITIF’s Daniel Castro on Energy-Efficient AI and Climate Change – Ep. 215

AI-driven change is in the air, as are concerns about the technology’s environmental impact. In this episode of NVIDIA’s AI Podcast, Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of its Center for Data Innovation, speaks with host Noah Kravitz about the motivation behind his AI energy use report, which addresses misconceptions about the technology’s energy consumption.

DigitalPath’s Ethan Higgins on Using AI to Fight Wildfires – Ep. 211

DigitalPath is igniting change in the Golden State — using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath system architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego.

Anima Anandkumar on Using Generative AI to Tackle Global Challenges – Ep. 203

Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anima Anandkumar about generative AI’s potential to make splashes in the scientific community.

How Alex Fielding and Privateer Space Are Taking on Space Debris – Ep. 196

In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space. Privateer Space, Fielding’s latest venture, aims to address one of the most daunting challenges facing our world today: space debris.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Read More

AI Takes a Bow: Interactive GLaDOS Robot Among 9 Winners in Hackster.io Challenge

YouTube robotics influencer Dave Niewinski has developed robots for everything from driveable La-Z-Boy chairs to an AI-guided cornhole tosser and horse-drawn chariot racing.

His recent Interactive Animatronic GLaDOS project was among nine winners in the Hackster AI Innovation Challenge. About 100 contestants vied for prizes from NVIDIA and SparkFun by creating open-source projects to advance the use of AI in edge computing, robotics and IoT.

Niewinski won first place in the generative AI applications category for his innovative robot based on the GLaDOS guide from game series Portal, the first-person puzzle platform from video game developer Valve.

Other top winners included contestants Andrei Ciobanu and Allen Tao, who took first prize in the generative AI models for the edge and AI at the edge applications categories, respectively. Ciobanu used generative AI to help virtually try on clothes, while Tao developed a ROS-based robot to map the inside of a home to help find things.

Harnessing LLMs for Robots

Niewinski builds custom applications for robotics at his Armoury Labs business in Waterloo, Ontario, Canada, where he uses the NVIDIA Jetson platform for edge AI and robotics, creating open-source tutorials and YouTube videos following his experiences.

He built his interactive GLaDOS robot to create a personal assistant for himself in the lab. It handles queries using Transformer-based speech recognition, text-to-speech, and large language models (LLMs) running onboard an NVIDIA Jetson AGX Orin, which interfaces with a robot arm and camera for interactions.

GLaDOS can track his whereabouts in the lab, move in different directions to face him and respond quickly to queries.

“I like doing things with robots that people will look at and say it’s not what they had immediately expected,” he said.

He wanted the assistant to sound like the original GLaDOS from Portal and respond quickly. Fortunately, the gaming company Valve has put all of the voice lines from Portal and Portal 2 on its website, allowing Niewinski to download the audio to help train a model.

“Using Jetson, your average question-and-answer stuff runs pretty quick for speech,” he said.

Niewinski used NVIDIA’s open-source NeMo toolkit to fine-tune a voice for GLaDOS, training a spectrogram generator network called FastPitch and HiFiGAN vocoder network to refine the audio quality.

Both networks are deployed on Orin with NVIDIA Riva to enable speech recognition and synthesis that’s been optimized to run at many times the real-time rate of speech, so that it can run alongside the LLM while maintaining a smooth, interactive delivery.

For generating realistic responses from GLaDOS, Niewinski uses a locally hosted LLM called OpenChat that he runs in Docker from jetson-containers, saying that it was a drop-in replacement for OpenAI’s API. All of this AI is running on the Jetson module, using the latest open-source ML software stack built with CUDA and JetPack.
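What makes a local model a "drop-in replacement" is that OpenChat-style servers accept the same request shape as OpenAI's chat-completions endpoint, so only the base URL changes. A minimal sketch of that idea, assuming a hypothetical local server at `localhost:8000` and a model named `openchat` (both placeholders, not details from the article):

```python
import json
import urllib.request

# Hypothetical local endpoint -- adjust to wherever your server listens.
LOCAL_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(user_text, model="openchat", temperature=0.7):
    """Assemble an OpenAI-style chat-completions payload for a local server."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are GLaDOS, a dry-witted lab assistant."},
            {"role": "user", "content": user_text},
        ],
    }

def ask(user_text):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(build_chat_request(user_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the payload matches the OpenAI schema, existing client code can usually be pointed at the local server without further changes; `ask()` is not invoked here since it requires the server to be running.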

To enable GLaDOS to move, Niewinski developed the interactions for a Unitree Z1 robotic arm. The arm carries a stereo camera and models for seeing and tracking a human speaker, with a 3D-printed GLaDOS head and body shell built around it.

Trying on Generative AI for Fashion Fit

Winner Ciobanu, based in Romania, aimed to improve the virtual clothing try-on experience with the help of generative AI, taking a top prize for his EdgeStyle: Fashion Preview at the Edge.

He used AI models such as YOLOv5, SAM and OpenPose to extract and refine data from images and videos. Then he used Stable Diffusion to generate the images, which he said was key to achieving accurate virtual try-ons.

This system taught the model how clothes fit different poses on people, which he said enhanced the realism of the try-ons.

“It’s quite handy as it allows users to see how clothes would look on them without actually trying them on,” said Ciobanu.

The NVIDIA JetPack SDK provided all the tools needed to run AI models smoothly on the Jetson Orin, he said.

“It’s super-helpful to have a stable set of tools, especially when you’re dealing with AI tech that keeps changing,” said Ciobanu. “It really cut down on the time and hassle for us developers, letting us focus more on the cool stuff we’re building instead of getting stuck on tech issues.”

Finding Lost Items With Robot Assistance

Winner Tao, based in Ontario, Canada, created a robot to lessen the burden of searching for things lost around the house. His An Eye for an Item project took top honors at the Hackster challenge.

“Finding lost objects is a chore, and recent developments in zero-shot object detection and LLMs make it feasible for a computer to detect arbitrary objects for us based on textual or pictorial descriptions, presenting an opportunity for automation,” said Tao.

Tao said he needed robot computing capabilities to catalog objects in any unstructured environment — whether a living room or large warehouse. And he needed it to perform real-time calculations for localization to help with navigation, as well as run inference on larger object detection models.

“Jetson Orin was a perfect fit, supporting all functionality from text and image queries into NanoDB, to real-time odometry feedback, including leveraging Isaac ROS’ hardware-accelerated AprilTag detections for drift correction,” he said.
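The article doesn't show Tao's code, but the core idea behind text-based object lookup is simple to sketch: embed the query and compare it against embeddings of catalogued objects. The toy example below uses made-up vectors and cosine similarity; a real system would use a CLIP-style encoder and a vector database such as NanoDB.

```python
import math

# Toy catalog: object label -> (last known location, embedding).
# These vectors are invented for illustration only.
CATALOG = {
    "red mug":   ("kitchen shelf", [0.9, 0.1, 0.0]),
    "car keys":  ("hallway table", [0.1, 0.8, 0.3]),
    "tv remote": ("living room couch", [0.0, 0.3, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_object(query_embedding):
    """Return the catalogued object (and location) most similar to the query."""
    best = max(CATALOG.items(), key=lambda kv: cosine(query_embedding, kv[1][1]))
    label, (location, _) = best
    return label, location
```

In practice the catalog would be built continuously as the robot maps the home, so any object it has seen becomes findable from a text or image description.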

Other winners of the AI Innovation Challenge include:

  • George Profenza, Escalator people tracker, 2nd place, Generative AI Applications category
  • Dimiter Kendri, Cooking meals with a local AI assistant using Jetson AGX Orin, 3rd place, Generative AI Applications category
  • Vy Phan, ClearWaters Underwater Image Enhancement with Generative AI, 2nd place, Generative AI Models category
  • Huy Mai, Realtime Language Segment Anything on Jetson Orin, 2nd place, Generative AI Models category
  • Fakhrur Razi, Autonomous Intelligent Robotic Shopping Cart, 2nd place, AI at the Edge Open category
  • Team Kinetika, Counting for Inspection and Quality Control with TensorRT, 3rd place, AI at the Edge Open category

Learn more about NVIDIA Jetson Orin for robotics and edge AI applications. Get started creating your own projects at the Jetson AI Lab.  

Read More

Say It Again: ChatRTX Adds New AI Models, Features in Latest Update

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users.

Chatbots powered by large language models have transformed computing, and NVIDIA ChatRTX lets users interact with their local data, accelerated by NVIDIA RTX-powered Windows PCs and workstations. A new update, first demoed at GTC in March, expands the power of this RTX-accelerated chatbot app with additional features and support for new models.

The NVIDIA RTX Remix beta update brings NVIDIA DLSS 3.5 with Ray Reconstruction to the modding platform for even more impressive real-time ray tracing.

Say It Out Loud

ChatRTX uses retrieval-augmented generation, NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration to bring chatbot capabilities to RTX-powered Windows PCs and workstations. Backed by powerful large language models (LLMs), ChatRTX lets users query their notes and documents, quickly generating relevant responses while running locally on the user’s device.
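Retrieval-augmented generation itself is easy to sketch: fetch the documents most relevant to a question, then pack them into the model's prompt. The snippet below is a minimal illustration using toy keyword-overlap retrieval, not ChatRTX's actual implementation (which uses TensorRT-LLM and vector embeddings); the notes are invented examples.

```python
import re

# Toy local documents standing in for a user's notes.
NOTES = [
    "Q2 planning meeting moved to Thursday at 10am.",
    "The demo build lives on the shared drive under /builds/demo.",
    "Reminder: submit travel expenses by the end of the month.",
]

def tokens(text):
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, notes, top_k=2):
    """Rank notes by word overlap with the question; keep the best top_k."""
    q = tokens(question)
    return sorted(notes, key=lambda n: len(q & tokens(n)), reverse=True)[:top_k]

def build_prompt(question, notes):
    """Pack the retrieved notes into the prompt sent to the local LLM."""
    context = "\n".join(retrieve(question, notes))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Because retrieval narrows the context to a few relevant passages, the LLM can answer from local data it was never trained on, which is the property that makes the whole app useful offline.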

The latest version adds support for additional LLMs, including Gemma, the latest open, local LLM trained by Google. Gemma was developed from the same research and technology used to create the company’s Gemini models and is built for responsible AI development. ChatRTX also now supports ChatGLM3, an open, bilingual (English and Chinese) LLM based on the general language model framework.

Users can also interact with image data thanks to support for Contrastive Language-Image Pre-training (CLIP) from OpenAI. CLIP is a neural network that, through training and refinement, learns visual concepts from natural language supervision — that is, the model recognizes what it’s “seeing” in image collections. With CLIP support in ChatRTX, users can interact with photos and images on their local devices through words, terms and phrases, without the need for complex metadata labeling.

The new ChatRTX release also lets people chat with their data using their voice. Thanks to support for Whisper, an automatic speech recognition system that uses AI to process spoken language, users can send voice queries to the application and ChatRTX will provide text responses.

Download ChatRTX today.

Mix It Up

With RTX Remix, modders can transform classic PC games into RTX remasters using AI-accelerated tools on the NVIDIA Omniverse platform.

Now, they can use DLSS 3.5 with Ray Reconstruction in their projects with just a few clicks, thanks to an update to RTX Remix available this week. Its advanced, AI-powered neural renderer improves the fidelity, responsiveness and quality of ray-traced effects, giving NVIDIA GeForce RTX gamers an even better experience.

AI powers other elements of the Remix workflow, too. Modders can use generative AI texture tools to analyze low-resolution textures from classic games, generate physically accurate materials — including normal and roughness maps — and upscale the resolution by up to 4x. Tools like this also save modders time, quickly handling a task that could otherwise become tedious.

Learn more about the new RTX Remix beta update on the GeForce page.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Read More

SEA.AI Navigates the Future With AI at the Helm

Talk about commitment. When startup SEA.AI, an NVIDIA Metropolis partner, set out to create a system that would use AI to scan the seas to enhance maritime safety, entrepreneur Raphael Biancale wasn’t afraid to take the plunge. He donned a lifejacket and jumped into the ocean.

It’s a move that demonstrates Biancale’s commitment and pioneering approach. The startup, founded in 2018 and based in Linz, Austria, with subsidiaries in France, Portugal and the US, had to build its first-of-its-kind training data from scratch in order to train an AI to help oceangoers of all kinds scan the seas.

And to do that, the company needed photos of what a person in the water looked like. That’s when Biancale, now the company’s head of research, walked the plank.

The company has come a long way since then, with a full product line powered by NVIDIA AI technology that lets commercial and recreational sailors detect objects on the seas, whether potential hazards or people needing rescue.

It’s an effort inspired by Biancale’s experience on a night sail, when the lack of visibility and situational awareness illuminated the maritime world’s pressing need for the kind of advanced safety technology AI is bringing to the automotive industry.

AI, of course, is finding its way into all things aquatic. Startup Saildrone’s autonomous sailing vessels can help conduct data-gathering for science, fisheries, weather forecasting, ocean mapping and maritime security. Other researchers are using AI to interpret whale songs and even protect beachgoers from dangerous rip currents.

SEA.AI, however, promises to make the seas safer for everyone who steps aboard a boat. First introduced for ocean racing sailboats, SEA.AI’s system has quickly evolved into an AI-powered situational awareness system that can be deployed on everything from recreational sail and powerboats to commercial shipping vessels.

SEA.AI directly addresses one of the most significant risks for all these vessels: collisions. Thanks to SEA.AI, commercial and recreational oceangoers worldwide can travel with more confidence.

How SEA.AI Works

At the heart of SEA.AI’s approach is a constantly growing proprietary database of over 9 million annotated marine objects.

When combined with high-tech optical sensors and the latest AI technology from NVIDIA, SEA.AI’s systems can recognize and classify objects in real-time, significantly improving maritime safety.

SEA.AI technology can detect a person in water up to 700 meters — almost half a mile — away, a dinghy up to 3,000 meters, and motorboats up to 7,500 meters.

This capability ensures maritime operators can identify hazards before they pose a threat. It complements older marine safety systems that rely on radar and satellite signals.
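One way to picture these figures is as per-class detection thresholds. The ranges in the sketch below come from the article; the alert logic itself is a hypothetical illustration, not SEA.AI's implementation.

```python
# Maximum detection ranges in meters, per object class, from the article.
# The alerting logic is a made-up sketch, not SEA.AI's actual code.
DETECTION_RANGE_M = {
    "person": 700,
    "dinghy": 3_000,
    "motorboat": 7_500,
}

def should_alert(obj_class, distance_m):
    """Alert when a known object class is within its detectable range."""
    max_range = DETECTION_RANGE_M.get(obj_class)
    return max_range is not None and distance_m <= max_range
```

A person in the water at 500 meters would trigger an alert, while an unknown class or an object beyond its detectable range would not.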

SEA.AI solutions integrate with central marine display units from industry-leading manufacturers like Raymarine, B&G, Garmin and Furuno as well as Android and iOS-based mobile devices. This provides broad applicability across the maritime sector, from recreational vessels to commercial and government ships.

The NVIDIA Jetson edge AI platform is integral to SEA.AI’s success. The platform for robotics and embedded computing applications enables SEA.AI products to achieve unparalleled processing power and efficiency, setting a new standard in maritime safety by quickly detecting, analyzing and alerting operators to objects.

AI Integration and Real-Time Object Detection

SEA.AI uses NVIDIA’s AI and machine vision technology to offer real-time object detection and classification, providing maritime operators with immediate identification of potential hazards.

SEA.AI is bringing its approach to oceangoers of all kinds with three product lines.

One, SEA.AI Sentry, provides 360-degree situational awareness for commercial vessels and motor yachts with features like collision avoidance, object tracking and perimeter surveillance.

Another, SEA.AI Offshore, provides bluewater sailors with high-tech safety and convenience, with simplified installation and several editions to suit different detection and technical needs.

The third, SEA.AI Competition, offers reliable object detection for ocean racing and performance yacht sailors. Its ultra-lightweight design ensures maximum performance when navigating at high speeds.

With a growing team of more than 60 and a distribution network spanning over 40 countries, SEA.AI is charting a course to help ensure every journey on the waves is safer.

Read More

AI Drives Future of Transportation at Asia’s Largest Automotive Show

The latest trends and technologies in the automotive industry are in the spotlight at the Beijing International Automotive Exhibition, aka Auto China, which opens to the public on Saturday, April 27.

An array of NVIDIA auto partners is embracing this year’s theme, “New Era, New Cars,” by making announcements and showcasing their latest offerings powered by NVIDIA DRIVE, the platform for AI-defined vehicles.

NVIDIA Auto Partners Announce New Vehicles and Technologies

Image courtesy of JIYUE.

Electric vehicle (EV) makers Chery (booth E107) and JIYUE (booth W206), a joint venture between Baidu and Geely (booth W204), announced they have adopted the next-generation NVIDIA DRIVE Thor centralized car computer.

DRIVE Thor will integrate the new NVIDIA Blackwell GPU architecture, designed for transformer, large language model and generative AI workloads.

In addition, a number of automakers are building next-gen vehicles on NVIDIA DRIVE Orin, including:

smart, a joint venture between Mercedes-Benz and Geely, previewed its largest and most spacious model to date, an electric SUV called #5. It will be built on its Pilot Assist 3.0 intelligent driving-assistance platform, powered by NVIDIA DRIVE Orin, which supports point-to-point automatic urban navigation. smart #5 will be available for purchase in the second half of this year. smart will be at booth E408.

NIO, a pioneer in the premium smart EV market, unveiled its updated ET7 sedan, featuring upgraded cabin intelligence and smart-driving capabilities. NIO also showcased its 2024 ET5 and ES7. All 2024 models are equipped with four NVIDIA DRIVE Orin systems-on-a-chip (SoCs). Intelligent-driving capabilities in urban areas will fully launch soon. NIO will be at booth E207.

Image courtesy of GWM.

GWM revealed the WEY Blue Mountain (Lanshan) Intelligent Driving Edition, its luxury, high-end SUV. This upgraded vehicle is built on GWM’s Coffee Pilot Ultra intelligent-driving system, powered by NVIDIA DRIVE Orin, and can support features such as urban navigate-on-autopilot (NOA) and cross-floor memory parking. GWM will be at booth E303.

XPENG, a designer and manufacturer of intelligent EVs, announced that it is streamlining the design workflow of its flagship XPENG X9 using the NVIDIA Omniverse platform. In March, XPENG announced it will adopt NVIDIA DRIVE Thor for its next-generation EV fleets. XPENG will be at booth W402.

Innovation on Display

On the exhibition floor, NVIDIA partners are showcasing their NVIDIA DRIVE-powered vehicles:

Image courtesy of BYD.

BYD, DENZA and YANGWANG are featuring their latest vehicles built on NVIDIA DRIVE Orin. The largest EV maker in the world, BYD is building both its Ocean and Dynasty series on NVIDIA DRIVE Orin. In addition, BYDE, a subsidiary of BYD, will tap into the NVIDIA Isaac and NVIDIA Omniverse platforms to develop tools and applications for virtual factory planning and retail configurators. BYD will be at booth W106, DENZA at W408 and YANGWANG at W105.

DeepRoute.ai is showcasing its new intelligent-driving platform, DeepRoute IO, and highlighting its end-to-end model. Powered by NVIDIA DRIVE Orin, the first mass-produced car built on DeepRoute IO will focus on assisted driving and parking. DeepRoute.ai will be at booth W4-W07.

Hyper, a luxury brand owned by GAC AION, is displaying its latest Hyper GT and Hyper HT models, powered by NVIDIA DRIVE Orin. These vehicles feature advanced level 2+ driving capabilities in high-speed environments. Hyper recently announced it selected DRIVE Thor for its next-generation EVs with level 4 driving capabilities. Hyper will be at booth W310.

IM Motors is exhibiting the recently launched L6 Super Intelligent Vehicle. The entire lineup of the IM L6 is equipped with NVIDIA DRIVE Orin to power intelligent driving abilities, including urban NOA features. IM Motors will be at booth W205.

Li Auto is showcasing its recently released L6 model, as well as L7, L8, L9 and MEGA. Models equipped with Li Auto’s AD Max system are powered by dual NVIDIA DRIVE Orin SoCs, which help bring ever-upgrading intelligent functionality to Li Auto’s NOA feature. Li Auto will be at booth E405.

Image courtesy of Lotus.

Lotus is featuring a full range of vehicles, including the Emeya electric hyper-GT powered by NVIDIA DRIVE Orin. Lotus will be at booth E403.

Mercedes-Benz is exhibiting its Concept CLA Class, the first car to be developed on the all-new Mercedes-Benz Modular Architecture. The Concept CLA Class fully runs on MB.OS, which handles infotainment, automated driving, comfort and charging. Mercedes-Benz will be at booth E404.

Momenta is rolling out a new NVIDIA DRIVE Orin solution to accelerate commercialization of urban NOA capabilities at scale.

Image courtesy of Polestar.

Polestar is featuring the Polestar 3, the Swedish car manufacturer’s battery electric mid-size luxury crossover SUV powered by DRIVE Orin. Polestar will be at booth E205.

SAIC R Motors is showcasing the Rising Auto R7 and F7 powered by NVIDIA DRIVE Orin at booth W406.

WeRide is exhibiting Chery’s Exeed Sterra ET SUV and ES sedan, both powered by NVIDIA DRIVE Orin. The vehicles demonstrate progress made by Bosch and WeRide on level 2 to level 3 autonomous-driving technology. WeRide will be at booth E1-W04.

Xiaomi is displaying its NVIDIA DRIVE Orin-powered SU7 and “Human x Car x Home” smart ecosystem, designed to seamlessly connect people, cars and homes, at booth W203.

ZEEKR unveiled its SEA-M architecture and is showcasing the ZEEKR 007 powered by NVIDIA DRIVE Orin at booth E101.

Auto China runs through Saturday, May 4, at the China International Exhibition Center in Beijing.

Learn more about the industry-leading designs and technologies NVIDIA is developing with its automotive partners.

Featured image courtesy of JIYUE.

Read More

Blast From the Past: Stream ‘StarCraft’ and ‘Diablo’ on GeForce NOW

Support for Battle.net on GeForce NOW expands this GFN Thursday, as titles from the iconic StarCraft and Diablo series come to the cloud.

StarCraft Remastered, StarCraft II, Diablo II: Resurrected and Diablo III are part of 16 new games joining the GeForce NOW library of more than 1,900 titles.

Plus, a new update rolling out for members this week brings AV1 streaming to Mac M3 computers. This feature will improve game-streaming quality for members on M3, M3 Pro and M3 Max devices.

Plenty of Space in Hell

Dive into the original Blizzard games that set the stage for real-time strategy and action role-playing games (RPGs). StarCraft Remastered, StarCraft II, Diablo II: Resurrected and Diablo III bring galactic warfare, epic quests and legendary battles to the cloud.

StarCraft Remastered on GeForce NOW
Oh my Zerg.

In StarCraft Remastered, command one of three races — Terran, Zerg or Protoss — as they desperately struggle for survival. Build bases, gather resources and engage in intense battles using unique units and strategies.

Time to Pylon to the cloud.

Continue the saga with StarCraft II, with enhanced graphics and extended storytelling. Save the galaxy from emergent threats in full-length Terran, Zerg and Protoss campaigns. Take charge of all multiplayer units solo in Versus Mode, team up with a friend for Co-Op Missions or explore community-created game modes in the Arcade.

Diablo III on GeForce NOW
The fires of hell heat up the cloud once again.

In Diablo III, become a hero to battle the forces of darkness, uncover ancient secrets and face powerful foes in the action RPG set in the world of Sanctuary. With various character classes, intense combat and a rich loot system, members can experience a gripping single-player experience and cooperative multiplayer adventures.

Diablo II on GeForce NOW
Remastered goodness.

Pursue the mysterious Dark Wanderer and battle the denizens of hell in the remastered action RPG Diablo II: Resurrected. The title’s classic Diablo gameplay — enhanced with stunning 3D visuals for all the environments, characters and monsters — enables a nostalgic, high-quality return to hell.

Stream all the action at up to 4K resolution or up to 240 frames per second with an Ultimate membership. These top games join the Battle.net games first added to GeForce NOW, including Diablo IV, Overwatch 2, Call of Duty HQ and Hearthstone.

Remember the Cloud

Remnant II DLC on GeForce NOW
Unrelenting odds are no problem for the cloud.

The second downloadable content (DLC) for Gunfire Games’ Remnant II is available for members to stream. Experience a brand-new storyline, area, weapons, bosses and more in The Forgotten Kingdom.

Piece together the forgotten history of the lost tribe of Yaesha in an attempt to quell the vengeful wrath of Lydusa, an ancient stone spirit. Navigate the lingering traces of torment, treachery and death that haunt the land’s once-proud ziggurats. Traverse new dungeons, acquire powerful gear — including a new Archetype, “The Invoker” — meet unexpected allies and face new threats to return a semblance of peace to the forgotten kingdom.

GeForce NOW members will be able to stream the DLC without waiting around for downloads. Uncover the secrets of the lost tribe with an Ultimate membership for eight-hour gaming sessions and support for ultrawide resolutions.

New Adventures

Manor Lords on GeForce NOW
Grow from a humble hamlet to a hub for the kingdom in “Manor Lords.”

Guide a medieval village as it grows into a bustling city in Manor Lords, streaming this week on GeForce NOW. Manage resources and production chains in this historically accurate city builder while expanding the land through large-scale tactical battles.

Check out the full list of new games this week:

  • Dead Island 2 (New release on Steam, April 22)
  • Bellwright (New release on Steam, April 23)
  • Phantom Fury (New release on Steam, April 23)
  • Oddsparks: An Automation Adventure (New release on Steam, April 24)
  • Age of Water (New release on Steam, April 25)
  • Manor Lords (New release on Steam and Xbox, April 26, available on PC Game Pass)
  • 9-Bit Armies: A Bit Too Far (Steam)
  • Diablo II: Resurrected (Battle.net)
  • Diablo III (Battle.net)
  • Dragon’s Dogma 2 Character Creator & Storage (Steam)
  • Islands of Insight (Steam)
  • Metaball (Steam)
  • StarCraft Remastered (Battle.net)
  • StarCraft II (Battle.net)
  • Stargate: Timekeepers (Steam)
  • Tortuga – A Pirate’s Tale (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Into the Omniverse: Unlocking the Future of Manufacturing With OpenUSD on Siemens Teamcenter X

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Universal Scene Description, aka OpenUSD, is elevating the manufacturing game. Siemens, a leader in industrial technology, has embraced OpenUSD as a cornerstone of its digital transformation journey, using it to help bridge the gap between physical and virtual worlds.

Siemens is adding support for OpenUSD in its Siemens Xcelerator platform applications, starting with Teamcenter X software.

The integration empowers manufacturers to create photorealistic, robust digital twins that mirror real-world counterparts with unprecedented fidelity and efficiency. This allows for optimized resource utilization, minimized waste and enhanced product quality through comprehensive simulation and analysis — all of which align with sustainability and quality objectives.

For a company such as Siemens — one whose software touches all parts of the manufacturing cycle — digitalization can mean helping customers save time and costs, streamline workflows and reduce risk of manufacturing defects.

Ian Fisher, a member of Siemens Digital Industries Software team, is no stranger to the impact of embracing digital transformation — especially one powered by OpenUSD and generative AI.

“We are an industrial company where data is king,” he said. “OpenUSD comes in from the media side of the world, and we are looking to bring its openness and flexibility into the industrial world.”

Enterprises of all sizes depend on Siemens’ Teamcenter software, part of the Siemens Xcelerator platform, to develop and deliver products at scale. By connecting NVIDIA Omniverse — a platform of APIs and services based on OpenUSD — with Teamcenter X, Siemens’ cloud-based product lifecycle management software, engineering teams can make their physics-based digital twins more photorealistic and immersive, improving accuracy and minimizing waste and errors within workflows.

Siemens’ adoption of OpenUSD means that companies like HD Hyundai, a leader in sustainable ship manufacturing, can consolidate and visualize complex engineering projects directly within Teamcenter X. Find out more in the demo:

OpenUSD is touching other parts of Siemens as well. Siemens produces inverters, drive controllers and motors for more than 30,000 customers worldwide. Its lead electronics plant, GWE, in Erlangen, Germany, has been developing use cases from AI-enabled computer vision for defect detection to training pick-and-place robots.

One of their main challenges has been acquiring data to train the AI models that fuel these use cases. By building custom synthetic data generation pipelines using Omniverse Replicator, powered by OpenUSD, the engineers were able to generate large sets of diverse training data by varying many parameters including color, texture, background, lighting and more — allowing them to not only bootstrap but also quickly iterate on their AI models.
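Omniverse Replicator's own API isn't shown in the article; as a minimal illustration of the domain-randomization idea, the sketch below samples the kinds of parameters mentioned (color, texture, background, lighting) to produce a reproducible set of varied scene configurations. All parameter pools are invented placeholders; in Replicator these would be randomized materials, backgrounds and light rigs in a rendered 3D scene.

```python
import random

# Illustrative stand-in parameter pools, not real Replicator assets.
COLORS = ["red", "green", "blue", "grey"]
TEXTURES = ["matte", "glossy", "brushed-metal"]
BACKGROUNDS = ["conveyor", "bench", "bin"]

def sample_scene(rng):
    """Draw one randomized scene configuration for synthetic data generation."""
    return {
        "color": rng.choice(COLORS),
        "texture": rng.choice(TEXTURES),
        "background": rng.choice(BACKGROUNDS),
        "light_intensity": round(rng.uniform(0.2, 1.0), 2),
    }

def generate_dataset(n, seed=0):
    """Generate n scene configurations; the seed makes runs reproducible."""
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]
```

Varying many parameters at once is what gives the training set its diversity, and seeding the generator lets the engineers regenerate or extend the same dataset when iterating on a model.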

Committed to a future of widespread OpenUSD integration, Siemens was one of eight new general members that joined the Alliance for OpenUSD (AOUSD) last month. The organization is dedicated to the interoperability of 3D content through standardization.

Watch Fisher and other special guests discuss the impact of OpenUSD on industrial digitalization workflows in the livestream replay.

Get Plugged Into the World of OpenUSD

Siemens and OpenUSD took center stage this week at Hannover Messe, the world’s leading industrial trade fair. Siemens CEO Roland Busch and Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, shared their vision on the potential of OpenUSD for customers in all industries.

For more on how Siemens is using OpenUSD to build and test complex AI-based automation systems completely virtually, watch the replay of the GTC session, “Virtual Commissioning of AI Vision Systems With OpenUSD.” All other sessions from GTC’s OpenUSD Day are available for viewing on demand.

Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, Medium and X. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels. 

Featured image courtesy of Siemens, HD Hyundai.

Read More

How Virtual Factories Are Making Industrial Digitalization a Reality

To address the shift to electric vehicles, increased semiconductor demand, manufacturing onshoring, and ambitions for greater sustainability, manufacturers are investing in new factory developments and re-engineering their existing facilities.

These projects often run over budget and schedule, due to complex and manual planning processes, legacy technology infrastructure, and disconnected tools, data and teams.

To address these challenges, manufacturers are embracing digitalization and virtual factories, powered by technologies like digital twins, the Universal Scene Description (OpenUSD) ecosystem and generative AI, that enable new possibilities from planning to operations.

What Is a Virtual Factory?

A virtual factory is a physically accurate representation of a real factory. These digital twins of factories allow manufacturers to model, simulate, analyze and optimize their production processes, resources and operations without the need for a physical prototype or pilot plant.

Benefits of Virtual Factories

Virtual factories unlock many benefits and possibilities for manufacturers, including:

  • Streamlined Communication: Instead of teams relying on in-person meetings and static planning documents for project alignment, virtual factories streamline communication and ensure that critical design and operations decisions are informed by the most current data.
  • Contextualized Planning: During facility design, construction and commissioning, virtual factories allow project stakeholders to visualize designs in the context of the entire facility and production process. Planning and operations teams can compare and verify built structures with the virtual designs in real time and decrease costs by identifying errors and incorporating feedback early in the review process.
  • Optimized Facility Designs: Connecting virtual factories to simulations of processes and discrete events enables teams to optimize facility designs for production and material flow, ergonomic work design, safety and overall utilization.
  • Intelligent and Optimized Operations: Operations teams can integrate their virtual factories with valuable production data from Internet of Things technology at the edge, and tap AI to drive further optimizations.

Virtual Factories: A Testing Ground for AI and Robotics

Robotics developers are increasingly using virtual factories to train and test AI and autonomous systems that run in physical factories. For example, developers and manufacturing teams can simulate digital workers, autonomous mobile robots (AMRs), vision AI agents and sensors, creating a centralized map of worker activity throughout a facility. By fusing data from simulated camera streams with multi-camera tracking, developers can generate occupancy maps that inform optimal AMR routes.
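The fusion step above can be illustrated with a toy sketch: assuming calibration and tracking have already mapped each camera’s detections into a shared floor-plane coordinate frame, per-camera positions are binned into grid cells to form an occupancy count map. The data layout and cell size here are hypothetical.

```python
from collections import Counter

def build_occupancy_map(detections, cell_size=1.0):
    """Fuse (x, y) detections from multiple cameras into per-cell counts.

    `detections` maps a camera id to a list of floor-plane positions
    expressed in a shared world frame (meters).
    """
    counts = Counter()
    for camera_id, positions in detections.items():
        for x, y in positions:
            cell = (int(x // cell_size), int(y // cell_size))
            counts[cell] += 1
    return counts

def busiest_cells(occupancy, top_n=3):
    """Cells with the most observed activity: candidates for AMRs to avoid."""
    return [cell for cell, _ in occupancy.most_common(top_n)]
```

A route planner could then weight these counts when scoring candidate AMR paths; a production system would additionally deduplicate the same person seen by two cameras, which this sketch deliberately omits.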

Developers can also use these physically accurate virtual factories to train and test AI agents capable of managing their robot fleets, to ensure AI-enabled robots can adapt to real-world unpredictability and to identify streamlined configurations for human-robot collaboration.

What Are the Foundations of a Virtual Factory?

Building large-scale, physically accurate virtual factories that unlock these transformational possibilities requires bringing together many tools, data formats and technologies to harmonize the representation of real-world aspects in the digital world.

Originally invented by Pixar Animation Studios, OpenUSD encompasses a collection of tools and capabilities that enable the data interoperability developers and manufacturers require to achieve their digitalization goals.

OpenUSD’s core superpower is flexible data modeling. 3D data from source applications can be combined with a variety of other data, including computer-aided design files, live sensor feeds, documentation and maintenance records, through a unified data pipeline. OpenUSD enables developers to share these data types across different simulation tools and AI models, providing insights for all stakeholders. Data can be synced from the factory floor to the digital twin, surfacing real-time insights for factory managers and teams.
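A rough intuition for this kind of unified pipeline is USD’s layered composition, in which stronger layers override weaker ones attribute by attribute. The toy sketch below mimics that behavior with plain dictionaries; it is an analogy, not the pxr/OpenUSD API, and all attribute names are made up.

```python
def compose(layers):
    """Toy analogue of USD-style layer composition.

    `layers` is ordered strongest-first; for each attribute, the
    strongest layer that expresses an opinion wins.
    """
    composed = {}
    for layer in reversed(layers):   # apply weakest first...
        composed.update(layer)       # ...so stronger layers overwrite
    return composed

# Hypothetical data sources feeding one robot-arm digital twin.
cad_layer     = {"arm/mesh": "arm.step", "arm/max_payload_kg": 10}
sensor_layer  = {"arm/joint_temp_c": 41.7}
session_layer = {"arm/max_payload_kg": 8}   # operator override, strongest

twin = compose([session_layer, sensor_layer, cad_layer])
# "arm/max_payload_kg" resolves to 8: the session override wins.
```

This is why CAD geometry, live sensor values and per-session edits can coexist non-destructively: each source stays in its own layer, and composition resolves the final value.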

By developing virtual factory solutions on OpenUSD, developers can enhance collaboration for factory teams, allowing them to review plans, discuss optimization opportunities and make decisions in real time.

To support and accelerate the development of the OpenUSD ecosystem, Pixar, Adobe, Apple, Autodesk and NVIDIA formed the Alliance for OpenUSD, which is building open standards for USD in core specification, materials, geometry and more.

Industrial Use Cases for Virtual Factories

To unlock the potential of virtual factories, industry leaders including Autodesk, Continental, Pegatron, Rockwell Automation, Siemens and Wistron are developing virtual-factory solutions on OpenUSD and NVIDIA Omniverse, a platform of application programming interfaces (APIs) and software development kits that enable developers to build applications for complex 3D and industrial digitalization workflows based on OpenUSD.

FlexSim, an Autodesk company, uses OpenUSD to enable factory teams to analyze, visualize and optimize real-world processes with its simulation modeling for complex systems and operations. The discrete-event simulation software provides an intuitive drag-and-drop interface to create 3D simulation models, account for real-world variability, run “what-if” scenarios and perform in-depth analyses.
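Discrete-event simulation of the kind FlexSim provides can be illustrated, at a vastly reduced scale, with a single first-come-first-served workstation: events are pulled from a queue in time order, and each part’s completion time falls out of its arrival time and the station’s service time. This is a minimal stdlib sketch of the technique, not FlexSim’s engine.

```python
import heapq

def simulate_station(arrival_times, service_time):
    """Minimal discrete-event simulation of one workstation.

    Parts arrive at the given times and are processed one at a time,
    first come first served. Returns each part's completion time.
    """
    events = [(t, i) for i, t in enumerate(arrival_times)]
    heapq.heapify(events)  # process events in time order
    station_free_at = 0.0
    completions = {}
    while events:
        arrival, part = heapq.heappop(events)
        start = max(arrival, station_free_at)  # wait if station is busy
        station_free_at = start + service_time
        completions[part] = station_free_at
    return completions

# Three parts arriving every minute at a 2-minute station:
print(simulate_station([0, 1, 2], service_time=2.0))
# -> {0: 2.0, 1: 4.0, 2: 6.0}
```

Real tools layer random service times, routing, resources and 3D visualization on top of exactly this event-queue core, which is what makes “what-if” scenario runs cheap compared to experiments on a physical line.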

Developers at Continental, a leading German automotive technology company, developed ContiVerse, a factory planning and manufacturing operations application on OpenUSD and NVIDIA Omniverse. The application helps Continental optimize factory layouts and plan production processes collaboratively, leading to an expected 13% reduction in time to market. 

Partnering with software company SoftServe, Continental also developed Industrial Co-Pilot, which combines AI-driven insights with immersive visualization to deliver real-time guidance and predictive analytics to engineers. This is expected to reduce maintenance effort and downtime by 10%.

Pegatron, one of the world’s largest manufacturers of smartphones and consumer electronics, is developing virtual-factory solutions on OpenUSD to accelerate the development of new factories — as well as to minimize change orders, optimize operations and maximize production-line throughput in existing facilities.

Rockwell Automation is integrating NVIDIA Omniverse Cloud APIs and OpenUSD with its Emulate3D digital twin software to bring manufacturing teams data interoperability, live collaboration and physically based visualization for designing, building and operating industrial-scale digital twins of production systems.

Siemens, a leading technology company for automation, digitalization and sustainability and a member of the Alliance for OpenUSD, is adopting Omniverse Cloud APIs within its Siemens Xcelerator Platform, starting with Teamcenter X, the industry-leading cloud-based product lifecycle management software. This will help teams design, build and test next-generation products, manufacturing processes and factories virtually, before they’re built in the physical world.

Wistron, a leading global technology service provider and electronics manufacturer, is digitalizing new and existing factories with OpenUSD. By developing virtual-factory solutions on NVIDIA Omniverse, Wistron enables its factory teams to collaborate remotely to refine layout configurations, optimize surface mount technology and in-circuit testing lines, and transform product-on-dock testing. 

With these solutions, Wistron has achieved a 51% boost in worker efficiency and 50% reduction in production process times. Layout optimization and real-time monitoring have decreased defect rates by 40%. And construction time on Wistron’s new NVIDIA DGX factory was cut in half, from about five months to just two and a half months.

Learn more at the Virtual Factory Use Case page, where a reference architecture provides an overview of components and capabilities developers should consider when developing virtual-factory solutions.

Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and X. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels. 

Read More

NVIDIA to Acquire GPU Orchestration Software Provider Run:ai

To help customers make more efficient use of their AI computing resources, NVIDIA today announced it has entered into a definitive agreement to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider.

Customer AI deployments are becoming increasingly complex, with workloads distributed across cloud, edge and on-premises data center infrastructure.

Managing and orchestrating generative AI, recommender systems, search engines and other workloads requires sophisticated scheduling to optimize performance at the system level and on the underlying infrastructure.

Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on premises, in the cloud or in hybrid environments.

The company has built an open platform on Kubernetes, the orchestration layer for modern AI and cloud infrastructure. It supports all popular Kubernetes variants and integrates with third-party AI tools and frameworks.

Run:ai customers include some of the world’s largest enterprises across multiple industries, which use the Run:ai platform to manage data-center-scale GPU clusters.

“Run:ai has been a close collaborator with NVIDIA since 2020 and we share a passion for helping our customers make the most of their infrastructure,” said Omri Geller, Run:ai cofounder and CEO. “We’re thrilled to join NVIDIA and look forward to continuing our journey together.”

The Run:ai platform provides AI developers and their teams:

  • A centralized interface to manage shared compute infrastructure, enabling easier and faster access for complex AI workloads.
  • Functionality to add users, organize them into teams, grant access to cluster resources, and control quotas, priorities and pools, with monitoring and reporting on resource use.
  • The ability to pool GPUs and share computing power — from fractions of GPUs to multiple GPUs or multiple nodes of GPUs running on different clusters — for separate tasks.
  • Efficient GPU cluster resource utilization, enabling customers to gain more from their compute investments.
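The announcement doesn’t describe Run:ai’s scheduler internals, but the idea of granting fractions of pooled GPUs can be sketched with a greedy toy allocator. It assumes a pool mapping GPU ids to free fractions (1.0 means a whole idle GPU) and a request for some total GPU fraction; everything here is an illustrative assumption.

```python
def allocate(pool, request, tol=1e-9):
    """Greedy toy allocator for fractional GPU requests.

    `pool` maps GPU id -> free fraction (1.0 = whole idle GPU).
    Returns {gpu_id: fraction_taken} and debits the pool, or returns
    None (pool untouched) if the request cannot be satisfied.
    """
    grant = {}
    remaining = request
    # Prefer the GPUs with the most free capacity to limit fragmentation.
    for gpu, free in sorted(pool.items(), key=lambda kv: -kv[1]):
        if remaining <= tol:
            break
        take = min(free, remaining)
        if take > 0:
            grant[gpu] = take
            remaining -= take
    if remaining > tol:
        return None          # couldn't fit; leave the pool unchanged
    for gpu, take in grant.items():
        pool[gpu] -= take
    return grant

pool = {"gpu0": 0.5, "gpu1": 1.0}
print(allocate(pool, 1.25))  # spans a whole GPU plus a fraction of another
```

A real scheduler also handles preemption, quotas per team and node topology; the sketch only shows why pooling lets a 1.25-GPU job run on capacity no single device could provide.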

NVIDIA will continue to offer Run:ai’s products under the same business model for the immediate future. And NVIDIA will continue to invest in the Run:ai product roadmap as part of NVIDIA DGX Cloud, an AI platform co-engineered with leading clouds for enterprise developers, offering an integrated, full-stack service optimized for generative AI.

NVIDIA DGX and DGX Cloud customers will gain access to Run:ai’s capabilities for their AI workloads, particularly for large language model deployments. Run:ai’s solutions are already integrated with NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Base Command, NGC containers, and NVIDIA AI Enterprise software, among other products.

NVIDIA’s accelerated computing platform and Run:ai’s platform will continue to support a broad ecosystem of third-party solutions, giving customers choice and flexibility.

Together with Run:ai, NVIDIA will enable customers to have a single fabric that accesses GPU solutions anywhere. Customers can expect to benefit from better GPU utilization, improved management of GPU infrastructure and greater flexibility from the open architecture.

Read More

Forecasting the Future: AI2’s Christopher Bretherton Discusses Using Machine Learning for Climate Modeling

Can machine learning help predict extreme weather events and climate change? Christopher Bretherton, senior director of climate modeling at the Allen Institute for Artificial Intelligence, or AI2, explores the technology’s potential to enhance climate modeling with AI Podcast host Noah Kravitz in an episode recorded live at the NVIDIA GTC global AI conference. Bretherton explains how machine learning helps overcome the limitations of traditional climate models and underscores the role of localized predictions in empowering communities to prepare for climate-related risks. Through ongoing research and collaboration, Bretherton and his team aim to improve climate modeling and enable society to better mitigate and adapt to the impacts of climate change.

Stay tuned for more episodes recorded live from GTC, and watch the replay of Bretherton’s GTC session on using machine learning for climate modeling.

Time Stamps

  • 2:03: What is climate modeling and how can it prepare us for climate change?
  • 5:28: How can machine learning help enhance climate modeling?
  • 7:21: What were the limitations of traditional climate models?
  • 10:24: How does a climate model work?
  • 12:11: What information can you get from a climate model?
  • 13:26: What are the current climate models telling us about the future?
  • 15:56: How does machine learning help enable localized climate modeling?
  • 18:39: What, if anything, can individuals or small communities do to prepare for what climate change has in store for us?
  • 25:59: How do you measure the accuracy or performance of an emulator that’s doing something like climate modeling out into the future?

You Might Also Like…

ITIF’s Daniel Castro on Energy-Efficient AI and Climate Change – Ep. 215

AI-driven change is in the air, as are concerns about the technology’s environmental impact. In this episode of NVIDIA’s AI Podcast, Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of its Center for Data Innovation, speaks with host Noah Kravitz about the motivation behind his AI energy use report, which addresses misconceptions about the technology’s energy consumption.

DigitalPath’s Ethan Higgins on Using AI to Fight Wildfires – Ep. 211

DigitalPath is igniting change in the Golden State, using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath system architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego.

Anima Anandkumar on Using Generative AI to Tackle Global Challenges – Ep. 203

Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anima Anandkumar on generative AI’s potential to make splashes in the scientific community.

How Alex Fielding and Privateer Space Are Taking on Space Debris – Ep. 196

In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space. Privateer Space, Fielding’s latest venture, aims to address one of the most daunting challenges facing our world today: space debris.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Read More