Johnson & Johnson MedTech Works With NVIDIA to Broaden AI’s Reach in Surgery

AI — already used to connect, analyze and offer predictions based on operating room data — will be critical to the future of surgery, boosting operating room efficiency and clinical decision-making.

That’s why NVIDIA is working with Johnson & Johnson MedTech to test new AI capabilities for the company’s connected digital ecosystem for surgery. It aims to enable open innovation and accelerate the delivery of real-time insights at scale to support medical professionals before, during and after procedures.

J&J MedTech is in 80% of the world’s operating rooms and trains more than 140,000 healthcare professionals each year through its education programs.

By bringing together its legacy and digital ecosystem in surgery with NVIDIA’s leading AI solutions — including the NVIDIA IGX edge computing platform and the NVIDIA Holoscan edge AI platform for building medical devices — J&J MedTech can accelerate the infrastructure needed to deploy AI-powered software applications for surgery. IGX and Holoscan can support secure, real-time processing from devices across the operating room to provide clinical insights and improve surgical outcomes.

Unveiled at NVIDIA GTC, the global AI conference taking place March 18-21 in San Jose, Calif., and online, this work could also facilitate the deployment of third-party models and applications developed across the digital surgery ecosystem by providing a common AI compute platform.

“AI models are currently being created by experts in surgery in various parts of the world,” said Shan Jegatheeswaran, vice president and global head of digital at J&J MedTech. “If we can create a trusted, open ecosystem that enables and accelerates coordination, it would create a flywheel of innovation where different groups can collaborate and connect at scale, improving access to advanced analytics across the surgical experience.”

An Open Ecosystem for AI Innovation: Building on NVIDIA Holoscan and IGX 

J&J MedTech is working with NVIDIA to test how industrial-grade edge AI capabilities purpose-built for medical environments could benefit surgery.

“Our connected digital ecosystem will help break down the traditional barriers to entry for developers seeking to build applications and deploy analytics in the operating room,” Jegatheeswaran said. “We’re making it simpler for those who want to participate in the surgical workflow by eliminating the heavy lifting of building a secure, enterprise-grade platform.”

NVIDIA Holoscan accelerates the development and deployment of real-time AI applications to process data streams.

Holoscan includes reference pipelines to build AI applications for a variety of medical use cases, including endoscopy, ultrasound and other sensors. It runs on NVIDIA IGX, which includes NVIDIA Jetson Orin modules, NVIDIA RTX A6000 GPUs and NVIDIA ConnectX networking technology to enable high-speed data streaming from medical devices or operating room video feeds.
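
For a sense of what such a pipeline looks like in code, here is a minimal sketch using the Holoscan SDK’s Python API: it replays a recorded video stream and renders it, the skeleton into which inference operators would be inserted. The data directory and file basename are illustrative assumptions, not artifacts of J&J MedTech’s ecosystem.

```python
# Minimal Holoscan pipeline sketch: replay a recorded stream and render it.
# A real surgical app would insert inference operators (e.g., a
# TensorRT-backed segmentation model) between source and sink.
from holoscan.core import Application
from holoscan.operators import HolovizOp, VideoStreamReplayerOp


class EndoscopyViewerApp(Application):
    def compose(self):
        # Source: replays frames from disk; on IGX hardware a live
        # capture operator fed by the endoscope would stand in here.
        source = VideoStreamReplayerOp(
            self,
            name="replayer",
            directory="data/endoscopy",  # hypothetical sample-data path
            basename="surgical_video",   # hypothetical recording name
            realtime=True,
        )
        # Sink: renders each frame to the operating room display.
        visualizer = HolovizOp(self, name="holoviz")
        # Wire the replayer's output port to Holoviz's receivers port.
        self.add_flow(source, visualizer, {("output", "receivers")})


if __name__ == "__main__":
    EndoscopyViewerApp().run()
```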

NVIDIA supports the IGX software stack with NVIDIA AI Enterprise, the enterprise operating system for production-grade AI.

Fueling Surgical AI With Device Data

The J&J MedTech team envisions NVIDIA-accelerated edge analytics behind its connected digital ecosystem enabling AI-powered applications fueled by device, patient and other surgical data.

Developers could leverage continuous learning, where an algorithm improves based on data collected by the deployed device. Real-world footage collected by an endoscope, for example, could be used to refine an AI model that identifies organs, tissue and potential tumors in real time on an operating room display to support clinical decision-making.
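
At its core, that kind of continuous learning is periodic fine-tuning on newly collected data. Here is a generic PyTorch sketch of one refinement pass; the model choice, tensor shapes and hyperparameters are placeholders, not details of J&J MedTech’s or NVIDIA’s stack.

```python
# Generic fine-tuning sketch: refine a pretrained segmentation model on
# newly collected, de-identified endoscope frames. All data here is random
# placeholder tensors standing in for a curated clinical dataset.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

frames = torch.randn(64, 3, 224, 224)        # RGB frames
masks = torch.randint(0, 2, (64, 224, 224))  # 0 = background, 1 = finding
loader = DataLoader(TensorDataset(frames, masks), batch_size=8, shuffle=True)

# Any pretrained segmentation network could stand in for the deployed model.
model = torch.hub.load("pytorch/vision", "deeplabv3_resnet50", weights="DEFAULT")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR: refine, don't retrain
loss_fn = nn.CrossEntropyLoss()

model.train()
for batch_frames, batch_masks in loader:
    optimizer.zero_grad()
    logits = model(batch_frames)["out"]  # torchvision DeepLabV3 returns a dict
    loss = loss_fn(logits, batch_masks)
    loss.backward()
    optimizer.step()
```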

“Surgical technologies will get more intelligent over time, bringing the power of advanced analytics to surgeons and hospitals,” said Jegatheeswaran. “A collection of AI models could act like driver-assistance technology for surgeons, amplifying their ability to deliver care while reducing cognitive load.”

One example is AI that removes personally identifiable information from surgical videos so they can be used downstream for research purposes — or, when processed in real time, to enable hospitals to bring in external experts through telepresence to consult during a surgery while maintaining patient privacy.

Future applications could enable surgeons to interact with chatbots to gain insights about a patient’s medical history or best practices for handling certain complications. Other models could improve operating room efficiency by using video feeds to understand when a procedure is almost complete, alerting the next surgical team that a room will soon be available.

Discover the latest in AI and healthcare at GTC, running in San Jose, Calif., and online through Thursday, March 21. Tune in to a special address on generative AI in healthcare delivered by Kimberly Powell, vice president of healthcare at NVIDIA, on Tuesday at 8 a.m. PT.

Watch the GTC keynote address by NVIDIA founder and CEO Jensen Huang.

BNY Mellon, First Global Bank to Deploy AI Supercomputer Powered by NVIDIA DGX SuperPOD With DGX H100

Moving fast to accelerate its AI journey, BNY Mellon, a global financial services company celebrating its 240th anniversary, revealed Monday that it has become the first major bank to deploy an NVIDIA DGX SuperPOD with DGX H100 systems.

Thanks to the strong collaborative relationship between NVIDIA Professional Services and BNY Mellon, the team was able to install and configure the DGX SuperPOD ahead of typical timelines.

The system, equipped with dozens of NVIDIA DGX systems and NVIDIA InfiniBand networking and based on the DGX SuperPOD reference architecture, delivers a level of processing performance BNY Mellon has never had before.

“Key to our technology strategy is empowering our clients through scalable, trusted platforms and solutions,” said BNY Mellon Chief Information Officer Bridget Engle. “By deploying NVIDIA’s AI supercomputer, we can accelerate our processing capacity to innovate and launch AI-enabled capabilities that help us manage, move and keep our clients’ assets safe.”

Powered by its new system, BNY Mellon plans to use NVIDIA AI Enterprise software to support the build and deployment of AI applications and manage AI infrastructure.

NVIDIA AI Software: A Key Component in BNY Mellon’s Toolbox

Founded by Alexander Hamilton in 1784, BNY Mellon oversees nearly $50 trillion in assets for its clients and helps companies and institutions worldwide access the money they need, support governments in funding local projects, safeguard investments for millions of individuals and more.

BNY Mellon has long been at the forefront of AI and accelerated computing in the financial services industry. Its AI Hub has more than 20 AI-enabled solutions in production. These solutions support predictive analytics, automation and anomaly detection, among other capabilities.

While the firm recognizes that AI presents opportunities to enhance its processes and reduce risk across the organization, it is also actively working to manage the potential risks of AI through robust risk management and governance processes.

Some of the use cases supported by the DGX SuperPOD include deposit forecasting, payment automation, predictive trade analytics and end-of-day cash balance prediction.

More are coming. The company identified more than 600 opportunities in AI during a firmwide exercise last year, and dozens are already in development using such NVIDIA AI Enterprise software as NVIDIA NeMo, NVIDIA Triton Inference Server and NVIDIA Base Command.

Triton Inference Server is inference-serving software that streamlines AI inferencing, putting trained AI models to work in production.
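
As an illustration of that serving model, here is a sketch of a client querying a Triton-hosted model over HTTP; the model and tensor names are hypothetical and would come from the model’s configuration, not from BNY Mellon’s deployment.

```python
# Sketch: query a model hosted on a Triton Inference Server via HTTP.
# "my_model", "INPUT0" and "OUTPUT0" are hypothetical names defined in
# the served model's configuration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Prepare a batch of one 16-feature FP32 input.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# Run inference and read back the named output tensor.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```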

Base Command powers the DGX SuperPOD, delivering the best of NVIDIA software that enables businesses and their data scientists to accelerate AI development.

NeMo is an end-to-end platform for developing custom generative AI anywhere. It includes tools for training and retrieval-augmented generation, guardrailing toolkits, data curation tools and pretrained models, offering enterprises an easy, cost-effective and fast way to adopt generative AI.

Fueling Innovation Through Top Talent

With the new DGX SuperPOD, these tools will enable BNY Mellon to streamline and accelerate innovation within the firm and across the global financial system.

For several years, hundreds of data scientists, solutions architects and risk, control and compliance professionals have been using the NVIDIA DGX platform, which delivers the world’s leading solutions for enterprise AI development at scale.

The new NVIDIA DGX SuperPOD will help the company rapidly expand its on-premises AI infrastructure.

The new system also underscores the company’s commitment to adopting new technologies and attracting top talent across the world to help drive its innovation agenda forward.

NVIDIA Isaac Taps Generative AI for Manufacturing and Logistics Applications

The NVIDIA Isaac robotics platform is tapping into the latest generative AI and advanced simulation technologies to accelerate AI-enabled robotics.

At GTC today, NVIDIA announced Isaac Manipulator and Isaac Perceptor — a collection of foundation models, robotics tools and GPU-accelerated libraries.

On stage before a crowd of 10,000-plus, NVIDIA founder and CEO Jensen Huang demonstrated Project GR00T, which stands for Generalist Robot 00 Technology, a general-purpose foundation model for humanoid robot learning. Project GR00T leverages various tools from the NVIDIA Isaac robotics platform to create AI for humanoid robots.

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Huang. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps toward artificial general robotics.”

NVIDIA also announced a new computer for humanoid robots based on the NVIDIA Thor system-on-a-chip, and new tools for the NVIDIA Isaac robotics platform, including Isaac Lab for robot learning and NVIDIA OSMO for hybrid-cloud workflow orchestration, which are instrumental in the development of Project GR00T and foundation models for robots.

Introducing Isaac Manipulator for Robotic Arms

NVIDIA Isaac Manipulator offers state-of-the-art motion generation and modular AI capabilities for robotic arms, backed by a robust collection of foundation models and GPU-accelerated libraries.

Robotics developers can combine software components customized for specific tasks, enabling robotic arms to perceive and interact with their surroundings. By accelerating AI model training and task programming, this makes it possible to build scalable, repeatable workflows for dynamic manipulation tasks.

“Incorporating new tools for foundation model generation into the Isaac platform accelerates the development of smarter, more flexible robots that can be generalized to do many tasks,” said Deepu Talla, vice president of robotics and edge computing at NVIDIA.

Leading robotics companies Yaskawa, Solomon, PickNik Robotics, READY Robotics, Franka Robotics, and Universal Robots, a Teradyne company, are partnering with NVIDIA to bring Isaac Manipulator to their customers.

“By bringing NVIDIA AI tools and capabilities to Yaskawa’s automation solutions, we’re pushing the boundaries of where robots can be deployed across industries,” said Masahiro Ogawa, president of Yaskawa. “This will significantly influence various industries.”

NVIDIA is introducing foundation models to augment existing robot manipulation systems. These will help developers build robots that can sense, adapt and be reprogrammed for varied environments and applications in smart manufacturing, handling pick-and-place tasks, machine tending and assembly, with the following (a sketch of how these pieces might compose appears after the list):

  • FoundationPose is a pioneering foundation model for 6D pose estimation and tracking of previously unseen objects.
  • cuMotion taps into the parallel processing of NVIDIA GPUs for solving robot motion planning problems at industrial scale by running many trajectory optimizations at the same time to provide the best solution.
  • FoundationGrasp is a transformer-based model that can make dense grasp predictions for unknown 3D objects.
  • SyntheticaDETR is an object detection model for indoor environments that allows faster detection, rendering and training with new objects.
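
The wrapper functions below are hypothetical, not actual Isaac Manipulator APIs (which weren’t public at the time); the sketch only illustrates how these four components could compose into a loop for picking up a previously unseen object.

```python
# Hypothetical composition sketch. None of these helper names are real
# Isaac Manipulator APIs; they are undefined placeholders that only
# illustrate the intended data flow between the four components.
def pick_unseen_object(rgb_image, depth_image, robot):
    detections = synthetica_detr_detect(rgb_image)        # object detection
    pose = foundation_pose_estimate(                      # 6D pose of the
        rgb_image, depth_image, detections[0]             # first detection
    )
    grasps = foundation_grasp_predict(depth_image, pose)  # dense grasp candidates
    # cuMotion-style planning: run many trajectory optimizations in
    # parallel on the GPU and keep the best collision-free result.
    plan = plan_best_trajectory(robot.state, grasps, parallel_seeds=64)
    robot.execute(plan)
```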

Introducing Isaac Perceptor for Autonomous Mobile Robot Visual AI

Manufacturing and fulfillment operations are adopting autonomous mobile robots (AMRs) to improve efficiency and worker safety as well as to reduce error rates and costs.

Isaac Perceptor provides multi-camera, 360-degree vision capabilities, offering early industry partners such as ArcBest, BYD and KION Group advanced visual AI for their AMR installations that assist in material handling operations.

The NVIDIA Nova Orin DevKit — created in collaboration with Segway Robotics and Leopard Imaging — allows companies to quickly develop, evaluate and deploy Isaac Perceptor.

“ArcBest is collaborating with NVIDIA to bring leading-edge machine vision technology into the logistics space,” said Michael Newcity, chief innovation officer of ArcBest and president of ArcBest Technologies. “Using the Isaac Perceptor platform in our Vaux Smart Autonomy AMR forklifts and reach trucks enables better perception, semantic-aware navigation and 3D mapping for obstacle detection in material handling processes across warehouses, distribution centers and manufacturing facilities.”

Project GR00T for Humanoid Robotics Development Takes a Bow

Demonstrated at GTC, GR00T-powered humanoid robots can take multimodal instructions — text, video and demonstrations — along with their previous interactions as input to produce the desired actions. GR00T was shown on four humanoid robots from different companies, including Agility Robotics, Apptronik, Fourier Intelligence and Unitree Robotics.

Humanoid robots are complex systems that require heterogeneous computing to meet the needs of high-frequency, low-level controls, sensor fusion and perception, task planning, and human-robot interaction. NVIDIA unveiled a new Jetson Thor-based computer for humanoid robots, built on the NVIDIA Thor SoC.

Jetson Thor includes a next-generation GPU based on the NVIDIA Blackwell architecture with a transformer engine delivering 800 teraflops of 8-bit floating-point AI performance to run multimodal generative AI models like GR00T. With an integrated functional safety processor, a high-performance CPU cluster and 100GB of Ethernet bandwidth, it significantly simplifies design and integration efforts.

Project GR00T uses Isaac tools that are available to robotics developers for building and testing foundation models. These include Isaac Lab, a new lightweight simulation app built in Isaac Sim to train this humanoid robot model at scale, and OSMO, a cloud workflow orchestration platform for managing the training and simulation workloads.

Accelerating Robot Learning With Isaac Lab

Robots learning advanced skills, whether walking or grasping, typically rely on deep reinforcement learning, training repeatedly in a simulated environment to acquire those skills. That training becomes truly valuable when the learned model transfers to a real robot deployment, as has been demonstrated with Project GR00T.

As the successor to Isaac Gym, Isaac Lab benefits from NVIDIA Omniverse technologies for physics-informed, photorealistic, perception-based reinforcement learning tasks. Isaac Lab is an open-source, performance-optimized application for robot learning built on the Isaac Sim platform. It incorporates reinforcement learning APIs and a developer-friendly tasking framework.

Enabling Cloud-Native Robotics Workflow Scheduling With NVIDIA OSMO

NVIDIA OSMO scales workloads across distributed environments. For robotics workloads with complex multi-stage and multi-container workflows, the platform provides a location-agnostic deployment option and dataset management and traceability features for deployed models.

“Boston Dynamics employs a range of machine learning, reinforcement learning and AI technologies to power our robots,” said Pat Marion, machine learning and perception lead at Boston Dynamics. “To effectively manage the large training workloads, we’re using NVIDIA OSMO, an infrastructure solution that lets our machine learning engineers streamline their workflows and dedicate their expertise to tackling the hard robotics problems.”

OSMO supports GR00T, for example, by concurrently running models on NVIDIA DGX for training and NVIDIA OVX servers for live reinforcement learning in simulation. This workload involves generating and training models iteratively in a loop. OSMO’s ability to manage and schedule workloads across distributed environments allows for the seamless coordination of DGX and OVX systems, enabling efficient and iterative model development. Once the model is ready for testing and validation, OSMO can uniquely orchestrate software-in-the-loop workflows on OVX (x86-64) as well as hardware-in-the-loop workflows with NVIDIA Jetson (aarch64) compute resources.

Supporting the ROS Ecosystem of Developers

NVIDIA joined the Open Source Robotics Alliance (OSRA) as a founding member and platinum sponsor. OSRA is a new initiative by the Open Source Robotics Foundation to foster collaboration, innovation and technical guidance in the robotics community by supporting several open-source robotics projects, including the Robot Operating System (ROS).

“The increasing capability of autonomous robots is driving a rise in demand for more powerful but still energy-efficient onboard computing,” said Vanessa Yamzon Orsi, CEO of Open Robotics. “The ROS community is experiencing this demand firsthand, and our users are increasingly taking advantage of advanced accelerated computing hardware from industry leaders such as NVIDIA.”

NVIDIA Isaac Perceptor with the Nova Orin evaluation kit, Isaac Manipulator, Isaac Lab and OSMO will be made available to customers and partners in the second quarter of this year. Learn more about Project GR00T.

NVIDIA Omniverse Expands Worlds Using Apple Vision Pro

NVIDIA is bringing OpenUSD-based Omniverse enterprise digital twins to the Apple Vision Pro.

Announced today at NVIDIA GTC, a new software framework built on Omniverse Cloud APIs, or application programming interfaces, lets developers easily send their Universal Scene Description (OpenUSD) industrial scenes from their content creation applications to the NVIDIA Graphics Delivery Network (GDN), a global network of graphics-ready data centers that can stream advanced 3D experiences to Apple Vision Pro.

In a demo unveiled at the global AI conference, NVIDIA presented an interactive, physically accurate digital twin of a car streamed in full fidelity to Apple Vision Pro’s high-resolution displays.

The demo featured a designer wearing the Vision Pro and using a car configurator application developed by CGI studio Katana on the Omniverse platform. The designer toggled through paint and trim options and even entered the vehicle, leveraging the power of spatial computing by blending 3D photorealistic environments with the physical world.

Bringing the Power of RTX Enterprise Cloud Rendering to Spatial Computing

Spatial computing has emerged as a powerful technology for delivering immersive experiences and seamless interactions between people, products, processes and physical spaces. Industrial enterprise use cases require incredibly high-resolution displays and powerful sensors operating at high frame rates to make manufacturing experiences true to reality.

This new Omniverse-based workflow combines Apple Vision Pro’s groundbreaking high-resolution displays with NVIDIA’s powerful RTX cloud rendering to deliver spatial computing experiences with just the device and an internet connection.

This cloud-based approach allows real-time, physically based renderings to be streamed seamlessly to Apple Vision Pro, delivering high-fidelity visuals without compromising the details of massive, engineering-fidelity datasets.

“The breakthrough ultra-high-resolution displays of Apple Vision Pro, combined with photorealistic rendering of OpenUSD content streamed from NVIDIA accelerated computing, unlocks an incredible opportunity for the advancement of immersive experiences,” said Mike Rockwell, vice president of the Vision Products Group at Apple. “Spatial computing will redefine how designers and developers build captivating digital content, driving a new era of creativity and engagement.”

“Apple Vision Pro is the first untethered device which allows for enterprise customers to realize their work without compromise,” said Rev Lebaredian, vice president of simulation at NVIDIA. “We look forward to our customers having access to these amazing tools.”

The workflow also introduces hybrid rendering, a groundbreaking technique that combines local and remote rendering on the device. Users can render fully interactive experiences in a single application from Apple’s native SwiftUI and RealityKit with the Omniverse RTX Renderer streaming from GDN.

NVIDIA GDN, available in over 130 countries, taps NVIDIA’s global cloud-to-edge streaming infrastructure to deliver smooth, high-fidelity, interactive experiences. By moving heavy compute tasks to GDN, users can tackle the most demanding rendering use cases, no matter the size or complexity of the dataset.

Enhancing Spatial Computing Workloads Across Use Cases

The Omniverse-based workflow showed potential for a wide range of use cases. For example, designers could use the technology to see their 3D data in full fidelity, with no loss in quality or model decimation. This means designers can interact with trustworthy simulations that look and behave like the real physical product. This also opens new channels and opportunities for e-commerce experiences.

In industrial settings, factory planners can view and interact with their full engineering factory datasets, letting them optimize their workflows and identify potential bottlenecks.

For developers and independent software vendors, NVIDIA is building the capabilities that would allow them to use the native tools on Apple Vision Pro to seamlessly interact with existing data in their applications.

Learn more about NVIDIA Omniverse and GDN.

NVIDIA and Siemens Bring Immersive Visualization and Generative AI to Industrial Design and Manufacturing

Generative AI and digital twins are changing the way companies in multiple industries design, manufacture and operate their products.

Siemens, a leading technology company for automation, digitalization and sustainability, announced today at NVIDIA GTC that it is expanding its partnership with NVIDIA by adopting new NVIDIA Omniverse Cloud APIs, or application programming interfaces, with its Siemens Xcelerator platform applications, starting with Teamcenter X. Teamcenter X is Siemens’ industry-leading cloud-based product lifecycle management (PLM) software.

NVIDIA Omniverse is a platform of APIs and services based on Universal Scene Description (OpenUSD) that enables developers to build generative AI-powered tools, applications and services for industrial digital twins and automation.

Enterprises of all sizes depend on Teamcenter software, part of the Siemens Xcelerator platform, to develop and deliver products at scale. By connecting NVIDIA Omniverse with Teamcenter X, Siemens will be able to provide engineering teams with the ability to make their physics-based digital twins more immersive and photorealistic, helping eliminate workflow waste and reduce errors.

Through the use of Omniverse APIs, workflows such as applying materials and lighting environments, and adding other supporting scenery assets to physically based renderings, will be dramatically accelerated using generative AI.

AI integrations will also allow engineering data to be contextualized as it would appear in the real world, allowing other stakeholders — from sales and marketing teams to decision-makers and customers — to benefit from deeper insight and understanding of real-world product appearance.

Unifying and Visualizing Complex Industrial Datasets

Traditionally, companies have relied heavily on physical prototypes and costly modifications to complete large-scale industrial projects and build complex, connected products. That approach is expensive and error-prone, limits innovation and slows time to market.

By connecting Omniverse Cloud APIs to the Xcelerator platform, Siemens will enable its customers to enhance their digital twins with physically based rendering, helping supercharge industrial-scale design and manufacturing projects. With the ability to connect generative AI APIs or agents, users can effortlessly generate 3D objects or high-dynamic range image backgrounds to view their assets in context.

This means that companies like HD Hyundai, a leader in sustainable ship manufacturing, can unify and visualize complex engineering projects directly within Teamcenter X. At NVIDIA GTC, Siemens and NVIDIA demonstrated how HD Hyundai could use the software to visualize digital twins of liquefied natural gas carriers, which can comprise over 7 million discrete parts, helping validate their product before moving to production.

Interoperable, photoreal and physics-based digital twins like these accelerate engineering collaboration and allow customers to minimize workflow waste, save time and costs, and reduce risk of manufacturing defects.

Combining Digital and Physical Worlds With Omniverse APIs

Omniverse Cloud APIs enable data interoperability and physically based rendering for industrial-scale design and manufacturing projects in Teamcenter X. This starts with a real-time, embedded, photoreal viewport powered by the USD Render and USD Write APIs, which engineers can use to interactively navigate, edit and iterate on a shared model of their live data.

The USD Query API lets Teamcenter X users navigate and interact with physically accurate scenes, while the USD Notify API automatically provides real-time design and scene updates. To facilitate cloud-based collaboration and data exchange, Teamcenter X will leverage the Omniverse Channel API to establish a secure connection between multiple users across devices.
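
The Cloud API bindings themselves aren’t shown in the announcement, but the operations map onto familiar OpenUSD concepts. As a rough stand-in, this sketch uses the open-source pxr Python bindings to show a query-then-write edit on a shared stage; the file name and the edit are illustrative.

```python
# Sketch of USD query/write-style operations using the open-source
# OpenUSD (pxr) bindings; "factory_line.usd" is a hypothetical stage
# of the kind a Teamcenter X session might share.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("factory_line.usd")

# Query: traverse the scene graph and collect every mesh prim.
meshes = [prim for prim in stage.Traverse() if prim.IsA(UsdGeom.Mesh)]
print(f"{len(meshes)} meshes in scene")

# Write: move the first mesh -- the kind of edit a USD Write API would
# apply and a USD Notify-style channel would broadcast to other viewers.
if meshes:
    UsdGeom.Xformable(meshes[0]).AddTranslateOp().Set((0.0, 0.0, 10.0))
    stage.GetRootLayer().Save()
```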

In the future, Siemens plans to bring NVIDIA accelerated computing, generative AI and Omniverse to more of its Siemens Xcelerator portfolio.

Learn more about NVIDIA Omniverse, Siemens Xcelerator and the partnership.

Get started with NVIDIA Omniverse, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and Twitter. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

NVIDIA Supercharges Autonomous System Development with Omniverse Cloud APIs

While simulation is critical for training, testing and deploying autonomy, achieving real-world fidelity is incredibly challenging.

It requires accurate modeling of the physics and behavior of an autonomous system’s sensors and surroundings.

Designed to address this challenge by delivering large-scale, high-fidelity sensor simulation, Omniverse Cloud APIs, announced today at NVIDIA GTC, are poised to accelerate the path to autonomy. They bring together a rich ecosystem of simulation tools, applications and sensors.

The application programming interfaces address the critical need for high-fidelity sensor simulations to safely explore the myriad real-world scenarios autonomous systems will encounter.

In addition, the Omniverse Cloud platform offers application developers access to a range of powerful Universal Scene Description (OpenUSD), RTX and generative AI-enabled service-level cloud APIs to bring interoperability and physically based rendering to next-generation tools.

Simulation Key to Unlocking New Levels of Safety

As demand increases for robots, autonomous vehicles (AVs) and other AI systems, developers are seeking to accelerate their workflows. Sensor data powers these systems’ perception capabilities, enabling them to comprehend their environment and make informed decisions in real time.

Traditionally, developers have used real-world data for training, testing and validation.

However, these methods are limited in covering rare scenarios or data that can’t be captured in the real world. Sensor simulation provides a seamless way to effectively test countless “what if” scenarios and diverse environmental conditions.

With Omniverse Cloud APIs, developers can enhance the workflows they’re already using with high-fidelity sensor simulation to tackle the challenge of developing full-stack autonomy.

This not only streamlines the development process but also lowers the barriers to entry for companies of virtually all sizes developing autonomous machines.

The Ecosystem Advantage

By bringing together an expansive ecosystem of simulators, verification and validation (V&V) tools, content and sensor developers, the Omniverse Cloud APIs enable a universal environment for AI system development.

Adoption by developers and software vendors such as CARLA, MathWorks, MITRE, Foretellix and Voxel51 underscores the broad appeal of these APIs for autonomous vehicle development.

CARLA is an open-source AV simulator used by more than 100,000 developers. With Omniverse Cloud APIs, CARLA users can enhance their existing workflows with high-fidelity sensor simulation.

Similarly, MITRE, a nonprofit that operates federally funded R&D centers and is dedicated to improving safety in technology, is building a Digital Proving Ground (DPG) for the AV industry to validate self-driving solutions. The DPG will use the Omniverse APIs to enable core sensor simulation capabilities for its developers.

MathWorks and Foretellix provide critical simulation tools for authoring, executing, monitoring and debugging test scenarios. As the GTC demo showed, combining such simulation and test automation tools with the APIs forms a powerful test environment for AV development. On the show floor, Foretellix is showing an in-depth look at this solution at Booth 630.

And, by integrating the APIs with Voxel51’s FiftyOne platform, developers can easily visualize and organize ground-truth data generated in simulation for streamlined training and testing.

Leading industrial-sensor solution provider SICK AG is working on integrating these APIs in its sensor development process to reduce the number of physical prototypes, iterate quickly on design modifications and validate the eventual performance. These validated sensor models can eventually be used by autonomous systems developers in their applications.

Developers will also have access to sensor models from a variety of manufacturers: lidar makers Hesai, Innoviz Technologies, Luminar, MicroVision, Robosense and Seyond; visual sensor suppliers OMNIVISION, onsemi and Sony Semiconductor Solutions; and radar providers Continental, FORVIA HELLA and Arbe.

Additionally, AI/ML developers can call on these APIs to generate large and diverse sets of synthetic data — critical input for training and validating perception models that power these autonomous systems.
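
One established way to produce that synthetic data is domain randomization in simulation. The sketch below follows the pattern of NVIDIA’s Omniverse Replicator tutorials, with toy scene contents and output settings standing in for a real sensor-simulation setup.

```python
# Condensed Omniverse Replicator-style sketch (runs inside Omniverse):
# randomize object poses each frame and write annotated frames to disk.
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0, 0, 1000))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Stand-in assets carrying semantic labels for ground-truth annotation.
    cone = rep.create.cone(semantics=[("class", "cone")], position=(0, -100, 100))
    sphere = rep.create.sphere(semantics=[("class", "sphere")], position=(0, 100, 100))

    # Each frame, scatter the objects to new random poses.
    with rep.trigger.on_frame(num_frames=100):
        with rep.create.group([cone, sphere]):
            rep.modify.pose(
                position=rep.distribution.uniform((-200, -200, 0), (200, 200, 200)),
                scale=rep.distribution.uniform(0.5, 2.0),
            )

    # Write RGB images plus 2D bounding-box labels for training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_synthetic_out", rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])
```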

Empowering Developers and Accelerating Innovation

By reducing the traditional barriers to high-fidelity sensor simulation, NVIDIA Omniverse Cloud APIs empower developers to address complex AI problems without significant infrastructure overhauls.

This democratization of access to advanced simulation tools promises to accelerate innovation, allowing developers to quickly adapt to and integrate the latest technological advancements into their testing and development processes.

Apply here for early access to Omniverse Cloud APIs.

Get started with NVIDIA Omniverse, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and Twitter. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

Staying in Sync: NVIDIA Combines Digital Twins With Real-Time AI for Industrial Automation

Real-time AI is helping with the heavy lifting in manufacturing, factory logistics and robotics.

In such industries — often involving bulky products, expensive equipment, cobot environments and logistically complex facilities — a simulation-first approach is ushering in the next phase of automation.

NVIDIA founder and CEO Jensen Huang today demonstrated in his GTC keynote how developers can use digital twins to develop, test and refine their large-scale, real-time AIs entirely in simulation before rolling them out in industrial infrastructure, saving significant time and cost.

NVIDIA Omniverse, Metropolis, Isaac and cuOpt interact in AI gyms where developers can train AI agents to help robots and humans navigate unpredictable or complex events.

In the demo, a digital twin of a 100,000-square-foot warehouse — built using the NVIDIA Omniverse platform for developing and connecting OpenUSD applications — operates as a simulation environment for dozens of digital workers and multiple autonomous mobile robots (AMRs), vision AI agents and sensors.

Each AMR, running the NVIDIA Isaac Perceptor multi-sensor stack, processes visual information from six sensors, all simulated in the digital twin.

At the same time, the NVIDIA Metropolis platform for vision AI creates a single centralized map of worker activity across the entire warehouse, fusing together data from 100 simulated ceiling-mounted camera streams with multi-camera tracking. This centralized occupancy map helps inform optimal AMR routes calculated by the NVIDIA cuOpt engine for solving complex routing problems.

cuOpt, a record-breaking optimization AI microservice, solves complex routing problems with multiple constraints using GPU-accelerated evolutionary algorithms.
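
For a feel of the programming model, here is a toy sketch in the style of the cuOpt routing examples: a travel-cost matrix goes in, an optimized route comes out. The matrix values and fleet size are illustrative.

```python
# Toy cuOpt routing sketch in the style of the cuOpt Python examples:
# one vehicle, four locations, pairwise travel costs in a matrix.
import cudf
from cuopt import routing

cost_matrix = cudf.DataFrame([
    [0.0, 5.0, 4.0, 3.0],
    [5.0, 0.0, 6.0, 4.0],
    [4.0, 6.0, 0.0, 5.0],
    [3.0, 4.0, 5.0, 0.0],
])

data_model = routing.DataModel(4, 1)  # 4 locations, 1 vehicle
data_model.add_cost_matrix(cost_matrix)

settings = routing.SolverSettings()
settings.set_time_limit(2)  # seconds of solver search

solution = routing.Solve(data_model, settings)
print(solution.get_route())  # ordered stops as a cudf DataFrame
```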

All of this happens in real time, while Isaac Mission Control coordinates the entire fleet using map data and route graphs from cuOpt to send and execute AMR commands.

An AI Gym for Industrial Digitalization

AI agents can assist in large-scale industrial environments by, for example, managing fleets of robots in a factory or identifying streamlined configurations for human-robot collaboration in supply chain distribution centers. To build these complex agents, developers need digital twins that function as AI gyms — physically accurate environments for AI evaluation, simulation and training.

Such software-in-the-loop AI testing enables AI agents and AMRs to adapt to real-world unpredictability.

In the demo, an incident occurs along an AMR’s planned route, blocking the path and preventing it from picking up a pallet. NVIDIA Metropolis updates an occupancy grid, mapping all humans, robots and objects in a single view. cuOpt then plans an optimal route, and the AMR responds accordingly to minimize downtime.

With Metropolis vision foundation models powering the NVIDIA Visual Insight Agent (VIA) framework, AI agents can be built to help operations teams answer questions like, “What situation occurred in aisle three of the factory?” And the generative AI-powered agent offers immediate insights such as, “Boxes fell from the shelves at 3:30 p.m., blocking the aisle.”

Developers can use the VIA framework to build AI agents capable of processing large amounts of live or archived videos and images with vision-language models — whether deployed at the edge or in the cloud. This new generation of visual AI agents will help nearly every industry summarize, search and extract actionable insights from video using natural language.

All of these AI functions can be enhanced through continuous, simulation-based training and are deployed as modular NVIDIA NIM inference microservices.

Learn more about the latest advancements in generative AI and industrial digitalization at NVIDIA GTC, a global AI conference running through Thursday, March 21, at the San Jose Convention Center and online.

At Your Microservice: NVIDIA Smooths Businesses’ Journey to Generative AI

NVIDIA’s AI platform is available to any forward-thinking business — and it’s easier to use than ever.

Launched today, NVIDIA AI Enterprise 5.0 includes NVIDIA microservices, downloadable software containers for deploying generative AI applications and accelerated computing. It’s available from leading cloud service providers, system builders and software vendors — and it’s in use at customers such as Uber.

“Our adoption of NVIDIA AI Enterprise inference software is important for meeting the high performance our users expect,” said Albert Greenberg, vice president of platform engineering at Uber. “Uber prides itself on being at the forefront of adopting and using the latest, most advanced AI innovations to deliver a customer service platform that sets the industry standard for effectiveness and excellence.”

Microservices Speed App Development

Developers are turning to microservices as an efficient way to build modern enterprise applications at a global scale. Working from a browser, they use cloud APIs, or application programming interfaces, to compose apps that can run on systems and serve users worldwide.

NVIDIA AI Enterprise 5.0 now includes a wide range of microservices: NVIDIA NIM for deploying AI models in production, and the NVIDIA CUDA-X collection of microservices, which includes NVIDIA cuOpt.

NIM microservices optimize inference for dozens of popular AI models from NVIDIA and its partner ecosystem.

Powered by NVIDIA inference software — including Triton Inference Server, TensorRT, and TensorRT-LLM — NIM slashes deployment times from weeks to minutes. It provides security and manageability based on industry standards as well as compatibility with enterprise-grade management tools.
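
NIM microservices expose industry-standard APIs. Assuming a deployed NIM serving an OpenAI-compatible chat endpoint, calling it can look like the sketch below, where the base URL and model identifier are placeholders for an actual deployment.

```python
# Sketch: call a locally deployed NIM through its OpenAI-compatible API.
# The base URL and model name are placeholders, not a specific deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="meta/llama2-70b",  # hypothetical NIM model identifier
    messages=[{"role": "user", "content": "Summarize our deployment steps."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```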

NVIDIA cuOpt is a GPU-accelerated AI microservice that’s set world records for route optimization and can empower dynamic decision-making that reduces cost, time and carbon footprint. It’s one of the CUDA-X microservices that help industries put AI into production.

More capabilities are in the works. For example, NVIDIA RAG LLM operator — now in early access and described in more detail here — will move co-pilots and other generative AI applications that use retrieval-augmented generation from pilot to production without rewriting any code.

NVIDIA microservices are being adopted by leading application and cybersecurity platform providers including CrowdStrike, SAP and ServiceNow.

More Tools and Features

Three other updates in version 5.0 are worth noting.

The platform now packs NVIDIA AI Workbench, a developer toolkit for quickly downloading, customizing, and running generative AI projects. The software is now generally available and supported with an NVIDIA AI Enterprise license.

Version 5.0 also now supports Red Hat OpenStack Platform, the environment most Fortune 500 companies use for creating private and public cloud services. Maintained by Red Hat, it provides developers a familiar option for building virtual computing environments. IBM Consulting will help customers deploy these new capabilities.

In addition, version 5.0 expands support to cover a wide range of the latest NVIDIA GPUs, networking hardware and virtualization software.

Available to Run Anywhere

The enhanced NVIDIA AI platform is easier to access than ever.

NIM and CUDA-X microservices and all the 5.0 features will be available soon on the AWS, Google Cloud, Microsoft Azure and Oracle Cloud marketplaces.

For those who prefer to run code in their own data centers, VMware Private AI Foundation with NVIDIA will support the software, so it can be deployed in the virtualized data centers of Broadcom’s customers.

Companies have the option of running NVIDIA AI Enterprise on Red Hat OpenShift, allowing them to deploy on bare-metal or virtualized environments. It’s also supported on Canonical’s Charmed Kubernetes as well as Ubuntu.

In addition, the AI platform will be part of the software available on HPE ProLiant servers from Hewlett Packard Enterprise (HPE). HPE’s enterprise computing solution for generative AI handles inference and model fine-tuning using NVIDIA AI Enterprise.

In addition, Anyscale, Dataiku and DataRobot — three leading providers of the software for managing machine learning operations — will support NIM on their platforms. They join an NVIDIA ecosystem of hundreds of MLOps partners, including Microsoft Azure Machine Learning, Dataloop AI, Domino Data Lab and Weights & Biases.

However they access it, NVIDIA AI Enterprise 5.0 users can benefit from software that’s secure, production-ready and optimized for performance. It can be flexibly deployed for applications in the data center, the cloud, on workstations or at the network’s edge.

NVIDIA AI Enterprise is available through leading system providers, including Cisco, Dell Technologies, HP, HPE, Lenovo and Supermicro.

Hear Success Stories at GTC

Users will share their experiences with the software at NVIDIA GTC, a global AI conference, running March 18-21 at the San Jose Convention Center.

For example, ServiceNow chief digital information officer Chris Bedi will speak on a panel about harnessing generative AI’s potential. In a separate talk, ServiceNow vice president of AI products Jeremy Barnes will share how the company uses NVIDIA AI Enterprise to achieve maximum developer productivity.

Executives from BlackRock, Medtronic, SAP and Uber will discuss their work in finance, healthcare, enterprise software and business operations using the NVIDIA AI platform.

In addition, executives from ControlExpert, a Germany-based global application provider for car insurance companies, will share how they developed an AI-powered claims management solution using NVIDIA AI Enterprise software.

They’re among a growing set of companies that benefit from NVIDIA’s work evaluating hundreds of internal and external generative AI projects — all integrated into a single package that’s been tested for stability and security.

And get the full picture from NVIDIA CEO and founder Jensen Huang in his GTC keynote.

See notice regarding software product information. 

Reach for the Stars: Eight Out-of-This-World Games Join the Cloud

The stars align this GFN Thursday as more top titles from Ubisoft and Square Enix join the cloud.

Star Wars Outlaws will be coming to the GeForce NOW library at launch later this year, while STAR OCEAN THE SECOND STORY R and PARANORMASIGHT: The Seven Mysteries of Honjo are part of eight new titles joining this week.

Additionally, four other games are getting NVIDIA RTX enhancements, all arriving at next week’s Game Developers Conference.

NARAKA: BLADEPOINT and Portal with RTX are adding full ray tracing and NVIDIA DLSS 3.5 Ray Reconstruction capabilities. This month’s Diablo IV update will add ray tracing. And Sengoku Dynasty — available to stream today — was recently updated with DLSS 3 Frame Generation.

Coming Soon

Star Wars Outlaws coming to GeForce NOW
A galaxy far, far away is coming to the cloud.

GeForce NOW members will be able to stream Star Wars Outlaws, the first open-world Star Wars game from Ubisoft, when it comes to the cloud at launch later this year.

Set between the events of The Empire Strikes Back and Return of the Jedi, the game lets players explore distinct planets across the galaxy, both iconic and new. Risk it all as Kay Vess, a scoundrel seeking freedom and a fresh start. Members will fight, steal and outwit their way through the galaxy’s crime syndicates to become the galaxy’s most wanted.

The game will launch with DLSS 3 and ray-traced effects, including NVIDIA RTX Direct Illumination (RTXDI) and ray-traced global illumination, taking visuals to the next level. RTX ON will be available to Ultimate and Priority members, as well as Day Pass users. And both Ultimate members and Day Pass users get the added benefit of NVIDIA DLSS 3 and NVIDIA Reflex for a streaming experience nearly indistinguishable from playing locally.

Adventure Awaits

Star Ocean on GeForce NOW
Play two of Square Enix’s latest games, thanks to the cloud.

With GeForce NOW, there’s always something new to play. This week, Japan-based publisher Square Enix brings two of its latest role-playing adventures to the cloud.

Witness an awakened destiny in STAR OCEAN THE SECOND STORY R, the highly acclaimed remake of the STAR OCEAN series’ second installment. Brought to life with a unique 2.5D aesthetic, which fuses 2D pixel characters and 3D environments, the remake includes all the iconic aspects of the original release while adding fresh elements. Experience new battle mechanics, full Japanese and English voice-overs, original and rearranged music, fast-travel and more. Discover the modernized, classic Japanese role-playing game perfect for newcomers and long-time fans alike.

Members can also try STAR OCEAN THE SECOND STORY R – DEMO this week before purchasing the full game.

Plus, solve a century-old mystery in PARANORMASIGHT: The Seven Mysteries of Honjo, a horror-adventure visual novel built around a Japanese tale in which a mysterious “Rite of Resurrection” leads to conflict between those who have the power to curse others. Players conduct investigations throughout immersive, ambient, 360-degree environments to unravel the mysteries of Honjo, including by conversing with many interesting — and suspicious — characters.

Ultimate members can stream these games at up to 4K resolution for amazing visual quality across nearly any device and access NVIDIA GeForce RTX 4080 servers for extended session lengths. Upgrade today.

Shine Bright Like a New Game

Balatro on GeForce NOW
Play crazy poker hands, discover game-changing jokers and trigger outrageous combos in Balatro, streaming this week.

Members can look for the following new games this week:

  • Hellbreach: Vegas (New release on Steam, March 11)
  • Deus Ex: Mankind Divided (New release on Epic Games Store, Free March 14)
  • Outcast – A New Beginning (New release on Steam, March 15)
  • Balatro (Steam)
  • PARANORMASIGHT: The Seven Mysteries of Honjo (Steam)
  • Space Engineers (Xbox, available on PC Game Pass)
  • STAR OCEAN THE SECOND STORY R (Steam)
  • STAR OCEAN THE SECOND STORY R – DEMO (Steam)
  • Warhammer 40,000: Boltgun (Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

NVIDIA GTC 2024: A Glimpse Into the Future of AI With Jensen Huang

NVIDIA’s GTC 2024 AI conference will set the stage for another leap forward in AI.

At the heart of this highly anticipated event: the opening keynote by Jensen Huang, NVIDIA’s visionary founder and CEO, who speaks on Monday, March 18, at 1 p.m. Pacific, at the SAP Center in San Jose, Calif.

Planning Your GTC Experience

There are two ways to watch.

Register to attend GTC in person to secure a spot for an immersive experience at the SAP Center. The center is a short walk from the San Jose Convention Center, where the rest of the conference takes place. Doors open at 11 a.m., and badge pickup starts at 10:30 a.m.

The keynote will also be livestreamed at www.nvidia.com/gtc/keynote/.

Whether attending in person or virtually, commit to joining us all week. GTC is more than just a conference. It’s a gateway to the next wave of AI innovations.

  • Transforming AI: Hear more from Huang as he discusses the origins and impact of the transformer neural network architecture with its creators and industry pioneers. He’ll host a panel with all eight authors of the legendary 2017 paper that introduced the concept of transformers: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. Wed., March 20, 11-11:50 a.m. Pacific.
  • Join Visionaries Transforming Our World: Hear from leaders such as xAI cofounder Igor Babuschkin; Microsoft Vice President of GenAI Sébastien Bubeck; Stanford University’s Fei-Fei Li; Meta Vice President of AI Research Joelle Pineau; OpenAI Chief Operating Officer Brad Lightcap; Adept AI founder and CEO David Luan; Waabi founder and CEO Raquel Urtasun; Mistral CEO Arthur Mensch; and many others at the forefront of AI across various industries.
  • Be Part of What Comes Next: Engage from March 17-21 in workshops and peer networking and connect with the experts. This year’s session catalog is packed with topics covering everything from robotics to generative AI, showcasing real-world applications and the latest in AI innovation.
  • Stay Connected: Tune in online to engage with the event and fellow attendees using #GTC24 on social media.

With visionary speakers and a comprehensive program covering the essentials of AI and computing, GTC promises to be an enlightening experience for all.

Don’t miss your chance to be at the forefront of AI’s evolution. Register now.
