From Algorithms to Atoms: NVIDIA ALCHEMI NIM Catalyzes Sustainable Materials Research for EV Batteries, Solar Panels and More

More than 96% of all manufactured goods — ranging from everyday products, like laundry detergent and food packaging, to advanced industrial components, such as semiconductors, batteries and solar panels — rely on chemicals that cannot be replaced with alternative materials.

With AI and the latest technological advancements, researchers and developers are studying ways to create novel materials that could address the world’s toughest challenges, such as energy storage and environmental remediation.

Announced today at the Supercomputing 2024 conference in Atlanta, the NVIDIA ALCHEMI NIM microservice accelerates such research by optimizing AI inference for chemical simulations that could lead to more efficient and sustainable materials to support the renewable energy transition.

It’s one of the many ways NVIDIA is supporting researchers, developers and enterprises to boost energy and resource efficiency in their workflows, including to meet requirements aligned with the global Net Zero Initiative.

NVIDIA ALCHEMI for Material and Chemical Simulations

Exploring the universe of potential materials, using the nearly infinite combinations of chemicals — each with unique characteristics — can be extremely complex and time consuming. Novel materials are typically discovered through laborious, trial-and-error synthesis and testing in a traditional lab.

Many of today’s plastics, for example, are still based on material discoveries made in the mid-1900s.

More recently, AI has emerged as a promising accelerant for chemicals and materials innovation.

With the new ALCHEMI NIM microservice, researchers can test chemical compounds and material stability in simulation, in a virtual AI lab, which reduces costs, energy consumption and time to discovery.

For example, running MACE-MP-0, a pretrained foundation model for materials chemistry, on an NVIDIA H100 Tensor Core GPU, the new NIM microservice speeds evaluations of a potential composition’s simulated long-term stability by 100x. That overall gain comes from a 25x speedup from using the NVIDIA Warp Python framework for high-performance simulation, followed by a 4x speedup from in-flight batching. All in all, evaluating 16 million structures would have taken months — with the NIM microservice, it can be done in just hours.
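As a rough, back-of-the-envelope sketch of how those gains compound, the snippet below multiplies the two reported speedups and converts an assumed, purely illustrative per-structure cost into wall-clock time:

```python
# Back-of-the-envelope sketch of how the reported speedups compound.
# The 25x (NVIDIA Warp) and 4x (in-flight batching) factors come from the
# text above; the baseline seconds-per-structure figure is an assumption
# chosen only for illustration.

WARP_SPEEDUP = 25          # from moving the simulation loop to NVIDIA Warp
BATCHING_SPEEDUP = 4       # from in-flight batching of inference requests
TOTAL_SPEEDUP = WARP_SPEEDUP * BATCHING_SPEEDUP   # 100x overall

N_STRUCTURES = 16_000_000
BASELINE_SEC_PER_STRUCTURE = 0.5   # hypothetical unaccelerated cost per structure

baseline_days = N_STRUCTURES * BASELINE_SEC_PER_STRUCTURE / 86_400
accelerated_hours = N_STRUCTURES * BASELINE_SEC_PER_STRUCTURE / TOTAL_SPEEDUP / 3_600

print(f"total speedup: {TOTAL_SPEEDUP}x")
print(f"baseline:    ~{baseline_days:.0f} days")
print(f"accelerated: ~{accelerated_hours:.0f} hours")
```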

By letting scientists examine more structures in less time, the NIM microservice can boost research on materials for use in solar panels and electric vehicle batteries, for example, to bolster the renewable energy transition.

NVIDIA also plans to release NIM microservices that can be used to simulate the manufacturability of novel materials — to determine how they might be brought from test tubes into the real world in the form of batteries, solar panels, fertilizers, pesticides and other essential products that can contribute to a healthier, greener planet.

SES AI, a leading developer of lithium-metal batteries, is using the NVIDIA ALCHEMI NIM microservice with the AIMNet2 model to accelerate the identification of electrolyte materials used for electric vehicles.

“SES AI is dedicated to advancing lithium battery technology through AI-accelerated material discovery, using our Molecular Universe Project to explore and identify promising candidates for lithium metal electrolyte discovery,” said Qichao Hu, CEO of SES AI. “Using the ALCHEMI NIM microservice with AIMNet2 could drastically improve our ability to map molecular properties, reducing time and costs significantly and accelerating innovation.”

SES AI recently mapped 100,000 molecules in half a day, with the potential to achieve this in under an hour using ALCHEMI. This signals how the microservice is poised to have a transformative impact on material screening efficiency.

Looking ahead, SES AI aims to map the properties of up to 10 billion molecules within the next couple of years, pushing the boundaries of AI-driven, high-throughput discovery.

The new microservice will soon be available for researchers to test for free through the NVIDIA NGC catalog — sign up to be notified of ALCHEMI’s launch. It will also be downloadable from build.nvidia.com, and the production-grade NIM microservice will be offered through the NVIDIA AI Enterprise software platform.

Learn more about the NVIDIA ALCHEMI NIM microservice, and hear the latest on how AI and supercomputing are supercharging researchers and developers’ workflows by joining NVIDIA at SC24, running through Friday, Nov. 22.

See notice regarding software product information.

Open for Development: NVIDIA Works With Cloud-Native Community to Advance AI and ML

Cloud-native technologies have become crucial for developers to create and implement scalable applications in dynamic cloud environments.

This week at KubeCon + CloudNativeCon North America 2024, one of the most-attended conferences focused on open-source technologies, Chris Lamb, vice president of computing software platforms at NVIDIA, delivered a keynote outlining the benefits of open source for developers and enterprises alike — and NVIDIA offered nearly 20 interactive sessions with engineers and experts.

The Cloud Native Computing Foundation (CNCF), part of the Linux Foundation and host of KubeCon, is at the forefront of championing a robust ecosystem to foster collaboration among industry leaders, developers and end users.

As a member of CNCF since 2018, NVIDIA is working across the developer community to contribute to and sustain cloud-native open-source projects. Our open-source software and more than 750 NVIDIA-led open-source projects help democratize access to tools that accelerate AI development and innovation.

Empowering Cloud-Native Ecosystems

NVIDIA has benefited from the many open-source projects under CNCF and has made contributions to dozens of them over the past decade. These contributions help developers as they build applications and microservice architectures for managing AI and machine learning workloads.

Kubernetes, the cornerstone of cloud-native computing, is undergoing a transformation to meet the challenges of AI and machine learning workloads. As organizations increasingly adopt large language models and other AI technologies, robust infrastructure becomes paramount.

NVIDIA has been working closely with the Kubernetes community to address these challenges. This includes:

  • Work on dynamic resource allocation (DRA) that allows for more flexible and nuanced resource management. This is crucial for AI workloads, which often require specialized hardware. NVIDIA engineers played a key role in designing and implementing this feature.
  • Leading efforts in KubeVirt, an open-source project extending Kubernetes to manage virtual machines alongside containers. This provides a unified, cloud-native approach to managing hybrid infrastructure.
  • Development of NVIDIA GPU Operator, which automates the lifecycle management of NVIDIA GPUs in Kubernetes clusters. This software simplifies the deployment and configuration of GPU drivers, runtime and monitoring tools, allowing organizations to focus on building AI applications rather than managing infrastructure. (A sketch of how workloads then request GPUs follows this list.)
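To illustrate what that automation enables in practice, here is a minimal, hedged sketch: once the GPU Operator has installed the drivers and device plugin, a workload requests a GPU through the standard Kubernetes resource API. The container image and object names below are placeholders, not a prescribed configuration.

```python
# Minimal sketch: with the GPU Operator managing drivers and the device
# plugin, workloads request GPUs via the "nvidia.com/gpu" extended resource.
# Assumes a reachable cluster and kubeconfig; image and names are examples.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04",  # example image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # scheduled onto a GPU node
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```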

The company’s open-source efforts extend beyond Kubernetes to other CNCF projects:

  • NVIDIA is a key contributor to Kubeflow, a comprehensive toolkit that makes it easier for data scientists and engineers to build and manage ML systems on Kubernetes. Kubeflow reduces the complexity of infrastructure management and allows users to focus on developing and improving ML models.
  • NVIDIA has contributed to the development of the Cluster Network Addons Operator (CNAO), which manages the lifecycle of host networks in Kubernetes clusters.
  • NVIDIA has also added to Node Health Check, which provides virtual machine high availability.

And NVIDIA has assisted with projects that address the observability, performance and other critical areas of cloud-native computing, such as:

  • Prometheus: Enhancing monitoring and alerting capabilities
  • Envoy: Improving distributed proxy performance
  • OpenTelemetry: Advancing observability in complex, distributed systems
  • Argo: Facilitating Kubernetes-native workflows and application management

Community Engagement 

NVIDIA engages the cloud-native ecosystem by participating in CNCF events and activities, including:

  • Collaboration with cloud service providers to help them onboard new workloads.
  • Participation in CNCF’s special interest groups and working groups on AI discussions.
  • Participation in industry events such as KubeCon + CloudNativeCon, where it shares insights on GPU acceleration for AI workloads.
  • Work with CNCF-adjacent projects in the Linux Foundation as well as many partners.

This translates into extended benefits for developers, such as improved efficiency in managing AI and ML workloads; enhanced scalability and performance of cloud-native applications; better resource utilization, which can lead to cost savings; and simplified deployment and management of complex AI infrastructures.

As AI and machine learning continue to transform industries, NVIDIA is helping advance cloud-native technologies to support compute-intensive workloads. This includes facilitating the migration of legacy applications and supporting the development of new ones.

These contributions to the open-source community help developers harness the full potential of AI technologies and strengthen Kubernetes and other CNCF projects as the tools of choice for AI compute workloads.

Check out NVIDIA’s keynote at KubeCon + CloudNativeCon North America 2024 delivered by Chris Lamb, where he discusses the importance of CNCF projects in building and delivering AI in the cloud and NVIDIA’s contributions to the community to push the AI revolution forward.

From Seed to Stream: ‘Farming Simulator 25’ Sprouts on GeForce NOW

Grab a pitchfork and fire up the tractor — the fields of GeForce NOW are about to get a whole lot greener with Farming Simulator 25.

Whether looking for a time-traveling adventure, cozy games or epic action, GeForce NOW has something for everyone with over 2,000 games in its cloud library. Nine titles arrive this week, including the new 4X historical grand strategy game Ara: History Untold from Oxide Games and Xbox Game Studios.

And in this season of giving, GeForce NOW will offer members new rewards and more this month. This week, GeForce NOW is spreading cheer with a new reward for members that’s sure to delight Throne and Liberty fans. Get ready to add a dash of mischief and a sprinkle of wealth to the epic adventures in the sprawling world of this massively multiplayer online role-playing game.

Plus, the NVIDIA app is officially released for download this week. GeForce users can use it to access GeForce NOW to play their games with RTX performance when they’re away from their gaming rigs or don’t want to wait around for their games to update and patch.

A Cloud Gaming Bounty

Get ready to plow the fields and tend to crops anywhere with GeForce NOW.

Farming Simulator 25 on GeForce NOW

Farming Simulator 25 from Giants Software launched in the cloud for members to stream, bringing a host of new features and improvements — including the introduction of rice as a crop type, complete with specialized machinery and techniques for planting, flooding fields and harvesting.

This expansion into rice farming is accompanied by a new Asian-themed map that offers players a lush landscape filled with picturesque rice paddies to cultivate. The game will also include two other diverse environments: a spacious North American setting and a scenic Central European location, allowing farmers to build their agricultural empires in varied terrains. Don’t forget about the addition of water buffaloes and goats, as well as the introduction of animal offspring for a new layer of depth to farm management.

Be the cream of the crop streaming with a Performance or Ultimate membership. Performance members stream at up to 1440p and 60 frames per second, and Ultimate members stream at up to 4K and 120 fps for the most incredible levels of realism and variety. Whether tackling agriculture, forestry and animal husbandry single-handedly or together with friends in cooperative multiplayer mode, experience farming life like never before with GeForce NOW.

Mischief Managed

Whether new to the game or a seasoned adventurer, GeForce NOW members can claim a special PC-exclusive reward to use in Amazon Games’ hit title Throne and Liberty. The reward includes 200 Ornate Coins and a PC-exclusive mischievous youngster named Gneiss Amitoi that will enhance the Throne and Liberty journey as members forge alliances, wage epic battles and uncover hidden treasures.

Throne and Liberty on GeForce NOW

Ornate Coins allow players to acquire morphs for animal shapeshifting, autonomous pets named Amitois, exclusive cosmetic items, experience boosters and inventory expansions. Gneiss Youngster Amitoi is a toddler-aged prankster that randomly targets players and non-playable characters with its tricks. While some of its mischief can be mean-spirited, it just wants attention, and will pout and roll back to its adventurer’s side if ignored, adding an entertaining dynamic to the journey through the world of Throne and Liberty.

Members who’ve opted in to GeForce NOW’s Rewards program can check their email for instructions on how to redeem the reward. Ultimate and Performance members can start redeeming the reward today, while free members will be able to claim it starting tomorrow, Nov. 15. It’s available through Tuesday, Dec. 10, first come, first served.

Rewriting History

Ara History Untold on GeForce NOW

Explore, build, lead and conquer a nation in Ara: History Untold, where every choice will shape the world and define a player’s legacy. It’s now available for GeForce NOW members to stream.

Ara: History Untold offers a fresh take on 4X historical grand strategy games. Players will prove their worth by guiding their citizens through history to the pinnacles of human achievement. Explore new lands, develop arts and culture, and engage in diplomacy — or combat — with other nations, before ultimately claiming the mantle of the greatest nation of all time.

Members can craft their own unique story of triumph and achievement by streaming the game across devices from the cloud. GeForce NOW Performance and Ultimate members can enjoy longer gaming sessions and faster access to servers than free users, perfect for crafting sprawling empires and engaging in complex diplomacy without worrying about local hardware limitations.

New Games Are Knocking

GeForce NOW brings the new Wuthering Waves update “When the Night Knocks” for members this week. Version 1.4 brings a wealth of new content, including two new Resonators, Camellya and Lumi, along with powerful new weapons, including the five-star Red Spring and the four-star event weapon Somnoire Anchor. Dive into the Somnoire Adventure Event, Somnium Labyrinth, and enjoy a variety of log-in rewards, combat challenges and exploration activities. The update also includes Camellya’s companion story, a new Phantom Echo and introduces the exciting Weapon Projection feature.

Members can look for the following games available to stream in the cloud this week:

  • Farming Simulator 25 (New release on Steam, Nov. 12)
  • Sea Power: Naval Combat in the Missile Age (New release on Steam, Nov. 12)
  • Industry Giant 4.0 (New release on Steam, Nov. 15)
  • Ara: History Untold (Steam and Xbox, available on PC Game Pass)
  • Call of Duty: Black Ops Cold War (Steam and Battle.net)
  • Call of Duty: Vanguard (Steam and Battle.net)
  • Magicraft (Steam)
  • Crash Bandicoot N. Sane Trilogy (Steam and Xbox, available on PC Game Pass)
  • Spyro Reignited Trilogy (Steam and Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

Keeping an AI on Diabetes Risk: Gen AI Model Predicts Blood Sugar Levels Four Years Out

Diabetics — or others monitoring their sugar intake — may look at a cookie and wonder, “How will eating this affect my glucose levels?” A generative AI model can now predict the answer.

Researchers from the Weizmann Institute of Science, Tel Aviv-based startup Pheno.AI and NVIDIA led the development of GluFormer, an AI model that can predict an individual’s future glucose levels and other health metrics based on past glucose monitoring data.

Data from continuous glucose monitoring could help more quickly diagnose patients with prediabetes or diabetes, according to Harvard Health Publishing and NYU Langone Health. GluFormer’s AI capabilities can further enhance the value of this data, helping clinicians and patients spot anomalies, predict clinical trial outcomes and forecast health outcomes up to four years in advance.

The researchers showed that, after adding dietary intake data into the model, GluFormer can also predict how a person’s glucose levels will respond to specific foods and dietary changes, enabling precision nutrition.

Accurate predictions of glucose levels for those at high risk of developing diabetes could enable doctors and patients to adopt preventative care strategies sooner, improving patient outcomes and reducing the economic impact of diabetes, which could reach $2.5 trillion globally by 2030.

AI tools like GluFormer have the potential to help the hundreds of millions of adults with diabetes. The condition currently affects around 10% of the world’s adults — a figure that could potentially double by 2050 to impact over 1.3 billion people. It’s one of the 10 leading causes of death globally, with complications including kidney damage, vision loss and heart problems.

GluFormer is a transformer model, a kind of neural network architecture that tracks relationships in sequential data. It’s the same architecture as OpenAI’s GPT models — in this case generating glucose levels instead of text.

“Medical data, and continuous glucose monitoring in particular, can be viewed as sequences of diagnostic tests that trace biological processes throughout life,” said Gal Chechik, senior director of AI research at NVIDIA. “We found that the transformer architecture, developed for long text sequences, can take a sequence of medical tests and predict the results of the next test. In doing so, it learns something about how the diagnostic measurements develop over time.”
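As a rough illustration of that idea (and not the actual GluFormer code), the sketch below builds a small causal transformer in PyTorch that treats discretized glucose readings as tokens and learns to predict the next one, the same next-token setup GPT-style models use for text. The vocabulary size, model dimensions and sequence length are illustrative assumptions.

```python
# Minimal sketch (not GluFormer itself): a decoder-style transformer over
# discretized continuous-glucose-monitoring readings, trained to predict the
# next reading autoregressively. Sizes below are illustrative choices.
import torch
import torch.nn as nn

N_BINS, D_MODEL, SEQ_LEN = 256, 128, 96   # 96 readings = 24 h at 15-min intervals

class GlucoseTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_BINS, D_MODEL)
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, D_MODEL))
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, N_BINS)    # logits over glucose bins

    def forward(self, tokens):                    # tokens: (batch, seq)
        x = self.embed(tokens) + self.pos[:, : tokens.size(1)]
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.blocks(x, mask=causal)           # causal mask = autoregressive
        return self.head(x)                       # predict the next reading

model = GlucoseTransformer()
batch = torch.randint(0, N_BINS, (4, SEQ_LEN))    # 4 synthetic CGM traces
logits = model(batch)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, N_BINS), batch[:, 1:].reshape(-1))
```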

The model was trained on 14 days of glucose monitoring data from over 10,000 non-diabetic study participants, with data collected every 15 minutes through a wearable monitoring device. The data was collected as part of the Human Phenotype Project, an initiative by Pheno.AI, a startup that aims to improve human health through data collection and analysis.

“Two important factors converged at the same time to enable this research: the maturing of generative AI technology powered by NVIDIA and the collection of large-scale health data by the Weizmann Institute,” said the paper’s lead author, Guy Lutsker, an NVIDIA researcher and Ph.D. student at the Weizmann Institute of Science. “It put us in the unique position to extract interesting medical insights from the data.”

The research team validated GluFormer across 15 other datasets and found it generalizes well to predict health outcomes for other groups, including those with prediabetes, type 1 and type 2 diabetes, gestational diabetes and obesity.

They used a cluster of NVIDIA Tensor Core GPUs to accelerate model training and inference.

Beyond glucose levels, GluFormer can predict medical values including visceral adipose tissue, a measure of the amount of body fat around organs like the liver and pancreas; systolic blood pressure, which is associated with diabetes risk; and apnea-hypopnea index, a measurement for sleep apnea, which is linked to type 2 diabetes.

Read the GluFormer research paper on arXiv.

NVIDIA Ranks No. 1 as Forbes Debuts List of America’s Best Companies 2025

NVIDIA ranked No. 1 on Forbes magazine’s new list — America’s Best Companies — based on more than 60 measures in nearly a dozen categories that cover financial performance, customer and employee satisfaction, sustainability, remote work policies and more.

Forbes stated that the company thrived in numerous areas, “particularly employee satisfaction, earning high ratings in career opportunities, company benefits and culture,” as well as financial strength.

About 2,000 of the largest public companies in the U.S. were eligible, with 300 making the list.

Beau Davidson, vice president of employee experience at NVIDIA, told Forbes that the company has created systemic opportunities to listen to its staff (such as quarterly surveys, CEO Q&As and a virtual suggestion box) and then takes action on concerns ranging from benefits to cafe snacks.

NVIDIA has also championed Free Days — two days each quarter where the entire company closes. “It allows us to take a break as a company,” Davidson told Forbes. NVIDIA also offers onsite counselors and a careers week with programs and training for workers to pursue internal job opportunities.

NVIDIA enjoys a low rate of employee turnover — widely viewed as a sign of employee happiness, according to People Data Labs, Forbes’ data provider on workforce stability.

For a full list of rankings, view Forbes’ America’s Best Companies 2025 list.

Check out the NVIDIA Careers page and learn more about NVIDIA Life.

Indonesia Tech Leaders Team With NVIDIA and Partners to Launch Nation’s AI

Working with NVIDIA and its partners, Indonesia’s technology leaders have launched an initiative to bring sovereign AI to the nation’s more than 277 million Indonesian speakers.

The collaboration is grounded in a broad public-private partnership that reflects the nation’s concept of “gotong royong,” a term describing a spirit of mutual assistance and community collaboration.

NVIDIA founder and CEO Jensen Huang joined Indonesia Minister for State-Owned Enterprises Erick Thohir, Indosat Ooredoo Hutchison (IOH) President Director and CEO Vikram Sinha, GoTo CEO Patrick Walujo and other leaders in Jakarta to celebrate the launch of Sahabat-AI.

Sahabat-AI is a collection of open-source Indonesian large language models (LLMs) that local industries, government agencies, universities and research centers can use to create generative AI applications. Built with NVIDIA NeMo and NVIDIA NIM microservices, the models were launched today at Indonesia AI Day, a conference focused on enabling AI sovereignty and driving AI-powered digital independence in the country.

Built by Indonesians, for Indonesians, Sahabat-AI models understand local contexts and enable people to build generative AI services and applications in Bahasa Indonesia and various local languages. The models form the foundation of a collaborative effort to empower Indonesia through a locally developed, open-source LLM ecosystem.

“Artificial intelligence will democratize technology. It is the great equalizer,” said Huang. “The technology is complicated but the benefit is not.”

“Sahabat-AI is not just a technological achievement, it embodies Indonesia’s vision for a future where digital sovereignty and inclusivity go hand in hand,” Sinha said. “By creating an AI model that speaks our language and reflects our culture, we’re empowering every Indonesian to harness advanced technology’s potential. This initiative is a crucial step toward democratizing AI as a tool for growth, innovation and empowerment across our diverse society.”

To accelerate this initiative, IOH — one of Indonesia’s largest telecom and internet companies — earlier this year launched “GPU Merdeka by Lintasarta,” an NVIDIA-accelerated sovereign AI cloud. The GPU Merdeka cloud service operates at a BDx Indonesia AI data center powered by renewable energy.

Bolstered by the NVIDIA Cloud Partner program, IOH subsidiary Lintasarta built the high-performance AI cloud in less than three months, a feat that would’ve taken much longer without NVIDIA’s technology infrastructure. The AI cloud is now driving transformation across energy, financial services, healthcare and other industries.

The NVIDIA Cloud Partner (NCP) program provides Lintasarta with access to NVIDIA reference architectures — blueprints for building high-performance, scalable and secure data centers.

The program also offers technological and go-to-market support, access to the latest NVIDIA AI software and accelerated computing platforms, and opportunities to collaborate with NVIDIA’s extensive ecosystem of industry partners. These partners include global systems integrators like Accenture and Tech Mahindra and software companies like GoTo and Hippocratic AI, each of which is working alongside IOH to boost the telco’s sovereign AI initiatives.

Developing Industry-Specific Applications With Accenture

Partnering with leading professional services company Accenture, IOH is developing applications for industry-specific use cases based on its new AI cloud, Sahabat-AI and the NVIDIA AI Enterprise software platform.

NVIDIA CEO Huang joined Accenture CEO Julie Sweet in a fireside chat during Indonesia AI Day to discuss how the companies are supporting enterprise and industrial AI in Indonesia.

The collaboration taps into the Accenture AI Refinery platform to help Indonesian enterprises build AI solutions tailored for financial services, energy and other industries, while delivering sovereign data governance.

Initially focused on financial services, IOH’s work with Accenture and NVIDIA technologies is delivering pre-built enterprise solutions that can help Indonesian banks more quickly harness AI.

With a modular architecture, these solutions can meet clients’ needs wherever they are in their AI journeys, helping increase profitability, operational efficiency and sustainable growth.

Building the Bahasa LLM and Chatbot Services With Tech Mahindra

Built with India-based global systems integrator Tech Mahindra, the Sahabat-AI LLMs power various AI services in Indonesia.

For example, Sahabat-AI enables IOH’s AI chatbot to answer queries in the Indonesian language for various citizen and resident services. A person could ask about processes for updating their national identification card, as well as about tax rates, payment procedures, deductions and more.

The chatbot integrates with a broader citizen services platform Tech Mahindra and IOH are developing as part of the Indonesian government’s sovereign AI initiative.

Indosat developed Sahabat-AI using the NVIDIA NeMo platform for developing customized LLMs. The team fine-tuned a version of the Llama 3 8B model, customizing it for Bahasa Indonesia using a diverse dataset tailored for effective communication with users.

To further optimize performance, Sahabat-AI uses NVIDIA NIM microservices, which have demonstrated up to 2.5x greater throughput compared with standard implementations. This improvement in processing efficiency allows for faster responses and more satisfying user experiences.

In addition, NVIDIA NeMo Guardrails open-source software orchestrates dialog management and helps ensure accuracy, appropriateness and security of the LLM-based chatbot.
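Because NIM microservices expose an OpenAI-compatible API, an application could query a locally deployed Sahabat-AI endpoint along the lines of the sketch below. The base URL and model name here are placeholders rather than published identifiers, and in production NeMo Guardrails would sit between the application and the endpoint to keep dialog on topic.

```python
# Minimal sketch of querying a locally deployed NIM endpoint through its
# OpenAI-compatible API. The base URL and model name are placeholders, not
# published Sahabat-AI identifiers.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM deployment
    api_key="not-used-for-local-nim",
)

response = client.chat.completions.create(
    model="sahabat-ai-llama3-8b",           # hypothetical model identifier
    messages=[
        {"role": "user",
         "content": "Bagaimana cara memperbarui KTP saya?"},  # "How do I renew my ID card?"
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```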

Many other service capabilities tapping Sahabat-AI are also planned for development, including AI-powered healthcare services and other local applications.

Improving Indonesian Healthcare With Hippocratic AI

Among the first to tap into Sahabat-AI is healthcare AI company Hippocratic AI, which is using the models, the NVIDIA AI platform and IOH’s sovereign AI cloud to develop digital agents that can have humanlike conversations, exhibit empathic qualities, and build rapport and trust with patients across Indonesia.

Hippocratic AI employs a novel trillion-parameter constellation architecture that brings together specialized healthcare LLM agents to deliver safe, accurate digital agents.

Digital AI agents can significantly increase staff productivity by offloading time-consuming tasks, allowing human nurses and medical professionals to focus on critical duties to increase healthcare accessibility and quality of service.

IOH’s sovereign AI cloud lets Hippocratic AI keep patient data local and secure, and enables extremely low-latency AI inference for its LLMs.

Enhancing Simplicity, Accessibility for On-Demand and Financial Services With GoTo

GoTo offers technology infrastructure and solutions that help users thrive in the digital economy, including through applications spanning on-demand services for transport, food, grocery and logistics delivery, financial services and e-commerce.

The company — which operates one of Indonesia’s leading on-demand transport services, as well as a leading payment application in the country — is adopting and enhancing the new Sahabat-AI models to integrate with its AI voice assistant, called Dira.

Dira is a speech and generative AI-powered digital assistant that helps customers book rides, order food deliveries, transfer money, pay bills and more.

Tapping into Sahabat-AI, Dira is poised to deliver more localized and culturally relevant interactions with application users.

Advancing Sustainability Within Lintasarta as IOH’s AI Factory

Fundamentally, Lintasarta’s AI cloud is an AI factory — a next-generation data center that hosts advanced, full-stack accelerated computing platforms for the most computationally intensive tasks. It’ll enable regional governments, businesses and startups to build, customize and deploy generative AI applications aligned with local language and customs.

Looking forward, Lintasarta plans to expand its AI factory with the most advanced NVIDIA technologies. The infrastructure already boasts a “green” design, powered by renewable energy and sustainable technologies. Lintasarta is committed to adding value to Indonesia’s digital ecosystem with integrated, secure and sustainable technology, in line with the Golden Indonesia 2045 vision.

Beyond Indonesia, NVIDIA NIM microservices are bolstering sovereign AI models that support local languages in India, Japan, Taiwan and many other countries and regions.

NVIDIA NIM microservices, NeMo and NeMo Guardrails are available as part of the NVIDIA AI Enterprise software platform.

Learn more about NVIDIA-powered sovereign AI factories for telecommunications.

See notice regarding software product information.

2025 Predictions: AI Finds a Reason to Tap Industry Data Lakes

Since the advent of the computer age, industries have been so awash in stored data that most of it never gets put to use.

This data is estimated to be in the neighborhood of 120 zettabytes — the equivalent of 120 billion terabytes, or more than 120x the amount of every grain of sand on every beach around the globe. Now, the world’s industries are putting that untamed data to work by building and customizing large language models (LLMs).

As 2025 approaches, industries such as healthcare, telecommunications, entertainment, energy, robotics, automotive and retail are using those models, combining them with their proprietary data and gearing up to create AI that can reason.

The NVIDIA experts below focus on some of the industries that deliver $88 trillion worth of goods and services globally each year. They predict that AI that can harness data at the edge and deliver near-instantaneous insights is coming to hospitals, factories, customer service centers, cars and mobile devices near you.

But first, let’s hear AI’s predictions for AI. When asked, “What will be the top trends in AI in 2025 for industries?” both Perplexity and ChatGPT 4.0 responded that agentic AI sits atop the list alongside edge AI, AI cybersecurity and AI-driven robots.

Agentic AI is a new category of generative AI that operates virtually autonomously. It can make complex decisions and take actions based on continuous learning and analysis of vast datasets. Agentic AI is adaptable, has defined goals and can correct itself, and can chat with other AI agents or reach out to a human for help.
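As a schematic of how such an agent behaves (a self-contained toy loop, not any particular product or framework), consider the sketch below: the agent plans an action, executes it, evaluates the outcome, self-corrects and hands off to a human when confidence stays low. Every component is a stand-in.

```python
# Toy agent loop illustrating the behaviors described above; all components
# are stand-ins, not a real agent framework.
from dataclasses import dataclass
import random

@dataclass
class StepResult:
    goal_reached: bool
    confidence: float

def propose_action(goal, history):
    return f"attempt {len(history) + 1} toward: {goal}"

def execute(action):
    # Stand-in for a tool call or LLM step.
    return StepResult(goal_reached=random.random() > 0.6,
                      confidence=random.uniform(0.3, 1.0))

def run_agent(goal, max_steps=10, confidence_floor=0.4):
    history = []
    for _ in range(max_steps):
        action = propose_action(goal, history)
        result = execute(action)
        history.append((action, result))
        if result.goal_reached:
            return f"done after {len(history)} step(s)"
        if result.confidence < confidence_floor:
            return "escalated to a human reviewer"
    return "escalated to a human reviewer"

print(run_agent("summarize last quarter's support tickets"))
```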

Now, hear from NVIDIA experts on what to expect in the year ahead:

Kimberly Powell
Vice President of Healthcare

Human-robotic interaction: Robots will assist human clinicians in a variety of ways, from understanding and responding to human commands, to performing and assisting in complex surgeries.

This is being made possible by digital twins, simulation and AI that train and test robotic systems in virtual environments to reduce risks associated with real-world trials. The same approach can also train robots to react in virtually any scenario, enhancing their adaptability and performance across different clinical situations.

New virtual worlds for training robots to perform complex tasks will make autonomous surgical robots a reality. These surgical robots will perform complex surgical tasks with precision, reducing patient recovery times and decreasing the cognitive workload for surgeons.

Digital health agents: The dawn of agentic AI and multi-agent systems will address the existential challenges of workforce shortages and the rising cost of care.

Administrative health services will be handled by digital humans that take notes for you or make your next appointment — introducing an era of services delivered by software and birthing a service-as-a-software industry.

Patient experience will be transformed with always-on, personalized care services while healthcare staff will collaborate with agents that help them reduce clerical work, retrieve and summarize patient histories, and recommend clinical trials and state-of-the-art treatments for their patients.

Drug discovery and design AI factories: Just as ChatGPT can generate an email or a poem without putting a pen to paper for trial and error, generative AI models in drug discovery can liberate scientific thinking and exploration.

Techbio and biopharma companies have begun combining models that generate, predict and optimize molecules to explore the near-infinite possible target drug combinations before going into time-consuming and expensive wet lab experiments.

The drug discovery and design AI factories will consume all wet lab data, refine AI models and redeploy those models — improving each experiment by learning from the previous one. These AI factories will shift the industry from a discovery process to a design and engineering one.

Rev Lebaredian
Vice President of Omniverse and Simulation Technology

Let’s get physical (AI, that is): Getting ready for AI models that can perceive, understand and interact with the physical world is one challenge enterprises will race to tackle.

While LLMs require reinforcement learning largely in the form of human feedback, physical AI needs to learn in a “world model” that mimics the laws of physics. Large-scale physically based simulations are allowing the world to realize the value of physical AI through robots by accelerating the training of physical AI models and enabling continuous training in robotic systems across every industry.

Cheaper by the dozen: In addition to their smarts (or lack thereof), one big factor that has slowed adoption of humanoid robots has been affordability. As agentic AI brings new intelligence to robots, though, volume will pick up and costs will come down sharply. The average cost of an industrial robot is expected to drop to $10,800 in 2025, down from $46,000 in 2010 and $27,000 in 2017. As these devices become significantly cheaper, they’ll become as commonplace across industries as mobile devices are.

Deepu Talla
Vice President of Robotics and Edge Computing

Redefining robots: When people think of robots today, they usually picture autonomous mobile robots (AMRs), manipulator arms or humanoids. But tomorrow’s robots are set to be autonomous systems that perceive, reason, plan and act — then learn.

Soon we’ll be thinking of robots embodied everywhere from surgical rooms and data centers to warehouses and factories. Even traffic control systems or entire cities will be transformed from static, manually operated systems to autonomous, interactive systems embodied by physical AI.

The rise of small language models: To improve the functionality of robots operating at the edge, expect to see the rise of small language models that are energy-efficient and avoid latency issues associated with sending data to data centers. The shift to small language models in edge computing will improve inference in a range of industries, including automotive, retail and advanced robotics.

Kevin Levitt
Global Director of Financial Services

AI agents boost firm operations: AI-powered agents will be deeply integrated into the financial services ecosystem, improving customer experiences, driving productivity and reducing operational costs.

AI agents will take every form based on each financial services firm’s needs. Human-like 3D avatars will take requests and interact directly with clients, while text-based chatbots will summarize thousands of pages of data and documents in seconds to deliver accurate, tailored insights to employees across all business functions.

AI factories become table stakes: AI use cases in the industry are exploding. This includes improving identity verification for anti-money laundering and know-your-customer regulations, reducing false positives for transaction fraud and generating new trading strategies to improve market returns. AI also is automating document management, reducing funding cycles to help consumers and businesses on their financial journeys.

To capitalize on opportunities like these, financial institutions will build AI factories that use full-stack accelerated computing to maximize performance and utilization to build AI-enabled applications that serve hundreds, if not thousands, of use cases — helping set themselves apart from the competition.

AI-assisted data governance: Due to the sensitive nature of financial data and stringent regulatory requirements, governance will be a priority for firms as they use data to create reliable and legal AI applications, including for fraud detection, predictions and forecasting, real-time calculations and customer service.

Firms will use AI models to assist in the structure, control, orchestration, processing and utilization of financial data, making the process of complying with regulations and safeguarding customer privacy smoother and less labor intensive. AI will be the key to making sense of and deriving actionable insights from the industry’s stockpile of underutilized, unstructured data.

Richard Kerris
Vice President of Media and Entertainment

Let AI entertain you: AI will continue to revolutionize entertainment with hyperpersonalized content on every screen, from TV shows to live sports. Using generative AI and advanced vision-language models, platforms will offer immersive experiences tailored to individual tastes, interests and moods. Imagine teaser images and sizzle reels crafted to capture the essence of a new show or live event and create an instant personal connection.

In live sports, AI will enhance accessibility and cultural relevance, providing language dubbing, tailored commentary and local adaptations. AI will also elevate binge-watching by adjusting pacing, quality and engagement options in real time to keep fans captivated. This new level of interaction will transform streaming from a passive experience into an engaging journey that brings people closer to the action and each other.

AI-driven platforms will also foster meaningful connections with audiences by tailoring recommendations, trailers and content to individual preferences. AI’s hyperpersonalization will allow viewers to discover hidden gems, reconnect with old favorites and feel seen. For the industry, AI will drive growth and innovation, introducing new business models and enabling global content strategies that celebrate unique viewer preferences, making entertainment feel boundless, engaging and personally crafted.

Ronnie Vasishta
Senior Vice President of Telecoms

The AI connection: Telecommunications providers will begin to deliver generative AI applications and 5G connectivity over the same network. AI radio access network (AI-RAN) will enable telecom operators to transform traditional single-purpose base stations from cost centers into revenue-producing assets capable of providing AI inference services to devices, while more efficiently delivering the best network performance.

AI agents to the rescue: The telecommunications industry will be among the first to dial into agentic AI to perform key business functions. Telco operators will use AI agents for a wide variety of tasks, from suggesting money-saving plans to customers and troubleshooting network connectivity, to answering billing questions and processing payments.

More efficient, higher-performing networks: AI also will be used at the wireless network layer to enhance efficiency, deliver site-specific learning and reduce power consumption. Using AI as an intelligent performance improvement tool, operators will be able to continuously observe network traffic, predict congestion patterns and make adjustments before failures happen, allowing for optimal network performance.

Answering the call on sovereign AI: Nations will increasingly turn to telcos — which have proven experience managing complex, distributed technology networks — to achieve their sovereign AI objectives. The trend will spread quickly across Europe and Asia, where telcos in Switzerland, Japan, Indonesia and Norway are already partnering with national leaders to build AI factories that can use proprietary, local data to help researchers, startups, businesses and government agencies create AI applications and services.

Xinzhou Wu
Vice President of Automotive

Pedal to generative AI metal: Autonomous vehicles will become more performant as developers tap into advancements in generative AI. For example, harnessing foundation models, such as vision language models, provides an opportunity to use internet-scale knowledge to solve one of the hardest problems in the autonomous vehicle (AV) field, namely that of efficiently and safely reasoning through rare corner cases.

Simulation unlocks success: More broadly, new AI-based tools will enable breakthroughs in how AV development is carried out. For example, advances in generative simulation will enable the scalable creation of complex scenarios aimed at stress-testing vehicles for safety purposes. Aside from allowing for testing unusual or dangerous conditions, simulation is also essential for generating synthetic data to enable end-to-end model training.

Three-computer approach: Effectively, new advances in AI will catalyze AV software development across the three key computers underpinning AV development — one for training the AI-based stack in the data center, another for simulation and validation, and a third in-vehicle computer to process real-time sensor data for safe driving. Together, these systems will enable continuous improvement of AV software for enhanced safety and performance of cars, trucks, robotaxis and beyond.

Marc Spieler
Senior Managing Director of Global Energy Industry

Welcoming the smart grid: Do you know when your home’s electricity use peaks each day? You soon will, as utilities around the world embrace smart meters that use AI to broadly manage their grid networks, from big power plants and substations and, now, into the home.

As the smart grid takes shape, smart meters that combine software, sensors and accelerated computing — once deemed too expensive to install in millions of homes — will alert utilities when trees in a backyard brush up against power lines or when to offer big rebates to buy back the excess power stored through rooftop solar installations.

Powering up: Delivering the optimal power stack has always been mission-critical for the energy industry. In the era of generative AI, utilities will address this issue in ways that reduce environmental impact.

Expect in 2025 to see a broader embrace of nuclear power as one clean-energy path the industry will take. Demand for natural gas also will grow as it replaces coal and other forms of energy. These resurgent forms of energy are being helped by the increased use of accelerated computing, simulation technology, AI and 3D visualization, which help optimize design, pipeline flows and storage. We’ll see the same happening at oil and gas companies, which are looking to reduce the impact of energy exploration and production.

Azita Martin
Vice President of Retail, Consumer-Packaged Goods and Quick-Service Restaurants 

Software-defined retail: Supercenters and grocery stores will become software-defined, each running computer vision and sophisticated AI algorithms at the edge. The transition will accelerate checkout, optimize merchandising and reduce shrink — the industry term for a product being lost or stolen.

Each store will be connected to a headquarters AI network, using collective data to become a perpetual learning machine. Software-defined stores that continually learn from their own data will transform the shopping experience.

Intelligent supply chain: Intelligent supply chains created using digital twins, generative AI, machine learning and AI-based solvers will drive billions of dollars in labor productivity and operational efficiencies. Digital twin simulations of stores and distribution centers will optimize layouts to increase in-store sales and accelerate throughput in distribution centers.

Agentic robots working alongside associates will load and unload trucks, stock shelves and pack customer orders. Also, last-mile delivery will be enhanced with AI-based routing optimization solvers, allowing products to reach customers faster while reducing vehicle fuel costs.

Peak Training: Blackwell Delivers Next-Level MLPerf Training Performance

Generative AI applications that use text, computer code, protein chains, summaries, video and even 3D graphics require data-center-scale accelerated computing to efficiently train the large language models (LLMs) that power them.

In MLPerf Training 4.1 industry benchmarks, the NVIDIA Blackwell platform delivered impressive results on workloads across all tests — and up to 2.2x more performance per GPU on LLM benchmarks, including Llama 2 70B fine-tuning and GPT-3 175B pretraining.

In addition, NVIDIA’s submissions on the NVIDIA Hopper platform continued to hold at-scale records on all benchmarks, including a submission with 11,616 Hopper GPUs on the GPT-3 175B benchmark.

Leaps and Bounds With Blackwell

The first Blackwell training submission to the MLCommons Consortium — which creates standardized, unbiased and rigorously peer-reviewed testing for industry participants — highlights how the architecture is advancing generative AI training performance.

For instance, the architecture includes new kernels that make more efficient use of Tensor Cores. Kernels are optimized, purpose-built math operations, such as matrix multiplications, that are at the heart of many deep learning algorithms.

Blackwell’s higher per-GPU compute throughput and significantly larger and faster high-bandwidth memory allow it to run the GPT-3 175B benchmark on fewer GPUs while achieving excellent per-GPU performance.

Taking advantage of larger, higher-bandwidth HBM3e memory, just 64 Blackwell GPUs were able to run the GPT-3 LLM benchmark without compromising per-GPU performance. The same benchmark run using Hopper needed 256 GPUs.

The Blackwell training results follow an earlier submission to MLPerf Inference 4.1, where Blackwell delivered up to 4x more LLM inference performance versus the Hopper generation. Taking advantage of the Blackwell architecture’s FP4 precision, along with the NVIDIA QUASAR Quantization System, the submission revealed powerful performance while meeting the benchmark’s accuracy requirements.

Relentless Optimization

NVIDIA platforms undergo continuous software development, racking up performance and feature improvements in training and inference for a wide variety of frameworks, models and applications.

In this round of MLPerf training submissions, Hopper delivered a 1.3x improvement on GPT-3 175B per-GPU training performance since the introduction of the benchmark.

NVIDIA also submitted large-scale results on the GPT-3 175B benchmark using 11,616 Hopper GPUs connected with NVIDIA NVLink and NVSwitch high-bandwidth GPU-to-GPU communication and NVIDIA Quantum-2 InfiniBand networking.

NVIDIA Hopper GPUs have more than tripled scale and performance on the GPT-3 175B benchmark since last year. In addition, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA increased performance by 26% using the same number of Hopper GPUs, reflecting continued software enhancements.

NVIDIA’s ongoing work on optimizing its accelerated computing platforms enables continued improvements in MLPerf test results — driving performance up in containerized software, bringing more powerful computing to partners and customers on existing platforms and delivering more return on their platform investment.

Partnering Up

NVIDIA partners, including system makers and cloud service providers like ASUSTek, Azure, Cisco, Dell, Fujitsu, Giga Computing, Lambda Labs, Lenovo, Oracle Cloud, Quanta Cloud Technology and Supermicro, also submitted impressive results to MLPerf in this latest round.

A founding member of MLCommons, NVIDIA sees the role of industry-standard benchmarks and benchmarking best practices in AI computing as vital. With access to peer-reviewed, streamlined comparisons of AI and HPC platforms, companies can keep pace with the latest AI computing innovations and access crucial data that can help guide important platform investment decisions.

Learn more about the latest MLPerf results on the NVIDIA Technical Blog.

‘Every Industry, Every Company, Every Country Must Produce a New Industrial Revolution,’ Says NVIDIA CEO Jensen Huang at AI Summit Japan

The next technology revolution is here, and Japan is poised to be a major part of it.

At NVIDIA’s AI Summit Japan on Wednesday, NVIDIA founder and CEO Jensen Huang and SoftBank Chairman and CEO Masayoshi Son shared a sweeping vision for Japan’s role in the AI revolution.

Speaking in Tokyo, Huang underscored that AI infrastructure is essential to drive global transformation.

In his talk, he emphasized two types of AI: digital and physical. Digital is represented by AI agents, while physical AI is represented by robotics.

He said Japan is poised to create both types, leveraging its unique language, culture and data.

“Every industry, every company, every country must produce a new industrial revolution,” Huang said, pointing to AI as the catalyst for this shift.

Huang emphasized Japan’s unique position to lead in this AI-driven economy, praising the country’s history of innovation and engineering excellence as well as its technological and cultural panache.

“I can’t imagine a better country to lead the robotics AI revolution than Japan,” Huang said. “You have created some of the world’s best robots. These are the robots we grew up with, the robots we’ve loved our whole lives.”

Huang highlighted the potential of agentic AI — advanced digital agents capable of understanding, reasoning, planning and taking action — to transform productivity across industries.

He noted that these agents can tackle complex, multi-step tasks, effectively doing “50% of the work for 100% of the people,” turbocharging human productivity.

By turning data into actionable insights, agentic AI offers companies powerful tools to enhance operations without replacing human roles.

SoftBank and NVIDIA to Build Japan’s Largest AI Supercomputer

Among the summit’s major announcements was NVIDIA’s collaboration with SoftBank to build Japan’s most powerful AI supercomputer.

NVIDIA CEO Jensen Huang showcases Blackwell, the company’s advanced AI supercomputing platform, at the AI Summit Japan in Tokyo.

Using the NVIDIA Blackwell platform, SoftBank’s DGX SuperPOD will deliver extensive computing power to drive sovereign AI initiatives, including large language models (LLMs) specifically designed for Japan.

“With your support, we are creating the largest AI data center here in Japan,” said Son, a visionary who, as Huang noted, has been a part of every major technology revolution of the past half-century.

“We should provide this platform to many of those researchers, the students, the startups, so that we can encourage … so that they have a better access [to] much more compute.”

Huang noted that the AI supercomputer project is just one part of the collaboration.

SoftBank also successfully piloted the world’s first combined AI and 5G network, known as AI-RAN (radio access network). The network enables AI and 5G workloads to run simultaneously, opening new revenue possibilities for telecom providers.

“Now with this intelligence network that we densely connect each other, [it will] become one big neural brain for the infrastructure intelligence to Japan,” Son said. “That will be amazing.”

Accelerated Computing and Japan’s AI Infrastructure

Huang emphasized the profound synergy between AI and robotics, highlighting how advancements in artificial intelligence have created new possibilities for robotics across industries.

He noted that as AI enables machines to learn, adapt and perform complex tasks autonomously, robotics is evolving beyond traditional programming.

Huang spoke to developers, researchers and AI industry leaders at this week’s NVIDIA AI Summit Japan.

“I hope that Japan will take advantage of the latest breakthroughs in artificial intelligence and combine that with your world-class expertise in mechatronics,” Huang said. “No country in the world has greater skills in mechatronics than Japan, and this is an extraordinary opportunity to seize.”

NVIDIA aims to develop a national AI infrastructure network through partnerships with Japanese cloud leaders such as GMO Internet Group and SAKURA internet.

Supported by the Japan Ministry of Economy, Trade and Industry, this infrastructure will support sectors like healthcare, automotive and robotics by providing advanced AI resources to companies and research institutions across Japan.

“This is the beginning of a new era… we can’t miss this time,” Huang added.

Read more about all of today’s announcements in the NVIDIA AI Summit Japan online press kit.

Japan’s Market Innovators Bring Physical AI to Industries With NVIDIA AI and Omniverse

Robots transporting heavy metal at a Toyota plant. Yaskawa’s robots working alongside human coworkers in factories. To advance efforts like these virtually, Rikei Corporation develops digital twin tooling to assist planning.

And if that weren’t enough, diversified retail holdings company Seven & i Holdings is running digital twin simulations to enhance customer experiences.

Physical AI and industrial AI, powered by NVIDIA Omniverse, Isaac and Metropolis, are propelling Japan’s industrial giants into the future. Such pioneering moves in robotic manipulation, industrial inspection and digital twins for human assistance are on full display at NVIDIA AI Summit Japan this week.

The arrival of generative AI-driven leaps in robotics couldn’t come at a better time. With its population in decline, Japan has a critical need for advanced robotics. A report in the Japan Times said the nation is expected to face a shortage of 11 million workers by 2040.

Industrial and physical AI-based systems are today being accelerated by a three-computer solution that enables robot AI model training, simulation and testing, and deployment.

Looking Into the Future With Toyota Robotics

Toyota is tapping NVIDIA Omniverse for physics simulation of robot motion and gripping to improve its metal-forging capabilities. That’s helping to reduce the time it takes to teach robots to transport forging materials.

A digital representation of a robotic arm moving inside an assembly structure. Image courtesy of Toyota.

Toyota is working to verify that it can reproduce its robotic work handling and robot motion with the accuracy of NVIDIA PhysX in Omniverse. Omniverse enables modeling digital twins of factories and other environments that accurately duplicate the physical characteristics of objects and systems in the real world, which is foundational to building physical AI for driving next-generation autonomous systems.

Omniverse enables Toyota to model properties like mass, gravity and friction and compare the results with physical tests. This supports work in manipulation and robot motion.

It also allows Toyota to replicate the expertise of its senior robotics employees for tasks requiring a high degree of skill. And it increases safety and throughput, since factory personnel are not required to work in the high temperatures and harsh environments associated with metal-forging production lines.

Driving Automation, Yaskawa Harnesses NVIDIA Isaac 

Yaskawa is a leading global robotics manufacturer that has shipped more than 600,000 robots and offers nearly 200 robot models, including industrial robots for the automotive industry, collaborative robots and dual-arm robots.

A robotic arm moving items into storage bins. Image courtesy of YASKAWA.

The Japanese robotics leader is expanding into new markets with its MOTOMAN NEXT adaptive robot, which brings greater task adaptability, versatility and flexibility. Driven by advanced robotics enabled by the NVIDIA Isaac and Omniverse platforms, Yaskawa’s adaptive robots are focused on delivering automation for the food, logistics, medical and agriculture industries.

Using NVIDIA Isaac Manipulator, a reference workflow of NVIDIA-accelerated libraries and AI models, Yaskawa is integrating AI into its industrial arm robots, giving them the ability to complete a wide range of industrial automation tasks.

Yaskawa is using FoundationPose for precise 6D pose estimation and tracking. This AI model enhances the adaptability and efficiency of Yaskawa’s robotic arms, and the accompanying motion control enables sim-to-real transfer, making the arms versatile and effective at performing complex tasks across a wide range of industries.
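To ground what a 6D pose estimate provides downstream, here is a short sketch of the math a manipulation pipeline typically applies to such an output. This is not FoundationPose’s API; the numbers and frame names are illustrative assumptions.

```python
# Minimal sketch of consuming a 6D pose estimate (rotation + translation)
# in a manipulation pipeline: build the object's transform in the camera
# frame, then chain it with known camera extrinsics to get a grasp target
# in the robot's base frame. Values are illustrative.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical pose-estimator output for one detected object:
quat_xyzw = [0.0, 0.0, 0.3827, 0.9239]   # 45-degree yaw
position_m = [0.42, -0.10, 0.55]         # meters, camera frame

T_cam_obj = np.eye(4)                    # object pose in the camera frame
T_cam_obj[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
T_cam_obj[:3, 3] = position_m

# Known extrinsics from calibration: camera relative to the robot base.
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.0, 0.0, 1.2]

# Chain the transforms to get a grasp target in the robot's base frame.
T_base_obj = T_base_cam @ T_cam_obj
print("grasp target (base frame):", np.round(T_base_obj[:3, 3], 3))
```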

Additionally, Yaskawa is embracing digital twin and robotics simulations powered by NVIDIA Isaac Sim, built on Omniverse, to accelerate the development and deployment of Yaskawa’s robotic solutions, saving time and resources.

Creating Customer Experiences at Seven & i Holdings With Omniverse, Metropolis

Seven & i Holdings is one of Japan’s largest diversified retail holding companies. The retailer is running a proof of concept that uses digital simulation to understand customer behavior at its retail outlets.

Seven & i Holdings is pushing its research activities by tapping into NVIDIA Omniverse and NVIDIA Metropolis to better understand operations across its retail stores. Using NVIDIA Metropolis, a set of developer tools for building vision AI applications, store operations are analyzed with computer vision models, helping improve efficiency and safety. A digital twin of this environment is developed in an Omniverse-based application, along with assets from Blender and animations from SideFX Houdini.

A digital retail store with a person walking down an aisle; simulated sensor captures can be visualized overhead. Image courtesy of Seven & i Holdings Co.

Combining digital twins with price recognition, object tracking and other AI-based computation enables the company to generate useful behavioral insights about retail environments and customer interactions. Such information offers opportunities to dynamically generate and show personalized ads on digital signage displays targeted to customers.

The retailer plans to use Metropolis and the NVIDIA Merlin recommendation engine framework to create tailored suggestions to individual shoppers, responding to customer interests — based on data — like never before.

Virtually Revolutionizing, Rikei Corporation Launches Asset Library for Digital Twins

Rikei Corporation, a systems solutions provider, specializes in spatial computing and extended reality technology for the manufacturing sector.

The technology company has developed JAPAN USD Factory, which is a digital twin asset library specifically for the Japanese manufacturing industry. Developed on NVIDIA Omniverse, JAPAN USD Factory reproduces materials and equipment commonly used in manufacturing sites across Japan in a digital form so that Japanese manufacturers can more easily build digital twins of their factories and warehouses.

A digital twin design of a manufacturing plant where a number of bins are stored on shelving. Image courtesy of Rikei.

Rikei Corporation aims to streamline various stages of design, simulation and operations for the manufacturing process with these digital assets to enhance productivity with digital twins.

Developed with OpenUSD, a universal 3D asset interchange, JAPAN USD Factory allows developers to access its asset libraries for things like pallets and racks, offering seamless integration across tools and workflows.
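As a hedged illustration of how such a library can be consumed, the sketch below composes a small layout by referencing a pallet asset with the OpenUSD Python API. The asset file paths are hypothetical, not the actual JAPAN USD Factory file names.

```python
# Minimal OpenUSD sketch of referencing library assets such as pallets into a
# factory layout. Requires the usd-core package; asset paths are hypothetical.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("factory_layout.usda")
UsdGeom.Xform.Define(stage, "/Factory")

# Place two instances of a pallet asset by reference, not by copying geometry.
for i, x in enumerate([0.0, 1.5]):
    prim = UsdGeom.Xform.Define(stage, f"/Factory/Pallet_{i:02d}").GetPrim()
    prim.GetReferences().AddReference("./assets/pallet.usd")   # hypothetical path
    UsdGeom.XformCommonAPI(prim).SetTranslate(Gf.Vec3d(x, 0.0, 0.0))

stage.GetRootLayer().Save()   # other OpenUSD-aware tools can open the same layer
```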

To learn more, watch the NVIDIA AI Summit Japan fireside chat with NVIDIA founder and CEO Jensen Huang.
