Lab Confidential: Japan Research Keeps Healthcare Data Secure

Established 77 years ago, Mitsui & Co. stays vibrant by building businesses and ecosystems with new technologies like generative AI and confidential computing.

Digital transformation takes many forms at the Tokyo-based conglomerate with 16 divisions. In one case, it’s an autonomous trucking service, in another it’s a geospatial analysis platform. Mitsui even collaborates with a partner at the leading edge of quantum computing.

One new subsidiary, Xeureka, aims to accelerate R&D in healthcare, where bringing a new drug to market can take more than a decade and over a billion dollars.

“We create businesses using new digital technology like AI and confidential computing,” said Katsuya Ito, a project manager in Mitsui’s digital transformation group. “Most of our work is done in collaboration with tech companies — in this case NVIDIA and Fortanix,” a San Francisco-based security software company.

In Pursuit of Big Data

Though only three years old, Xeureka has already completed a proof of concept addressing one of drug discovery’s biggest problems: getting enough data.

Speeding drug discovery requires powerful AI models built with datasets larger than most pharmaceutical companies have on hand. Until recently, sharing data across companies was unthinkable because datasets often contain private patient information as well as chemical formulas proprietary to the drug company.

Enter confidential computing, a way of processing data in a protected part of a GPU or CPU that acts like a black box for an organization’s most important secrets.

To ensure their data is kept confidential at all times, banks, government agencies and even advertisers are adopting the technology, which is backed by a consortium of some of the world’s largest companies.

A Proof of Concept for Privacy

To validate that confidential computing would allow its customers to safely share data, Xeureka created two imaginary companies, each with a thousand drug candidates. Each company’s dataset was used separately to train an AI model to predict the chemicals’ toxicity levels. Then the data was combined to train a similar, but larger AI model.
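The intuition behind the two-company experiment can be shown with a toy, entirely synthetic example (this is a sketch for illustration, not Xeureka’s actual models or data): a simple threshold classifier trained on pooled samples can localize a decision boundary that neither company’s data pins down on its own.

```python
# Toy illustration (synthetic data, not Xeureka's pipeline): pooling two
# companies' datasets can pin down a decision boundary better than either
# dataset alone. Each sample is (feature_value, is_toxic); the true
# toxicity threshold in this toy world is 1.0.

def learn_threshold(samples):
    """Place the boundary midway between the classes' nearest examples."""
    nontoxic = [x for x, toxic in samples if not toxic]
    toxic = [x for x, toxic in samples if toxic]
    return (max(nontoxic) + min(toxic)) / 2

def accuracy(threshold, test_set):
    hits = sum((x > threshold) == toxic for x, toxic in test_set)
    return hits / len(test_set)

# Company A's data clusters below the boundary, Company B's above it.
company_a = [(0.2, False), (0.4, False), (0.6, False), (1.5, True)]
company_b = [(0.9, False), (1.1, True), (1.6, True), (1.9, True)]
test_set = [(0.95, False), (1.02, True), (0.5, False), (1.3, True)]

acc_a = accuracy(learn_threshold(company_a), test_set)                     # boundary at 1.05
acc_combined = accuracy(learn_threshold(company_a + company_b), test_set)  # boundary at 1.0
print(acc_a, acc_combined)  # 0.75 1.0
```

With only Company A’s samples, the learned boundary sits too high and misclassifies a borderline toxic compound; the pooled data recovers the true boundary. Confidential computing makes this kind of pooling possible without either party exposing its raw data.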

Xeureka ran its test on NVIDIA H100 Tensor Core GPUs using security management software from Fortanix, one of the first startups to support confidential computing.

The H100 GPUs support a trusted execution environment with hardware-based engines that ensure and validate that confidential workloads are protected while in use on the GPU, without compromising performance. The Fortanix software manages data sharing, encryption keys and the overall workflow.
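The core pattern behind such a workflow is attest-then-release: the data owner hands over a decryption key only after the environment proves it is running the expected code. A conceptual Python sketch follows, with every name hypothetical; real deployments rely on hardware-signed attestation reports and the Fortanix key-management service, not application code like this.

```python
import hashlib

# Conceptual sketch of the attest-then-release pattern in confidential
# computing (all names hypothetical): the data owner releases a dataset
# decryption key only if the enclave's measured code matches an approved
# workload.

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-training-workload-v1").hexdigest()

def attestation_report(workload_code):
    """Stand-in for a hardware-signed measurement of the enclave's code."""
    return hashlib.sha256(workload_code).hexdigest()

def release_key(report, key):
    """Data owner's policy: hand over the key only for the approved workload."""
    return key if report == EXPECTED_MEASUREMENT else None

key = b"dataset-encryption-key"
good = release_key(attestation_report(b"approved-training-workload-v1"), key)
bad = release_key(attestation_report(b"tampered-workload"), key)
print(good is not None, bad is None)  # True True
```

The point of the design is that trust attaches to the measured code, not to the operator of the machine: a modified workload produces a different measurement and never receives the key.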

Up to 74% Higher Accuracy

The results were impressive: the larger model’s predictions were 65-74% more accurate, thanks to the combined dataset.

The models trained on a single company’s data showed instability and bias issues that were not present in the larger model, Ito said.

“Confidential computing from NVIDIA and Fortanix essentially alleviates the privacy and security concerns while also improving model accuracy, which will prove to be a win-win situation for the entire industry,” said Xeureka’s CTO, Hiroki Makiguchi, in a Fortanix press release.

An AI Supercomputing Ecosystem

Now, Xeureka is exploring broad applications of this technology in drug discovery research, in collaboration with the community behind Tokyo-1, Mitsui’s GPU-accelerated AI supercomputer. Announced in February, Tokyo-1 aims to enhance the efficiency of pharmaceutical companies in Japan and beyond.

Initial projects may include collaborations to predict protein structures, screen ligand-base pairs and accelerate molecular dynamics simulations with trusted services. Tokyo-1 users can harness large language models for chemistry, protein, DNA and RNA data formats through the NVIDIA BioNeMo drug discovery microservices and framework.

It’s part of Mitsui’s broader strategic growth plan to develop software and services for healthcare, such as powering Japan’s $100 billion pharma industry, the world’s third largest after the U.S. and China.

Xeureka’s services will include using AI to quickly screen billions of drug candidates, predict how candidate molecules will bind with proteins and simulate detailed chemical behaviors.

To learn more, read about NVIDIA Confidential Computing and NVIDIA BioNeMo, an AI platform for drug discovery.

NVIDIA and Global Consulting Leaders Speed AI Adoption Across Japan’s Industries

Consulting giants including Accenture, Deloitte, EY Strategy and Consulting Co., Ltd. (or EY Japan), FPT, Kyndryl and Tata Consultancy Services Japan (TCS Japan) are working with NVIDIA to establish innovation centers in Japan to accelerate the nation’s goal of embracing enterprise AI and physical AI across its industrial landscape.

The centers will use NVIDIA AI Enterprise software, local language models and NVIDIA NIM microservices to help clients in Japan advance the development and deployment of AI agents tailored to their industries’ respective needs, boosting productivity with a digital workforce.

Using the NVIDIA Omniverse platform, Japanese firms can develop digital twins and simulate complex physical AI systems, driving innovation in manufacturing, robotics and other sectors.

Like many nations, Japan is navigating complex social and demographic challenges, which are leading to a smaller workforce as older generations retire. Leaning into its manufacturing and robotics leadership, the country is seeking opportunities to solve these challenges using AI.

The Japanese government in April published a paper on its aims to become “the world’s most AI-friendly country.” AI adoption is strong and growing, as IDC reports that the Japanese AI systems market reached approximately $5.9 billion this year, with a year-on-year growth rate of 31.2%.

The consulting giants’ initiatives and activities include:

  • Accenture has established the Accenture NVIDIA Business Group and will provide solutions and services incorporating a Japanese large language model (LLM), which uses NVIDIA NIM and NVIDIA NeMo, as a Japan-specific offering. In addition, Accenture will deploy agentic AI solutions based on Accenture AI Refinery to all industries in Japan, accelerating total enterprise reinvention for its clients. In the future, Accenture plans to build new services using NVIDIA AI Enterprise and Omniverse at Accenture Innovation Hub Tokyo.
  • Deloitte is establishing its AI Experience Center in Tokyo, which will serve as an executive briefing center to showcase generative AI solutions built on NVIDIA technology. This facility builds on the Deloitte Japan NVIDIA Practice announced in June and will allow clients to experience firsthand how AI can revolutionize their operations. The center will also offer NVIDIA AI and Omniverse Blueprints to help enterprises in Japan adopt agentic AI effectively.
  • EY Strategy and Consulting Co., Ltd (EY Japan) is developing a multitude of digital transformation (DX) solutions in Japan across diverse industries including finance, retail, media and manufacturing. The new EY Japan DX offerings will be built with NVIDIA AI Enterprise to serve the country’s growing demand for digital twins, 3D applications, multimodal AI and generative AI.
  • FPT is launching FPT AI Factory in Japan with NVIDIA Hopper GPUs and NVIDIA AI Enterprise software to support the country’s AI transformation by using business data in a secure, sovereign environment. FPT is integrating the NVIDIA NeMo framework with FPT AI Studio for building, pretraining and fine-tuning generative AI models, including FPT’s multi-language LLM, named Saola. In addition, to provide end-to-end AI integration services, FPT plans to train over 1,000 software engineers and consultants domestically in Japan, and over 7,000 globally by 2026.
  • IT infrastructure services provider Kyndryl has launched a dedicated AI private cloud in Japan. Built in collaboration with Dell Technologies using the Dell AI Factory with NVIDIA, this new AI private cloud will provide a controlled, secure and sovereign location for customers to develop, test and plan implementation of AI on the end-to-end NVIDIA AI platform, including NVIDIA accelerated computing and networking, as well as the NVIDIA AI Enterprise software.
  • TCS Japan will begin offering its TCS global AI offerings built on the full NVIDIA AI stack in the automotive and manufacturing industries. These solutions will be hosted in its showcase centers at TCS Japan’s Azabudai office in Tokyo.

Located in the Tokyo and Kansai metropolitan areas, these new consulting centers offer hands-on experience with NVIDIA’s latest technologies and expert guidance — helping accelerate AI transformation, solve complex social challenges and support the nation’s economic growth.

To learn more, watch the NVIDIA AI Summit Japan fireside chat with NVIDIA founder and CEO Jensen Huang.

Editor’s note: IDC figures are sourced to IDC, 2024 Domestic AI System Market Forecast Announced, April 2024. The IDC forecast amount was converted to USD by NVIDIA, while the CAGR (31.2%) was calculated based on JPY.

Japan’s Startups Drive AI Innovation With NVIDIA Accelerated Computing

Lifelike digital humans engage with audiences in real time. Autonomous systems streamline complex logistics. And AI-driven language tools break down communication barriers on the fly.

This isn’t sci-fi. This is Tokyo’s startup scene.

Supercharged by AI — and world-class academic and industrial might — the region has become a global innovation hub. And the NVIDIA Inception program is right in the middle of it.

With over 370 AI-driven startups in the program and a 250,000-strong NVIDIA developer community, Japan’s AI startup ecosystem is as bold as it is fast-moving.

This week’s NVIDIA AI Summit Japan puts these achievements in the spotlight, capturing the region’s relentless innovation momentum.

NVIDIA founder and CEO Jensen Huang and SoftBank Group Chairman and CEO Masayoshi Son opened the summit with a fireside chat to discuss AI’s transformative role, with Jensen diving into Japan’s growing AI ecosystem and its push toward sovereign AI.

Sessions followed with leaders from METI (Japan’s Ministry of Economy, Trade and Industry), the University of Tokyo and other key players. Their success is no accident.

Tokyo’s academic powerhouses, global technology and industrial giants, and technology-savvy population of 14 million provide the underpinnings of a global AI hub that stretches from the bustling startup scene in Shibuya to new hotbeds of tech development in Chiyoda and beyond.

Supercharging Japan’s Creative Class 

Iconic works from anime to manga have not only redefined entertainment in Japan — they’ve etched themselves into global culture, inspiring fans across continents, languages and generations.

Now, Japan’s vibrant visual pop culture is spilling into AI, finding fresh ways to surprise and connect with audiences.

Take startup AiHUB’s digital celebrity Sali.

Sali isn’t just a character in the traditional sense. She’s a digital being with presence — responsive and lifelike. She blinks, she smiles, she reacts.

Here, AI is doing something quietly revolutionary, slipping under the radar to redefine how people interact with media.

At AI Summit Japan, AiHUB revealed that it will adopt the NVIDIA Avatar Cloud Engine, or ACE, in the lip-sync module of its digital human framework, providing Sali with nuanced expressions and human-like emotional depth.

ACE doesn’t just make Sali relatable — it puts her in a league of characters who transcend screens and pages.

This integration reduced development and future management costs by approximately 50% while improving the expressiveness of the avatars, according to AiHUB.

SDK Adoption: From Hesitation to High Velocity

In the global tech race, success doesn’t always hinge on the heroes you’d expect.

The unsung stars here are software development kits — those bundles of tools, libraries and documentation that cut the guesswork out of innovation. And in Japan’s fast-evolving AI ecosystem, these once-overlooked SDKs are driving an improbable revolution.

For years, Japan’s tech companies treated SDKs with caution. Now, however, with AI advancing at light speed and NVIDIA GPUs powering the engine, SDKs have moved from a quiet corner to center stage.

Take NVIDIA NeMo, a platform for building large language models, or LLMs. It’s swiftly becoming the backbone of Japan’s latest wave of real-time, AI-driven communication technologies.

One company at the forefront is Kotoba Technologies, which has cracked the code on real-time speech recognition thanks to NeMo’s powerful tools.

Under a key Japanese government grant, Kotoba’s language tools don’t just capture sound — they translate it live. It’s a blend of computational heft and human ingenuity, redefining how multilingual communication happens in non-English-speaking countries like Japan.

Kotoba’s tools are used in customer call centers and for automatic meeting minutes creation across various industries. The technology was also used to perform live transcription during the AI Summit Japan fireside chat between Huang and Son.

And if LLMs are the engines driving Japan’s AI, then companies like APTO supply the fuel. Using NVIDIA NeMo Curator, APTO is changing the game in data annotation, handling the intensive prep work that makes LLMs effective.

By refining data quality for big clients like RIKEN, Ricoh and ORIX, APTO has mastered the fine art of sifting valuable signals from noise. Through tools like WordCountFilter — an ingenious mechanism that prunes short or unnatural sentences — it’s supercharging performance.
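The idea behind a word-count filter is easy to sketch. The following is a minimal stand-in for this kind of pruning step, not NeMo Curator’s actual implementation:

```python
# Minimal stand-in for a word-count filter of the kind described above
# (not NeMo Curator's actual implementation): drop documents whose word
# count falls outside a plausible range, pruning fragments and boilerplate
# before they pollute an LLM's training set.

def word_count_filter(docs, min_words=5, max_words=10000):
    """Keep only documents with a word count inside [min_words, max_words]."""
    return [d for d in docs if min_words <= len(d.split()) <= max_words]

corpus = [
    "ok",                          # fragment: pruned
    "Click here",                  # navigation residue: pruned
    "The model was fine-tuned on curated Japanese text from public sources.",
]
clean = word_count_filter(corpus)
print(len(clean))  # 1
```

Production curation pipelines chain many such filters, but each one follows this same shape: a cheap predicate applied across the whole corpus.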

APTO’s data quality control boosted model accuracy scores and slashed training time.

Across Japan, developers are looking to move on AI fast, and they’re embracing SDKs to go further, faster.

The Power of Cross-Sector Synergy

The gears of Japan’s AI ecosystem increasingly turn in sync thanks to NVIDIA-powered infrastructure that enables startups to build on each other’s breakthroughs.

As Japan’s population ages, solutions like these address security needs as well as an intensifying labor shortage. Here, ugo and Asilla have taken on the challenge, using autonomous security systems to manage facilities across the country.

Asilla’s cutting-edge anomaly detection was developed with security in mind but is now finding applications in healthcare and retail. Built on the NVIDIA DeepStream and Triton Inference Server SDKs, Asilla’s tech doesn’t just identify risks — it responds to them.

In high-stakes environments, ugo and Asilla’s systems, powered by the NVIDIA Jetson platform, are already in action, identifying potential security threats and triggering real-time responses.

NVIDIA’s infrastructure is also at the heart of Kotoba Technologies’ language tools, as well as AiHUB’s lifelike digital avatars. Running on an AI backbone, these various tools seamlessly bridge media, communication and human interaction.

The Story Behind the Story: UTokyo IPC and Osaka Innovation Hub

All of these startups are part of a larger ecosystem that’s accelerating Japan’s rise as an AI powerhouse.

Leading the charge is UTokyo IPC, the wholly owned venture capital arm of the University of Tokyo, operating through its flagship accelerator program, 1stRound.

Cohosted by 18 universities and four national research institutions, this program serves as the nexus where academia and industry converge, providing hands-on guidance, resources and strategic support.

By championing the real-world deployment of seed-stage deep-tech innovations, UTokyo IPC is igniting Japan’s academic innovation landscape and setting the standard for others to follow.

Meanwhile, Osaka’s own Innovation Hub, OIH, expands this momentum beyond Tokyo, providing startups with coworking spaces and networking events. Its Startup Acceleration Program brings early-stage projects to market faster.

Fast-moving hubs like these are core to Japan’s AI ecosystem, giving startups the mentorship, funding and resources they need to go from prototype to fully commercialized product.

And through NVIDIA’s accelerated computing technologies and the Inception program, Japan’s fast-moving startups are united with AI innovators across the globe.

Image credit: ugo.

Japan Tech Leaders Supercharge Sovereign AI With NVIDIA AI Enterprise and Omniverse

From call centers to factories to hospitals, AI is sweeping Japan.

Undergirding it all: the exceptional resources of the island nation’s world-class universities and global technology leaders such as Fujitsu, The Institute of Science Tokyo, NEC and NTT.

NVIDIA software — NVIDIA AI Enterprise for building and deploying AI agents and NVIDIA Omniverse for bringing AI into the physical world — is playing a crucial role in supporting Japan’s transformation into a global hub for AI development.

The bigger picture: Japan’s journey to AI sovereignty is well underway to support the nation in building, developing and sharing AI innovations at home and across the world.

Japanese AI Pioneers to Power Homegrown Innovation

Putting Japan in a position to become a global AI leader begins with AI-driven language models. Japanese tech leaders are developing advanced AI models that can better interpret Japanese cultural and linguistic nuances.

These models enable developers to build AI applications for industries requiring high-precision outcomes, such as healthcare, finance and manufacturing.

As Japan’s tech giants support AI adoption across the country, they’re using NVIDIA AI Enterprise software.

Fujitsu’s Takane model is specifically built for high-stakes sectors like finance and security.

The model is designed to prioritize security and accuracy with Japanese data, which is crucial for sensitive fields. It excels in both domestic and international Japanese LLM benchmarks for natural Japanese expression and accuracy.

The companies plan to use NVIDIA NeMo for additional fine-tuning, and Fujitsu has tapped NVIDIA to support making Takane available as an NVIDIA NIM to broaden accessibility for the developer community.

NEC’s cotomi model uses NeMo’s parallel processing techniques for efficient model training. It’s already integrated with NEC’s solutions in finance, manufacturing, healthcare and local governments.

NTT Group is moving forward with NTT Communications’ launch of NTT’s large language model “tsuzumi,” which is accelerated with NVIDIA TensorRT-LLM for AI agent customer experiences and use cases such as document summarization.

Meanwhile, startups such as Kotoba Technologies, a Tokyo-based software developer, will unveil its Kotoba-Whisper model, built using NVIDIA NeMo for AI model building.

The transcription application built on the Kotoba-Whisper model performed live transcription during this week’s conversation between SoftBank Chairman and CEO Masayoshi Son and NVIDIA founder and CEO Jensen Huang at NVIDIA AI Summit Japan.

Kotoba Technologies reports that using NeMo’s automatic speech recognition for data preprocessing delivers superior transcription performance.

Kotoba-Whisper is already used in healthcare to create medical records from patient conversations, in customer call centers and for automatic meeting minutes creation across various industries.

These models are used by developers and researchers, especially those focusing on Japanese language AI applications.

Academic Contributions to Japan’s Sovereign AI Vision

Japanese universities, meanwhile, are powering the ongoing transformation with a wave of AI innovations.

Nagoya University’s Ruri-Large, built using NVIDIA’s Nemotron-4 340B — which is also available as a NIM microservice — is a Japanese embedding model. It achieves high document retrieval performance with high-quality synthetic data generated by Nemotron-4 340B, and it enables the enhancement of language model capabilities through retrieval-augmented generation using external, authoritative knowledge bases.
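Retrieval-augmented generation itself reduces to a short loop: embed the query, find the closest document in the knowledge base, and hand both to the language model. The sketch below uses toy bag-of-words embeddings so it runs self-contained; it is an illustration of the pattern, not the Ruri or Nemotron APIs.

```python
import math
import re
from collections import Counter

# Toy retrieval-augmented generation (RAG) sketch. Real systems use a
# learned embedding model such as Ruri; a bag-of-words vector stands in
# here so the retrieval step stays self-contained.

def embed(text):
    """Crude embedding: a bag of lowercase word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

docs = [
    "Parabricks accelerates genomic analysis on GPUs.",
    "Ruri is a Japanese text embedding model.",
]
query = "Which model handles Japanese embedding?"
context = retrieve(query, docs)
prompt = f"Answer using this context: {context}\nQuestion: {query}"
print(context)
```

The retrieved passage is prepended to the prompt, grounding the model’s answer in an external, authoritative source rather than in its parameters alone.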

The National Institute of Informatics will introduce LLM.jp-3-13B-Instruct, a sovereign AI model developed from scratch. Supported by several Japanese government-backed programs, this model underscores the nation’s commitment to self-sufficiency in AI. It’s expected to be available as a NIM microservice soon.

The Institute of Science Tokyo and Japan’s National Institute of Advanced Industrial Science and Technology, better known as AIST, will present the Llama 3.1 Swallow model. Optimized for Japanese tasks, it’s now a NIM microservice that can integrate into generative AI workflows for uses ranging from cultural research to business applications.

The University of Tokyo’s Human Genome Center uses NVIDIA AI Enterprise and NVIDIA Parabricks software for rapid genomic analysis, advancing life sciences and precision medicine.

Japan’s Tech Providers Helping Organizations Adopt AI

In addition, technology providers are working to bring NVIDIA AI technologies of all kinds to organizations across Japan.

Accenture will deploy AI agent solutions based on the Accenture AI Refinery across all industries in Japan, customizing with NVIDIA NeMo and deploying with NVIDIA NIM for a Japanese-specific solution.

Dell Technologies is deploying the Dell AI Factory with NVIDIA globally — with a key focus on the Japanese market — and will support NVIDIA NIM microservices for Japanese enterprises across various industries.

Deloitte will integrate NIM microservices that support leading Japanese language models, including LLM.jp, Kotoba, Ruri-Large, Swallow and more, into its multi-agent solution.

HPE has launched HPE Private Cloud AI platform, supporting NVIDIA AI Enterprise in a private environment. This solution can be tailored for organizations looking to tap into Japan’s sovereign AI NIM microservices, meeting the needs of companies that prioritize data sovereignty while using advanced AI capabilities.

Bringing Physical AI to Industries With NVIDIA Omniverse

The proliferation of language models across academia, startups and enterprises, however, is just the start of Japan’s AI revolution.

A leading maker of industrial robots, a top automaker and a retail giant are all embracing NVIDIA Omniverse and AI, as physics-based simulation drives the next wave of automation.

Industrial automation provider Yaskawa, which has shipped 600,000 robots, is developing adaptive robots for increased autonomy. Yaskawa is now adopting NVIDIA Isaac libraries and AI models to create adaptive robot applications for factory automation and other industries such as food, logistics, medical, agriculture and more.

It’s using NVIDIA Isaac Manipulator, a reference workflow of NVIDIA-accelerated libraries and AI models, to help its developers build AI-enabled manipulators, or robot arms.

It’s also using NVIDIA FoundationPose for precise 6D pose estimation and tracking.

More broadly, NVIDIA and Yaskawa teams use AI-powered simulations and digital twin technology — powered by Omniverse — to accelerate the development and deployment of Yaskawa’s robotic solutions, saving time and resources.

Meanwhile, Toyota is looking into how to build robotic factory lines in Omniverse to improve tasks in robot motion in metal-forging processes.

And another iconic Japanese company, Seven & i Holdings, is using Omniverse to gather insights from video cameras in research to optimize retail and enhance safety.

To learn more, check out our blog on these use cases.


GPU’s Companion: NVIDIA App Supercharges RTX GPUs With AI-Powered Tools and Features

The NVIDIA app — officially releasing today — is a companion platform for content creators, GeForce gamers and AI enthusiasts using GeForce RTX GPUs.

Featuring a GPU control center, the NVIDIA app allows users to access all their GPU settings in one place. From the app, users can do everything from updating to the latest drivers and configuring NVIDIA G-SYNC monitor settings, to tapping AI video enhancements through RTX Video and discovering exclusive AI-powered NVIDIA apps.

In addition, NVIDIA RTX Remix has a new update that improves performance and streamlines workflows.

For a deeper dive on gaming-exclusive benefits, check out the GeForce article.

The GPU’s PC Companion

The NVIDIA app turbocharges GeForce RTX GPUs with a bevy of applications, features and tools.

Keep NVIDIA Studio Drivers up to date — The NVIDIA app automatically notifies users when the latest Studio Driver is available. These graphics drivers, fine-tuned in collaboration with developers, enhance performance in top creative applications and are tested extensively to deliver maximum stability. They’re released once a month.

Discover AI creator apps — Millions have used the NVIDIA Broadcast app to turn offices and dorm rooms into home studios using AI-powered features that improve audio and video quality — without the need for expensive, specialized equipment. It’s user-friendly, works in virtually any app and includes AI features like Noise and Acoustic Echo Removal, Virtual Backgrounds, Eye Contact, Auto Frame, Vignettes and Video Noise Removal.

NVIDIA RTX Remix is a modding platform built on NVIDIA Omniverse that allows users to capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing, including DLSS 3.5 support featuring Ray Reconstruction.

NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscape images. Artists can create backgrounds quickly or speed up concept exploration, enabling them to visualize more ideas.

Enhance video streams with AI — The NVIDIA app includes a System tab as a one-stop destination for display, video and GPU options. It also includes an AI feature called RTX Video that enhances all videos streamed on browsers.

RTX Video Super Resolution uses AI to enhance video streaming on GeForce RTX GPUs by removing compression artifacts and sharpening edges when upscaling.

RTX Video HDR converts any standard dynamic range video into vibrant high dynamic range (HDR) when played in Google Chrome, Microsoft Edge, Mozilla Firefox or the VLC media player. HDR enables more vivid, dynamic colors to enhance gaming and content creation. A compatible HDR10 monitor is required.

Give game streams or video on demand a unique look with AI filters — Content creators looking to elevate their streamed or recorded gaming sessions can access the NVIDIA app’s redesigned Overlay feature with AI-powered game filters.

Freestyle RTX filters allow livestreamers and content creators to apply fun post-processing filters, changing the look and mood of content with tweaks to color and saturation.

Joining these Freestyle RTX game filters is RTX Dynamic Vibrance, which enhances visual clarity on a per-app basis. Colors pop more on screen, and color crushing is minimized to preserve image quality and immersion. The filter is accelerated by Tensor Cores on GeForce RTX GPUs, making it easier for viewers to enjoy all the action.

Enhanced visual clarity with RTX Dynamic Vibrance.

Freestyle RTX filters empower gamers to personalize the visual aesthetics of their favorite games through real-time post-processing filters. This feature boasts compatibility with a vast library of more than 1,200 games.

Download the NVIDIA app today.

RTX Remix 0.6 Release

The new RTX Remix update offers modders significantly improved mod performance, as well as quality of life improvements that help streamline the mod-making process.

RTX Remix now supports the ability to test experimental features under active development. It includes a new Stage Manager that makes it easier to see and change every mesh, texture, light or element in scenes in real time.

To learn more about the RTX Remix 0.6 release, check out the release notes.

With RTX Remix in the NVIDIA app launcher, modders have direct access to Remix’s powerful features. Through the NVIDIA app, RTX Remix modders can benefit from faster start-up times, lower CPU usage and direct control over updates with an optimized user interface.

To the 3D Victor Go the Spoils

NVIDIA Studio in June kicked off a 3D character contest for artists in collaboration with Reallusion, a company that develops 2D and 3D character creation and animation software. Today, we’re celebrating the winners from that contest.

In the category of Best Realistic Character Animation, Robert Lundqvist won for the piece Lisa and Fia.

In the category of Best Stylized Character Animation, Loic Bramoulle won for the piece HellGal.

Both winners will receive an NVIDIA Studio-validated laptop to help further their creative efforts.

View over 250 imaginative and impressive entries here.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Jensen Huang to Discuss AI’s Future with Masayoshi Son at AI Summit Japan

NVIDIA founder and CEO Jensen Huang will join SoftBank Group Chairman and CEO Masayoshi Son in a fireside chat at NVIDIA AI Summit Japan to discuss the transformative role of AI and more.

Taking place on November 12-13, the invite-only event at The Prince Park Tower in Tokyo’s Minato district will gather industry leaders to explore advancements in generative AI, robotics and industrial digitalization.

Call to action: Tickets for the event are sold out, but tune in via livestream or watch on-demand sessions.

Over 50 sessions and live demos will showcase innovations from NVIDIA and its partners, covering everything from large language models, known as LLMs, to AI-powered robotics and digital twins.

Huang and Son will discuss AI’s transformative role and the efforts driving the AI field.

Son has invested in companies around the world that show potential for AI-driven growth through SoftBank Vision Funds. Huang has steered NVIDIA’s rise to a global leader in AI and accelerated computing.

One major topic: Japan’s AI infrastructure initiative, supported by NVIDIA and local firms. This investment is central to the country’s AI ambitions.

Leaders from METI and experts like Shunsuke Aoki from Turing Inc. will dig into how sovereign AI fosters innovation and strengthens Japan’s technological independence.

On Wednesday, November 13, two key sessions will offer deeper insights into Japan’s AI journey:

  • The Present and Future of Generative AI in Japan: Professor Yutaka Matsuo of the University of Tokyo will explore the advances of generative AI and its impact on policy and business strategy. Expect discussions on the opportunities and challenges Japan faces as it pushes forward with AI innovations.
  • Sovereign AI and Its Role in Japan’s Future: A panel of four experts will dive into the concept of sovereign AI. Speakers like Takuya Watanabe of METI and Hironobu Tamba of SoftBank will discuss how sovereign AI can accelerate business strategies and strengthen Japan’s technological independence.

These sessions highlight how Japan is positioning itself at the forefront of AI development. Practical insights into the next wave of AI innovation and policy are on the agenda.

Experts from Sakana AI, Sony, Tokyo Science University and Yaskawa Electric will be among those presenting breakthroughs across sectors like healthcare, robotics and data centers.

The summit will also feature hands-on workshops, including a full-day session on Tuesday, November 12, titled “Building RAG Agents with LLM.”

Led by NVIDIA experts, this workshop will offer practical experience in developing retrieval-augmented generation, or RAG, agents using large language models.

With its mix of forward-looking discussions and real-world applications, the NVIDIA AI Summit Tokyo will highlight Japan’s ongoing advancements in AI and its contributions to the global AI landscape.

Tune in to the fireside chat between Son and Huang via livestream or watch on-demand sessions.

Welcome to GeForce NOW Performance: Priority Members Get Instant Upgrade

This GFN Thursday, the GeForce NOW Priority membership is getting enhancements and a fresh name to go along with it. The new Performance membership offers more GeForce-powered premium gaming — at no change in the monthly membership cost.

Gamers having a hard time deciding between the Performance and Ultimate memberships can take them both for a spin with a Day Pass, now 25% off for a limited time. Day Passes give access to 24 continuous hours of powerful cloud gaming.

In addition, seven new games are available this week, joining the over 2,000 games in the GeForce NOW library.

Time for a Glow Up

The Performance membership keeps all the same great gaming benefits and now provides members with an enhanced streaming experience at no additional cost.

Performance membership on GeForce NOW
Say hello to the Performance membership.

Performance members can stream at up to 1440p — an increase from the previous 1080p resolution — and experience games in immersive, ultrawide resolutions. They can also save their in-game graphics settings across streaming sessions, including for NVIDIA RTX features in supported titles.

All current Priority members are automatically upgraded to Performance and can take advantage of the upgraded streaming experience today.

Performance members will connect to GeForce RTX-powered gaming rigs for up to 1440p resolution. Ultimate members continue to receive the top streaming experience: connecting to GeForce RTX 4080-powered gaming rigs with up to 4K resolution and 120 frames per second, or 1080p and 240 fps in Competitive mode for games with support for NVIDIA Reflex technology.

Gamers playing on the free tier will now see they’re streaming from basic rigs, with varying specs that offer entry-level cloud gaming and are optimized for capacity.

Account portal on GeForce NOW
Time to play.

At the start of next year, GeForce NOW will roll out a 100-hour monthly playtime allowance to continue providing exceptional quality and speed — as well as shorter queue times — for Performance and Ultimate members. This ample limit comfortably accommodates 94% of members, who typically enjoy the service well within this timeframe. Members can check out how much time they’ve spent in the cloud through their account portal (see screenshot example above).

Up to 15 hours of unused playtime will automatically roll over to the next month for members, and additional hours can be purchased at $2.99 for 15 additional hours of Performance, or $5.99 for 15 additional Ultimate hours.
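The allowance and rollover rules above reduce to simple arithmetic. The sketch below is illustrative only; the function and constant names are not part of any GeForce NOW API.

```python
# Illustrative sketch of the playtime-allowance rules described above.
# All names are hypothetical, not part of any GeForce NOW API.

MONTHLY_ALLOWANCE = 100  # hours granted each month
MAX_ROLLOVER = 15        # unused hours that may carry into the next month

def next_month_allowance(hours_played: float) -> float:
    """Return next month's total allowance given this month's usage."""
    unused = max(MONTHLY_ALLOWANCE - hours_played, 0)
    return MONTHLY_ALLOWANCE + min(unused, MAX_ROLLOVER)

# A member who played 92 hours rolls over 8 unused hours.
print(next_month_allowance(92))  # 108
# Lighter users still roll over at most 15 hours.
print(next_month_allowance(40))  # 115
```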

Loyal Member Benefit

To thank the GFN community for joining the cloud gaming revolution, GeForce NOW is offering active paid members as of Dec. 31, 2024, the ability to continue with unlimited playtime for a full year until January 2026.

New members can lock in this feature by signing up for GeForce NOW before Dec. 31, 2024. As long as a member’s account remains uninterrupted and in good standing, they’ll continue to receive unlimited playtime for all of 2025.

Don’t Pass This Up

For those looking to try out the new premium benefits and all Performance and Ultimate memberships have to offer, Day Passes are 25% off for a limited time.

Whether with the newly named Performance Day Pass at $2.99 or the Ultimate Day Pass at $5.99, members can unlock 24 hours of uninterrupted access to powerful NVIDIA GeForce RTX-powered cloud gaming servers.

Another new GeForce NOW feature lets users apply the value of their most recently purchased Day Pass toward any monthly membership if they sign up within 48 hours of the completion of their Day Pass.
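The 48-hour credit rule can be expressed as a small function. This is a hypothetical illustration of the rule as described; the names and the $10.00 membership price in the example are placeholders, not published GeForce NOW pricing.

```python
# Hypothetical sketch of the Day Pass credit rule described above: the most
# recent Day Pass counts toward a membership if the member signs up within
# 48 hours of the pass ending. Names and prices are placeholders.
from datetime import datetime, timedelta

CREDIT_WINDOW = timedelta(hours=48)

def membership_price_after_credit(membership_price: float,
                                  day_pass_price: float,
                                  pass_ended: datetime,
                                  signed_up: datetime) -> float:
    """Apply the Day Pass value as credit when signup falls in the window."""
    if timedelta(0) <= signed_up - pass_ended <= CREDIT_WINDOW:
        return max(membership_price - day_pass_price, 0.0)
    return membership_price

t0 = datetime(2024, 12, 1, 12, 0)
# Signing up a day after a $2.99 Day Pass ends keeps the credit.
print(round(membership_price_after_credit(10.0, 2.99, t0,
                                          t0 + timedelta(hours=24)), 2))  # 7.01
```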

Day Pass Sale on GeForce NOW
Quarter the price, full day of fun.

Dive into a vast library of over 2,000 games with enhanced graphics, including NVIDIA RTX features like ray tracing and DLSS. With the Ultimate Day Pass, snag a taste of GeForce NOW’s highest-performing membership tier and enjoy up to 4K resolution 120 fps or 1080p 240 fps across nearly any device. It’s an ideal way to experience elevated GeForce gaming in the cloud.

Thrilling New Games

Members can look for the following games available to stream in the cloud this week:

  • Planet Coaster 2 (New release on Steam, Nov. 6)
  • Teenage Mutant Ninja Turtles: Splintered Fate (New release on Steam, Nov. 6)
  • Empire of the Ants (New release on Steam, Nov. 7)
  • Unrailed 2: Back on Track (New release on Steam, Nov. 7)
  • TCG Card Shop Simulator (Steam)
  • StarCraft II (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)
  • StarCraft Remastered (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)

What are you planning to play this weekend? Let us know on X or in the comments below.

NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools

Robotics developers can greatly accelerate their work on AI-enabled robots, including humanoids, using new AI and simulation tools and workflows that NVIDIA revealed this week at the Conference for Robot Learning (CoRL) in Munich, Germany.

The lineup includes the general availability of the NVIDIA Isaac Lab robot learning framework; six new humanoid robot learning workflows for Project GR00T, an initiative to accelerate humanoid robot development; and new world-model development tools for video data curation and processing, including the NVIDIA Cosmos tokenizer and NVIDIA NeMo Curator for video processing.

The open-source Cosmos tokenizer provides robotics developers with superior visual tokenization by breaking down images and videos into high-quality tokens with exceptionally high compression rates. It runs up to 12x faster than current tokenizers, while NeMo Curator curates video data up to 7x faster than unoptimized pipelines.

Also timed with CoRL, NVIDIA presented 23 papers and nine workshops related to robot learning and released training and workflow guides for developers. Further, Hugging Face and NVIDIA announced they’re collaborating to accelerate open-source robotics research with LeRobot, NVIDIA Isaac Lab and NVIDIA Jetson for the developer community.

Accelerating Robot Development With Isaac Lab 

NVIDIA Isaac Lab is an open-source robot learning framework built on NVIDIA Omniverse, a platform for developing OpenUSD applications for industrial digitalization and physical AI simulation.

Developers can use Isaac Lab to train robot policies at scale. This open-source unified robot learning framework applies to any embodiment — from humanoids to quadrupeds to collaborative robots — to handle increasingly complex movements and interactions.

Leading commercial robot makers, robotics application developers and robotics research entities around the world are adopting Isaac Lab, including 1X, Agility Robotics, The AI Institute, Berkeley Humanoid, Boston Dynamics, Field AI, Fourier, Galbot, Mentee Robotics, Skild AI, Swiss-Mile, Unitree Robotics and XPENG Robotics.

Project GR00T: Foundations for General-Purpose Humanoid Robots 

Building advanced humanoids is extremely difficult, demanding multilayer technological and interdisciplinary approaches to make the robots perceive, move and learn skills effectively for human-robot and robot-environment interactions.

Project GR00T is an initiative to develop accelerated libraries, foundation models and data pipelines that speed the work of the global humanoid robot developer ecosystem.

Six new Project GR00T workflows provide humanoid developers with blueprints to realize the most challenging humanoid robot capabilities. They include:

  • GR00T-Gen for building generative AI-powered, OpenUSD-based 3D environments
  • GR00T-Mimic for robot motion and trajectory generation
  • GR00T-Dexterity for robot dexterous manipulation
  • GR00T-Control for whole-body control
  • GR00T-Mobility for robot locomotion and navigation
  • GR00T-Perception for multimodal sensing

“Humanoid robots are the next wave of embodied AI,” said Jim Fan, senior research manager of embodied AI at NVIDIA. “NVIDIA research and engineering teams are collaborating across the company and our developer ecosystem to build Project GR00T to help advance the progress and development of global humanoid robot developers.”

New Development Tools for World Model Builders

Today, robot developers are building world models — AI representations of the world that can predict how objects and environments respond to a robot’s actions. Building these world models is incredibly compute- and data-intensive, with models requiring thousands of hours of real-world, curated image or video data.

NVIDIA Cosmos tokenizers provide efficient, high-quality encoding and decoding to simplify the development of these world models. They set a new standard for minimal distortion and temporal instability, enabling high-quality video and image reconstructions.

Providing high-quality compression and up to 12x faster visual reconstruction, the Cosmos tokenizer paves the path for scalable, robust and efficient development of generative applications across a broad spectrum of visual domains.
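As a rough intuition for what visual tokenization buys, the toy sketch below counts tokens for simple patch-based tokenization. It is not the Cosmos tokenizer, whose architecture and compression scheme are far more sophisticated; it only illustrates why fewer tokens per frame matters.

```python
# Toy illustration of visual tokenization compression (not the Cosmos
# tokenizer itself): each patch x patch block of an image is represented
# by one discrete token, shrinking the representation the model must process.

def token_count(height: int, width: int, patch: int) -> int:
    """Number of tokens when each patch x patch block becomes one token."""
    return (height // patch) * (width // patch)

def compression_ratio(height: int, width: int, patch: int) -> float:
    """Pixels represented per token."""
    return (height * width) / token_count(height, width, patch)

# A 512x512 frame with 16x16 patches yields 1,024 tokens,
# i.e., 256 pixels per token.
print(token_count(512, 512, 16))        # 1024
print(compression_ratio(512, 512, 16))  # 256.0
```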

1X, a humanoid robot company, has updated the 1X World Model Challenge dataset to use the Cosmos tokenizer.

“NVIDIA Cosmos tokenizer achieves really high temporal and spatial compression of our data while still retaining visual fidelity,” said Eric Jang, vice president of AI at 1X Technologies. “This allows us to train world models with long horizon video generation in an even more compute-efficient manner.”

Other humanoid and general-purpose robot developers, including XPENG Robotics and Hillbot, are developing with the NVIDIA Cosmos tokenizer to manage high-resolution images and videos.

NeMo Curator now includes a video processing pipeline. This enables robot developers to improve their world-model accuracy by processing large-scale text, image and video data.

Curating video data poses challenges due to its massive size, requiring scalable pipelines and efficient orchestration for load balancing across GPUs. Additionally, models for filtering, captioning and embedding need optimization to maximize throughput.

NeMo Curator overcomes these challenges by streamlining data curation with automatic pipeline orchestration, reducing processing time significantly. It supports linear scaling across multi-node, multi-GPU systems, efficiently handling over 100 petabytes of data. This simplifies AI development, reduces costs and accelerates time to market.
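The staged curation described above (filter, caption, embed, orchestrated in order) can be sketched as a simple pipeline. All names and stages here are hypothetical illustrations, not the NeMo Curator API.

```python
# Hypothetical sketch of a staged video-curation pipeline like the one
# described above. Stage names and data shapes are illustrative only;
# this is not the NeMo Curator API.

def filter_clips(clips):
    """Drop clips that fail a simple quality check."""
    return [c for c in clips if c["duration_s"] >= 2.0]

def caption_clips(clips):
    """Attach a placeholder caption to each surviving clip."""
    return [{**c, "caption": f"clip of {c['name']}"} for c in clips]

def embed_clips(clips):
    """Attach a toy embedding (here: just the duration) to each clip."""
    return [{**c, "embedding": [c["duration_s"]]} for c in clips]

def curate(clips):
    """Run the stages in order, as a pipeline orchestrator would."""
    for stage in (filter_clips, caption_clips, embed_clips):
        clips = stage(clips)
    return clips

raw = [{"name": "robot_arm", "duration_s": 5.0},
       {"name": "too_short", "duration_s": 0.5}]
print([c["name"] for c in curate(raw)])  # ['robot_arm']
```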

Advancing the Robot Learning Community at CoRL

The nearly two dozen research papers the NVIDIA robotics team released with CoRL cover breakthroughs in integrating vision language models for improved environmental understanding and task execution, temporal robot navigation, developing long-horizon planning strategies for complex multistep tasks and using human demonstrations for skill acquisition.

Groundbreaking papers for humanoid robot control and synthetic data generation include SkillGen, a system based on synthetic data generation for training robots with minimal human demonstrations, and HOVER, a robot foundation model for controlling humanoid robot locomotion and manipulation.

NVIDIA researchers will also be participating in nine workshops at the conference. Learn more about the full schedule of events.

Availability

NVIDIA Isaac Lab 1.2 is available now and is open source on GitHub. NVIDIA Cosmos tokenizer is available now on GitHub and Hugging Face. NeMo Curator for video processing will be available at the end of the month.

The new NVIDIA Project GR00T workflows are coming soon to help robot companies build humanoid robot capabilities with greater ease. Read more about the workflows on the NVIDIA Technical Blog.

Researchers and developers learning to use Isaac Lab can now access developer guides and tutorials, including an Isaac Gym to Isaac Lab migration guide.

Discover the latest in robot learning and simulation in an upcoming OpenUSD insider livestream on Nov. 13, and attend the NVIDIA Isaac Lab office hours for hands-on support and insights.

Developers can apply to join the NVIDIA Humanoid Robot Developer Program.

Hugging Face and NVIDIA to Accelerate Open-Source AI Robotics Research and Development

At the Conference for Robot Learning (CoRL) in Munich, Germany, Hugging Face and NVIDIA announced a collaboration to accelerate robotics research and development by bringing together their open-source robotics communities.

Hugging Face’s LeRobot open AI platform combined with NVIDIA AI, Omniverse and Isaac robotics technology will enable researchers and developers to drive advances across a wide range of industries, including manufacturing, healthcare and logistics.

Open-Source Robotics for the Era of Physical AI

The era of physical AI — robots understanding physical properties of environments — is here, and it’s rapidly transforming the world’s industries.

To drive and sustain this rapid innovation, robotics researchers and developers need access to open-source, extensible frameworks that span the development process of robot training, simulation and inference. With models, datasets and workflows released under shared frameworks, the latest advances are readily available for use without the need to recreate code.

Hugging Face’s leading open AI platform serves more than 5 million machine learning researchers and developers, offering tools and resources to streamline AI development. Hugging Face users can access and fine-tune the latest pretrained models and build AI pipelines on common APIs with over 1.5 million models, datasets and applications freely accessible on the Hugging Face Hub.

LeRobot, developed by Hugging Face, extends the successful paradigms from its Transformers and Diffusers libraries into the robotics domain. LeRobot offers a comprehensive suite of tools for data collection, model training and simulation environments, along with designs for low-cost manipulator kits.

NVIDIA’s AI and simulation technologies and modular, open-source robot learning frameworks such as NVIDIA Isaac Lab can accelerate LeRobot’s data collection, training and verification workflows. Researchers and developers can share their models and datasets built with LeRobot and Isaac Lab, creating a data flywheel for the robotics community.

Scaling Robot Development With Simulation

Developing physical AI is challenging. Unlike language models that use extensive internet text data, physics-based robotics relies on physical interaction data along with vision sensors, which is harder to gather at scale. Collecting real-world robot data for dexterous manipulation across a large number of tasks and environments is time-consuming and labor-intensive.

Making this easier, Isaac Lab, built on NVIDIA Isaac Sim, enables robot training by demonstration or trial-and-error in simulation using high-fidelity rendering and physics simulation to create realistic synthetic environments and data. By combining GPU-accelerated physics simulations and parallel environment execution, Isaac Lab provides the ability to generate vast amounts of training data — equivalent to thousands of real-world experiences — from a single demonstration.

Generated motion data is then used to train a policy with imitation learning. After successful training and validation in simulation, the policies are deployed on a real robot, where they are further tested and tuned to achieve optimal performance.
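The imitation-learning step can be illustrated with a bare-bones behavior-cloning sketch: fit a policy to (state, action) pairs collected from demonstrations. Real pipelines train neural network policies on high-dimensional observations; the linear policy below only shows the structure of learning from demonstration data.

```python
# Minimal behavior-cloning sketch of the imitation-learning step described
# above: fit a linear policy action = w * state to demonstration pairs.
# Purely illustrative; real robot policies are neural networks.

def fit_linear_policy(demos):
    """Least-squares fit of action = w * state over (state, action) pairs."""
    num = sum(s * a for s, a in demos)
    den = sum(s * s for s, _ in demos)
    return num / den

def policy(w, state):
    """Apply the learned policy to a new state."""
    return w * state

# Demonstrations in which the expert always doubles the state.
demos = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = fit_linear_policy(demos)
print(policy(w, 5.0))  # 10.0
```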

This iterative process leverages real-world data’s accuracy and the scalability of simulated synthetic data, ensuring robust and reliable robotic systems.

By sharing these datasets, policies and models on Hugging Face, a robot data flywheel is created that enables developers and researchers to build upon each other’s work, accelerating progress in the field.

“The robotics community thrives when we build together,” said Animesh Garg, assistant professor at Georgia Tech. “By embracing open-source frameworks such as Hugging Face’s LeRobot and NVIDIA Isaac Lab, we accelerate the pace of research and innovation in AI-powered robotics.”

Fostering Collaboration and Community Engagement

The planned collaborative workflow involves collecting data through teleoperation and simulation in Isaac Lab and storing it in the standard LeRobotDataset format. Data generated using GR00T-Mimic will then be used to train a robot policy with imitation learning, which is subsequently evaluated in simulation. Finally, the validated policy is deployed on real-world robots with NVIDIA Jetson for real-time inference.

The first steps in this collaboration have already been taken: the teams have demonstrated a physical picking setup with LeRobot software running on NVIDIA Jetson Orin Nano, a powerful, compact compute platform for deployment.

“Combining Hugging Face open-source community with NVIDIA’s hardware and Isaac Lab simulation has the potential to accelerate innovation in AI for robotics,” said Remi Cadene, principal research scientist at LeRobot.

This work builds on NVIDIA’s community contributions in generative AI at the edge, supporting the latest open models and libraries, such as Hugging Face Transformers, and optimizing inference for large language models (LLMs), small language models (SLMs) and multimodal vision-language models (VLMs), along with their action-based variants, vision language action models (VLAs), as well as diffusion policies and speech models, all with strong, community-driven support.

Together, Hugging Face and NVIDIA aim to accelerate the work of the global ecosystem of robotics researchers and developers transforming industries ranging from transportation to manufacturing and logistics.

Learn about NVIDIA’s robotics research papers at CoRL, including VLM integration for better environmental understanding, temporal navigation and long-horizon planning. Check out workshops at CoRL with NVIDIA researchers.

Get Plugged In: How to Use Generative AI Tools in Obsidian

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.

As generative AI evolves and accelerates industry, a community of AI enthusiasts is experimenting with ways to integrate the powerful technology into common productivity workflows.

Applications that support community plug-ins give users the power to explore how large language models (LLMs) can enhance a variety of workflows. By using local inference servers powered by the NVIDIA RTX-accelerated llama.cpp software library, users on RTX AI PCs can integrate local LLMs with ease.

Previously, we looked at how users can take advantage of Leo AI in the Brave web browser to optimize the web browsing experience. Today, we look at Obsidian, a popular writing and note-taking application, based on the Markdown markup language, that’s useful for keeping complex and linked records for multiple projects. The app supports community-developed plug-ins that bring additional functionality, including several that enable users to connect Obsidian to a local inferencing server like Ollama or LM Studio.

Using Obsidian and LM Studio to generate notes with a 27B-parameter LLM accelerated by RTX.

Connecting Obsidian to LM Studio only requires enabling the local server functionality in LM Studio by clicking on the “Developer” icon on the left panel, loading any downloaded model, enabling the CORS toggle and clicking “Start.” Take note of the chat completion URL from the “Developer” log console (“http://localhost:1234/v1/chat/completions” by default), as the plug-ins will need this information to connect.
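For readers who want to hit the same endpoint from code, here is a minimal Python sketch against LM Studio’s OpenAI-compatible local server, using the default URL noted above. The model name is the example used for the plug-in settings later in this post; substitute whichever model is loaded.

```python
# Minimal sketch of calling LM Studio's OpenAI-compatible local server.
# Start the server in LM Studio first; the URL below is its default.
import json
from urllib import request

URL = "http://localhost:1234/v1/chat/completions"

def build_payload(prompt: str, model: str = "gemma-2-27b-instruct") -> dict:
    """Standard OpenAI-style chat completion request body."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = request.Request(
        URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With LM Studio's server running:
# print(ask("Summarize my last note in one sentence."))
```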

Next, launch Obsidian and open the “Settings” panel. Click “Community plug-ins” and then “Browse.” There are several community plug-ins related to LLMs, but two popular options are Text Generator and Smart Connections.

  • Text Generator is helpful for generating content in an Obsidian vault, like notes and summaries on a research topic.
  • Smart Connections is useful for asking questions about the contents of an Obsidian vault, such as the answer to an obscure trivia question previously saved years ago.

Each plug-in has its own way of entering the local server URL.

For Text Generator, open the settings and select “Custom” for “Provider profile” and paste the whole URL into the “Endpoint” field. For Smart Connections, configure the settings after starting the plug-in. In the settings panel on the right side of the interface, select “Custom Local (OpenAI Format)” for the model platform. Then, enter the URL and the model name (e.g., “gemma-2-27b-instruct”) into their respective fields as they appear in LM Studio.

Once the fields are filled in, the plug-ins will function. The LM Studio user interface will also show logged activity if users are curious about what’s happening on the local server side.

Transforming Workflows With Obsidian AI Plug-Ins

Both the Text Generator and Smart Connections plug-ins use generative AI in compelling ways.

For example, imagine a user wants to plan a vacation to the fictitious destination of Lunar City and brainstorm ideas for what to do there. The user would start a new note, titled “What to Do in Lunar City.” Since Lunar City is not a real place, the query sent to the LLM will need to include a few extra instructions to guide the responses. Click the Text Generator plug-in icon, and the model will generate a list of activities to do during the trip.

Obsidian, via the Text Generator plug-in, will request LM Studio to generate a response, and in turn LM Studio will run the Gemma 2 27B model. With RTX GPU acceleration in the user’s computer, the model can quickly generate a list of things to do.

The Text Generator community plug-in in Obsidian enables users to connect to an LLM in LM Studio and generate notes for an imaginary vacation.

Or, suppose many years later the user’s friend is going to Lunar City and wants to know where to eat. The user may not remember the names of the places where they ate, but they can check the notes in their vault (Obsidian’s term for a collection of notes) in case they’d written something down.

Rather than looking through all of the notes manually, a user can use the Smart Connections plug-in to ask questions about their vault of notes and other content. The plug-in uses the same LM Studio server to respond to the request, and provides relevant information it finds from the user’s notes to assist the process. The plug-in does this using a technique called retrieval-augmented generation.
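A bare-bones version of that retrieval step can be sketched as follows. Real plug-ins like Smart Connections use embedding models to rank notes; simple keyword overlap stands in here to keep the example self-contained.

```python
# Bare-bones sketch of the retrieval-augmented generation idea: find the
# vault notes most relevant to a question and pass them to the LLM along
# with it. Keyword overlap stands in for the embedding search a real
# plug-in would use.

def score(question: str, note: str) -> int:
    """Count question words that also appear in the note."""
    return len(set(question.lower().split()) & set(note.lower().split()))

def retrieve(question: str, notes: list[str], k: int = 1) -> list[str]:
    """Return the k notes with the highest overlap score."""
    return sorted(notes, key=lambda n: score(question, n), reverse=True)[:k]

def build_prompt(question: str, notes: list[str]) -> str:
    """Prepend the retrieved notes as context for the LLM."""
    context = "\n".join(retrieve(question, notes))
    return f"Context:\n{context}\n\nQuestion: {question}"

vault = ["We ate at the crater cafe in Lunar City.",
         "Packing list: boots, helmet, sunscreen."]
print(retrieve("Where did we eat in Lunar City?", vault))
```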

The Smart Connections community plug-in in Obsidian uses retrieval-augmented generation and a connection to LM Studio to enable users to query their notes.

These are fun examples, but after spending some time with these capabilities, users can see the real benefits and improvements for everyday productivity. Obsidian plug-ins are just two of the ways community developers and AI enthusiasts are embracing AI to supercharge their PC experiences.

NVIDIA GeForce RTX technology for Windows PCs can run thousands of open-source models for developers to integrate into their Windows apps.

Learn more about the power of LLMs, Text Generator and Smart Connections by integrating Obsidian into your workflow, and play with the accelerated experience available on RTX AI PCs.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.
