Japan Tech Leaders Supercharge Sovereign AI With NVIDIA AI Enterprise and Omniverse

From call centers to factories to hospitals, AI is sweeping Japan.

Undergirding it all: the exceptional resources of the island nation’s world-class universities and global technology leaders such as Fujitsu, The Institute of Science Tokyo, NEC and NTT.

NVIDIA software — NVIDIA AI Enterprise for building and deploying AI agents and NVIDIA Omniverse for bringing AI into the physical world — is playing a crucial role in supporting Japan’s transformation into a global hub for AI development.

The bigger picture: Japan’s journey to AI sovereignty is well underway, positioning the nation to build, develop and share AI innovations at home and around the world.

Japanese AI Pioneers to Power Homegrown Innovation

Putting Japan in a position to become a global AI leader begins with AI-driven language models. Japanese tech leaders are developing advanced AI models that can better interpret Japanese cultural and linguistic nuances.

These models enable developers to build AI applications for industries requiring high-precision outcomes, such as healthcare, finance and manufacturing.

As Japan’s tech giants support AI adoption across the country, they’re using NVIDIA AI Enterprise software.

Fujitsu’s Takane model is specifically built for high-stakes sectors like finance and security.

The model is designed to prioritize security and accuracy with Japanese data, which is crucial for sensitive fields. It excels in both domestic and international Japanese LLM benchmarks for natural Japanese expression and accuracy.

Fujitsu and NVIDIA plan to use NVIDIA NeMo for additional fine-tuning, and Fujitsu has tapped NVIDIA to help make Takane available as an NVIDIA NIM microservice to broaden accessibility for the developer community.
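
For a sense of how developers could consume such a model once it’s packaged as a NIM microservice: NIM exposes an OpenAI-compatible API, so a standard client works. In this hedged sketch, the base URL and the "fujitsu/takane" model id are placeholders, not published deployment details.

```python
# Minimal sketch: querying a NIM microservice through its OpenAI-compatible
# API. The endpoint and model id below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="fujitsu/takane",  # hypothetical model id
    messages=[{"role": "user", "content": "契約書の要点を日本語で要約してください。"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```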

NEC’s cotomi model uses NeMo’s parallel processing techniques for efficient model training. It’s already integrated with NEC’s solutions in finance, manufacturing, healthcare and local governments.

NTT Group is moving forward with NTT Communications’ launch of the “tsuzumi” large language model, which is accelerated with NVIDIA TensorRT-LLM for AI agent customer experiences and use cases such as document summarization.

Meanwhile, startups are adding to the momentum. Kotoba Technologies, a Tokyo-based software developer, will unveil its Kotoba-Whisper model, built with the NVIDIA NeMo framework.

The transcription application built on the Kotoba-Whisper model performed live transcription during this week’s conversation between SoftBank Chairman and CEO Masayoshi Son and NVIDIA founder and CEO Jensen Huang at NVIDIA AI Summit Japan.

Kotoba Technologies reports that using NeMo’s automatic speech recognition for data preprocessing delivers superior transcription performance.

Kotoba-Whisper is already used in healthcare to create medical records from patient conversations, in customer call centers and for automatic meeting minutes creation across various industries.

These models are used by developers and researchers, especially those focusing on Japanese language AI applications.

Academic Contributions to Japan’s Sovereign AI Vision

Japanese universities, meanwhile, are powering the ongoing transformation with a wave of AI innovations.

Nagoya University’s Ruri-Large is a Japanese embedding model built using NVIDIA Nemotron-4 340B, which is also available as a NIM microservice. Trained with high-quality synthetic data generated by Nemotron-4 340B, it achieves strong document-retrieval performance and can enhance language model capabilities through retrieval-augmented generation (RAG) over external, authoritative knowledge bases.
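
To illustrate the retrieval step an embedding model like this enables, here is a minimal sketch using the sentence-transformers library. The Hugging Face model id "cl-nagoya/ruri-large" and the Japanese query/passage prefixes are assumptions to verify against the model card.

```python
# Illustrative retrieval step for RAG with a Japanese embedding model.
# Model id and prefix convention are assumptions; check the model card.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("cl-nagoya/ruri-large")

passages = [
    "文章: 東京科学大学は2024年に発足した。",
    "文章: 富士山の標高は3776メートルである。",
]
query = "クエリ: 富士山の高さは?"

doc_vecs = model.encode(passages, normalize_embeddings=True)
q_vec = model.encode(query, normalize_embeddings=True)

scores = doc_vecs @ q_vec  # cosine similarity on normalized vectors
print(passages[int(np.argmax(scores))])
```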

The National Institute of Informatics will introduce LLM.jp-3-13B-Instruct, a sovereign AI model developed from scratch. Supported by several Japanese government-backed programs, this model underscores the nation’s commitment to self-sufficiency in AI. It’s expected to be available as a NIM microservice soon.

The Institute of Science Tokyo and Japan’s National Institute of Advanced Industrial Science and Technology, better known as AIST, will present the Llama 3.1 Swallow model. Optimized for Japanese tasks, it’s now a NIM microservice that can integrate into generative AI workflows for uses ranging from cultural research to business applications.

The University of Tokyo’s Human Genome Center uses NVIDIA AI Enterprise and NVIDIA Parabricks software for rapid genomic analysis, advancing life sciences and precision medicine.

Japan’s Tech Providers Helping Organizations Adopt AI

In addition, technology providers are working to bring NVIDIA AI technologies of all kinds to organizations across Japan.

Accenture will deploy AI agent solutions based on the Accenture AI Refinery across all industries in Japan, customizing with NVIDIA NeMo and deploying with NVIDIA NIM for a Japanese-specific solution.

Dell Technologies is deploying the Dell AI Factory with NVIDIA globally — with a key focus on the Japanese market — and will support NVIDIA NIM microservices for Japanese enterprises across various industries.

Deloitte will integrate NIM microservices supporting leading Japanese language models, including LLM.jp, Kotoba, Ruri-Large and Swallow, into its multi-agent solution.

HPE has launched the HPE Private Cloud AI platform, which supports NVIDIA AI Enterprise in a private environment. The solution can be tailored for organizations looking to tap into Japan’s sovereign AI NIM microservices, meeting the needs of companies that prioritize data sovereignty while using advanced AI capabilities.

Bringing Physical AI to Industries With NVIDIA Omniverse

The proliferation of language models across academia, startups and enterprises, however, is just the start of Japan’s AI revolution.

A leading maker of industrial robots, a top automaker and a retail giant are all embracing NVIDIA Omniverse and AI, as physics-based simulation drives the next wave of automation.

Industrial automation provider Yaskawa, which has shipped 600,000 robots, is developing adaptive robots for increased autonomy. Yaskawa is now adopting NVIDIA Isaac libraries and AI models to create adaptive robot applications for factory automation and for industries such as food, logistics, medical and agriculture.

It’s using NVIDIA Isaac Manipulator, a reference workflow of NVIDIA-accelerated libraries and AI models, to help its developers build AI-enabled manipulators, or robot arms.

It’s also using NVIDIA FoundationPose for precise 6D pose estimation and tracking.

More broadly, NVIDIA and Yaskawa teams use AI-powered simulations and digital twin technology — powered by Omniverse — to accelerate the development and deployment of Yaskawa’s robotic solutions, saving time and resources.

Meanwhile, Toyota is exploring how to build robotic factory lines in Omniverse to improve robot motion in metal-forging processes.

And another iconic Japanese company, Seven & i Holdings, is using Omniverse in research to gather insights from video cameras, with the goals of optimizing retail operations and enhancing safety.

To learn more, check out our blog on these use cases.

GPU’s Companion: NVIDIA App Supercharges RTX GPUs With AI-Powered Tools and Features

The NVIDIA app — officially releasing today — is a companion platform for content creators, GeForce gamers and AI enthusiasts using GeForce RTX GPUs.

Featuring a GPU control center, the NVIDIA app allows users to access all their GPU settings in one place. From the app, users can do everything from updating to the latest drivers and configuring NVIDIA G-SYNC monitor settings, to tapping AI video enhancements through RTX Video and discovering exclusive AI-powered NVIDIA apps.

In addition, NVIDIA RTX Remix has a new update that improves performance and streamlines workflows.

For a deeper dive on gaming-exclusive benefits, check out the GeForce article.

The GPU’s PC Companion

The NVIDIA app turbocharges GeForce RTX GPUs with a bevy of applications, features and tools.

Keep NVIDIA Studio Drivers up to date — The NVIDIA app automatically notifies users when the latest Studio Driver is available. These graphics drivers, fine-tuned in collaboration with developers, enhance performance in top creative applications and are tested extensively to deliver maximum stability. They’re released once a month.

Discover AI creator apps — Millions have used the NVIDIA Broadcast app to turn offices and dorm rooms into home studios using AI-powered features that improve audio and video quality — without the need for expensive, specialized equipment. It’s user-friendly, works in virtually any app and includes AI features like Noise and Acoustic Echo Removal, Virtual Backgrounds, Eye Contact, Auto Frame, Vignettes and Video Noise Removal.

NVIDIA RTX Remix is a modding platform built on NVIDIA Omniverse that allows users to capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing, including DLSS 3.5 support featuring Ray Reconstruction.

NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscape images. Artists can create backgrounds quickly or speed up concept exploration, enabling them to visualize more ideas.

Enhance video streams with AI — The NVIDIA app includes a System tab as a one-stop destination for display, video and GPU options. It also includes an AI feature called RTX Video that enhances videos streamed in browsers.

RTX Video Super Resolution uses AI to enhance video streaming on GeForce RTX GPUs by removing compression artifacts and sharpening edges when upscaling.

RTX Video HDR converts any standard dynamic range video into vibrant high dynamic range (HDR) when played in Google Chrome, Microsoft Edge, Mozilla Firefox or the VLC media player. HDR enables more vivid, dynamic colors to enhance gaming and content creation. A compatible HDR10 monitor is required.

Give game streams or video on demand a unique look with AI filters — Content creators looking to elevate their streamed or recorded gaming sessions can access the NVIDIA app’s redesigned Overlay feature with AI-powered game filters.

Freestyle RTX filters allow livestreamers and content creators to apply fun post-processing filters, changing the look and mood of content with tweaks to color and saturation.

Joining these Freestyle RTX game filters is RTX Dynamic Vibrance, which enhances visual clarity on a per-app basis. Colors pop more on screen, and color crushing is minimized to preserve image quality and immersion. The filter is accelerated by Tensor Cores on GeForce RTX GPUs, making it easier for viewers to enjoy all the action.

Enhanced visual clarity with RTX Dynamic Vibrance.

Freestyle RTX filters empower gamers to personalize the visual aesthetics of their favorite games through real-time post-processing filters. This feature boasts compatibility with a vast library of more than 1,200 games.

Download the NVIDIA app today.

RTX Remix 0.6 Release

The new RTX Remix update offers modders significantly improved mod performance, as well as quality of life improvements that help streamline the mod-making process.

RTX Remix now lets modders test experimental features under active development. It also includes a new Stage Manager that makes it easier to see and change every mesh, texture, light or element in a scene in real time.

To learn more about the RTX Remix 0.6 release, check out the release notes.

With RTX Remix in the NVIDIA app launcher, modders have direct access to Remix’s powerful features. Through the NVIDIA app, RTX Remix modders can benefit from faster start-up times, lower CPU usage and direct control over updates with an optimized user interface.

To the 3D Victor Go the Spoils

NVIDIA Studio in June kicked off a 3D character contest for artists in collaboration with Reallusion, a company that develops 2D and 3D character creation and animation software. Today, we’re celebrating the winners from that contest.

In the category of Best Realistic Character Animation, Robert Lundqvist won for the piece Lisa and Fia.

In the category of Best Stylized Character Animation, Loic Bramoulle won for the piece HellGal.

Both winners will receive an NVIDIA Studio-validated laptop to help further their creative efforts.

View over 250 imaginative and impressive entries here.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Jensen Huang to Discuss AI’s Future with Masayoshi Son at AI Summit Japan

NVIDIA founder and CEO Jensen Huang will join SoftBank Group Chairman and CEO Masayoshi Son in a fireside chat at NVIDIA AI Summit Japan to discuss the transformative role of AI and more.

Taking place on November 12-13, the invite-only event at The Prince Park Tower in Tokyo’s Minato district will gather industry leaders to explore advancements in generative AI, robotics and industrial digitalization.

Call to action: Tickets for the event are sold out, but tune in via livestream or watch on-demand sessions.

Over 50 sessions and live demos will showcase innovations from NVIDIA and its partners, covering everything from large language models, known as LLMs, to AI-powered robotics and digital twins.

Huang and Son will discuss AI’s transformative role and the efforts driving the field forward.

Son has invested in companies around the world that show potential for AI-driven growth through SoftBank Vision Funds. Huang has steered NVIDIA’s rise to a global leader in AI and accelerated computing.

One major topic: Japan’s AI infrastructure initiative, supported by NVIDIA and local firms. This investment is central to the country’s AI ambitions.

Leaders from Japan’s Ministry of Economy, Trade and Industry (METI) and experts like Shunsuke Aoki from Turing Inc. will dig into how sovereign AI fosters innovation and strengthens Japan’s technological independence.

On Wednesday, November 13, two key sessions will offer deeper insights into Japan’s AI journey:

  • The Present and Future of Generative AI in Japan: Professor Yutaka Matsuo of the University of Tokyo will explore the advances of generative AI and its impact on policy and business strategy. Expect discussions on the opportunities and challenges Japan faces as it pushes forward with AI innovations.
  • Sovereign AI and Its Role in Japan’s Future: A panel of four experts will dive into the concept of sovereign AI. Speakers like Takuya Watanabe of METI and Hironobu Tamba of SoftBank will discuss how sovereign AI can accelerate business strategies and strengthen Japan’s technological independence.

These sessions highlight how Japan is positioning itself at the forefront of AI development. Practical insights into the next wave of AI innovation and policy are on the agenda.

Experts from Sakana AI, Sony, Tokyo Science University and Yaskawa Electric will be among those presenting breakthroughs across sectors like healthcare, robotics and data centers.

The summit will also feature hands-on workshops, including a full-day session on Tuesday, November 12, titled “Building RAG Agents with LLM.”

Led by NVIDIA experts, this workshop will offer practical experience in developing retrieval-augmented generation, or RAG, agents using large-scale language models.

With its mix of forward-looking discussions and real-world applications, NVIDIA AI Summit Japan will highlight Japan’s ongoing advancements in AI and its contributions to the global AI landscape.

Tune in to the fireside chat between Son and Huang via livestream or watch on-demand sessions.

Welcome to GeForce NOW Performance: Priority Members Get Instant Upgrade

This GFN Thursday, the GeForce NOW Priority membership is getting enhancements and a fresh name to go along with it. The new Performance membership offers more GeForce-powered premium gaming — at no change in the monthly membership cost.

Gamers having a hard time deciding between the Performance and Ultimate memberships can take them both for a spin with a Day Pass, now 25% off for a limited time. Day Passes give access to 24 continuous hours of powerful cloud gaming.

In addition, seven new games are available this week, joining the over 2,000 games in the GeForce NOW library.

Time for a Glow Up

The Performance membership keeps all the same great gaming benefits and now provides members with an enhanced streaming experience at no additional cost.

Say hello to the Performance membership.

Performance members can stream at up to 1440p — an increase from the previous 1080p resolution — and experience games in immersive, ultrawide resolutions. They can also save their in-game graphics settings across streaming sessions, including for NVIDIA RTX features in supported titles.

All current Priority members are automatically upgraded to Performance and can take advantage of the upgraded streaming experience today.

Performance members will connect to GeForce RTX-powered gaming rigs for up to 1440p resolution. Ultimate members continue to receive the top streaming experience: connecting to GeForce RTX 4080-powered gaming rigs with up to 4K resolution and 120 frames per second, or 1080p and 240 fps in Competitive mode for games with support for NVIDIA Reflex technology.

Gamers playing on the free tier will now see they’re streaming from basic rigs, with varying specs that offer entry-level cloud gaming and are optimized for capacity.

Account portal on GeForce NOW.

At the start of next year, GeForce NOW will roll out a 100-hour monthly playtime allowance to continue providing exceptional quality and speed — as well as shorter queue times — for Performance and Ultimate members. This ample limit comfortably accommodates 94% of members, who typically enjoy the service well within this timeframe. Members can check out how much time they’ve spent in the cloud through their account portal (see screenshot example above).

Up to 15 hours of unused playtime will automatically roll over to the next month for members, and additional hours can be purchased at $2.99 for 15 additional hours of Performance, or $5.99 for 15 additional Ultimate hours.

Loyal Member Benefit

To thank the GFN community for joining the cloud gaming revolution, GeForce NOW is offering active paid members as of Dec. 31, 2024, the ability to continue with unlimited playtime for a full year until January 2026.

New members can lock in this feature by signing up for GeForce NOW before Dec. 31, 2024. As long as a member’s account remains uninterrupted and in good standing, they’ll continue to receive unlimited playtime for all of 2025.

Don’t Pass This Up

For those looking to try out the new premium benefits and all Performance and Ultimate memberships have to offer, Day Passes are 25% off for a limited time.

Whether with the newly named Performance Day Pass at $2.99 or the Ultimate Day Pass at $5.99, members can unlock 24 hours of uninterrupted access to powerful NVIDIA GeForce RTX-powered cloud gaming servers.

Another new GeForce NOW feature lets users apply the value of their most recently purchased Day Pass toward any monthly membership if they sign up within 48 hours of the completion of their Day Pass.

Quarter the price, full day of fun.

Dive into a vast library of over 2,000 games with enhanced graphics, including NVIDIA RTX features like ray tracing and DLSS. With the Ultimate Day Pass, snag a taste of GeForce NOW’s highest-performing membership tier and enjoy up to 4K resolution 120 fps or 1080p 240 fps across nearly any device. It’s an ideal way to experience elevated GeForce gaming in the cloud.

Thrilling New Games

Members can look for the following games available to stream in the cloud this week:

  • Planet Coaster 2 (New release on Steam, Nov. 6)
  • Teenage Mutant Ninja Turtles: Splintered Fate (New release on Steam, Nov. 6)
  • Empire of the Ants (New release on Steam, Nov. 7)
  • Unrailed 2: Back on Track (New release on Steam, Nov. 7)
  • TCG Card Shop Simulator (Steam)
  • StarCraft II (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)
  • StarCraft Remastered (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)

What are you planning to play this weekend? Let us know on X or in the comments below.

NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools

Robotics developers can greatly accelerate their work on AI-enabled robots, including humanoids, using new AI and simulation tools and workflows that NVIDIA revealed this week at the Conference for Robot Learning (CoRL) in Munich, Germany.

The lineup includes the general availability of the NVIDIA Isaac Lab robot learning framework; six new humanoid robot learning workflows for Project GR00T, an initiative to accelerate humanoid robot development; and new world-model development tools for video data curation and processing, including the NVIDIA Cosmos tokenizer and NVIDIA NeMo Curator for video processing.

The open-source Cosmos tokenizer provides robotics developers with superior visual tokenization, breaking down images and videos into high-quality tokens at exceptionally high compression rates. It runs up to 12x faster than current tokenizers, while NeMo Curator provides video curation up to 7x faster than unoptimized pipelines.

Also timed with CoRL, NVIDIA presented 23 papers and nine workshops related to robot learning and released training and workflow guides for developers. Further, Hugging Face and NVIDIA announced they’re collaborating to accelerate open-source robotics research with LeRobot, NVIDIA Isaac Lab and NVIDIA Jetson for the developer community.

Accelerating Robot Development With Isaac Lab 

NVIDIA Isaac Lab is an open-source, robot learning framework built on NVIDIA Omniverse, a platform for developing OpenUSD applications for industrial digitalization and physical AI simulation.

Developers can use Isaac Lab to train robot policies at scale. This open-source unified robot learning framework applies to any embodiment — from humanoids to quadrupeds to collaborative robots — to handle increasingly complex movements and interactions.

Leading commercial robot makers, robotics application developers and robotics research entities around the world are adopting Isaac Lab, including 1X, Agility Robotics, The AI Institute, Berkeley Humanoid, Boston Dynamics, Field AI, Fourier, Galbot, Mentee Robotics, Skild AI, Swiss-Mile, Unitree Robotics and XPENG Robotics.

Project GR00T: Foundations for General-Purpose Humanoid Robots 

Building advanced humanoids is extremely difficult, demanding multilayer technological and interdisciplinary approaches to make the robots perceive, move and learn skills effectively for human-robot and robot-environment interactions.

Project GR00T is an initiative to develop accelerated libraries, foundation models and data pipelines to accelerate the global humanoid robot developer ecosystem.

Six new Project GR00T workflows provide humanoid developers with blueprints to realize the most challenging humanoid robot capabilities. They include:

  • GR00T-Gen for building generative AI-powered, OpenUSD-based 3D environments
  • GR00T-Mimic for robot motion and trajectory generation
  • GR00T-Dexterity for robot dexterous manipulation
  • GR00T-Control for whole-body control
  • GR00T-Mobility for robot locomotion and navigation
  • GR00T-Perception for multimodal sensing

“Humanoid robots are the next wave of embodied AI,” said Jim Fan, senior research manager of embodied AI at NVIDIA. “NVIDIA research and engineering teams are collaborating across the company and our developer ecosystem to build Project GR00T to help advance the progress and development of global humanoid robot developers.”

New Development Tools for World Model Builders

Today, robot developers are building world models — AI representations of the world that can predict how objects and environments respond to a robot’s actions. Building these world models is incredibly compute- and data-intensive, with models requiring thousands of hours of real-world, curated image or video data.

NVIDIA Cosmos tokenizers provide efficient, high-quality encoding and decoding to simplify the development of these world models. They set a new standard of minimal distortion and temporal instability, enabling high-quality video and image reconstructions.

Providing high-quality compression and up to 12x faster visual reconstruction, the Cosmos tokenizer paves the path for scalable, robust and efficient development of generative applications across a broad spectrum of visual domains.
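
For a feel of the developer workflow, here is a sketch of encoding a short video into tokens. The class, argument and checkpoint names follow the project’s README as best understood and should be treated as assumptions; check the GitHub repository for the current API.

```python
# Sketch of encoding video into compact tokens with the open-source Cosmos
# tokenizer. Import path, class name and checkpoint layout are assumptions
# based on the project README and may differ in the current release.
import torch
from cosmos_tokenizer.video_lib import CausalVideoTokenizer

# Batch of one video: (batch, channels, time, height, width)
video = torch.randn(1, 3, 17, 512, 512, device="cuda", dtype=torch.bfloat16)

encoder = CausalVideoTokenizer(
    checkpoint_enc="checkpoints/Cosmos-Tokenizer-CV8x8x8/encoder.jit"
)
(latent,) = encoder.encode(video)
# With 8x temporal and 8x8 spatial compression, the latent grid is roughly
# T/8 x H/8 x W/8 elements, a large reduction over raw pixels.
print(latent.shape)
```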

1X, a humanoid robot company, has updated the 1X World Model Challenge dataset to use the Cosmos tokenizer.

“NVIDIA Cosmos tokenizer achieves really high temporal and spatial compression of our data while still retaining visual fidelity,” said Eric Jang, vice president of AI at 1X Technologies. “This allows us to train world models with long horizon video generation in an even more compute-efficient manner.”

Other humanoid and general-purpose robot developers, including XPENG Robotics and Hillbot, are developing with the NVIDIA Cosmos tokenizer to manage high-resolution images and videos.

NeMo Curator now includes a video processing pipeline. This enables robot developers to improve their world-model accuracy by processing large-scale text, image and video data.

Curating video data poses challenges due to its massive size, requiring scalable pipelines and efficient orchestration for load balancing across GPUs. Additionally, models for filtering, captioning and embedding need optimization to maximize throughput.

NeMo Curator overcomes these challenges by streamlining data curation with automatic pipeline orchestration, reducing processing time significantly. It supports linear scaling across multi-node, multi-GPU systems, efficiently handling over 100 petabytes of data. This simplifies AI development, reduces costs and accelerates time to market.
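
As a conceptual illustration of that multi-GPU scaling (this is not the NeMo Curator API), the sketch below shards clips across workers pinned to individual GPUs, the same load-balancing pattern a curation pipeline relies on.

```python
# Conceptual stand-in for distributed video curation: shard clips across
# GPUs with a process pool so filter/caption/embed stages scale with devices.
import os
from concurrent.futures import ProcessPoolExecutor

NUM_GPUS = 8

def curate_shard(args):
    gpu_id, clips = args
    # Pin this worker to one GPU before loading any models on it.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    kept = [c for c in clips]  # placeholder: quality filter -> caption -> embed
    return kept

if __name__ == "__main__":
    clips = [f"clip_{i:05d}.mp4" for i in range(1000)]
    shards = [(g, clips[g::NUM_GPUS]) for g in range(NUM_GPUS)]
    with ProcessPoolExecutor(max_workers=NUM_GPUS) as pool:
        curated = [c for shard in pool.map(curate_shard, shards) for c in shard]
    print(len(curated), "clips curated")
```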

Advancing the Robot Learning Community at CoRL

The nearly two dozen research papers the NVIDIA robotics team released at CoRL cover breakthroughs in integrating vision language models for improved environmental understanding and task execution, temporal robot navigation, long-horizon planning for complex multistep tasks and skill acquisition from human demonstrations.

Groundbreaking papers for humanoid robot control and synthetic data generation include SkillGen, a system based on synthetic data generation for training robots with minimal human demonstrations, and HOVER, a robot foundation model for controlling humanoid robot locomotion and manipulation.

NVIDIA researchers will also be participating in nine workshops at the conference. Learn more about the full schedule of events.

Availability

NVIDIA Isaac Lab 1.2 is available now and is open source on GitHub. NVIDIA Cosmos tokenizer is available now on GitHub and Hugging Face. NeMo Curator for video processing will be available at the end of the month.

The new NVIDIA Project GR00T workflows are coming soon to help robot companies build humanoid robot capabilities with greater ease. Read more about the workflows on the NVIDIA Technical Blog.

Researchers and developers learning to use Isaac Lab can now access developer guides and tutorials, including an Isaac Gym to Isaac Lab migration guide.

Discover the latest in robot learning and simulation in an upcoming OpenUSD insider livestream on robot simulation and learning on Nov. 13, and attend the NVIDIA Isaac Lab office hours for hands-on support and insights.

Developers can apply to join the NVIDIA Humanoid Robot Developer Program.

Hugging Face and NVIDIA to Accelerate Open-Source AI Robotics Research and Development

At the Conference for Robot Learning (CoRL) in Munich, Germany, Hugging Face and NVIDIA announced a collaboration to accelerate robotics research and development by bringing together their open-source robotics communities.

Hugging Face’s LeRobot open AI platform combined with NVIDIA AI, Omniverse and Isaac robotics technology will enable researchers and developers to drive advances across a wide range of industries, including manufacturing, healthcare and logistics.

Open-Source Robotics for the Era of Physical AI

The era of physical AI — robots understanding physical properties of environments — is here, and it’s rapidly transforming the world’s industries.

To drive and sustain this rapid innovation, robotics researchers and developers need access to open-source, extensible frameworks that span the development process of robot training, simulation and inference. With models, datasets and workflows released under shared frameworks, the latest advances are readily available for use without the need to recreate code.

Hugging Face’s leading open AI platform serves more than 5 million machine learning researchers and developers, offering tools and resources to streamline AI development. Hugging Face users can access and fine-tune the latest pretrained models and build AI pipelines on common APIs with over 1.5 million models, datasets and applications freely accessible on the Hugging Face Hub.

LeRobot, developed by Hugging Face, extends the successful paradigms from its Transformers and Diffusers libraries into the robotics domain. LeRobot offers a comprehensive suite of tools for sharing data collection, model training and simulation environments, along with designs for low-cost manipulator kits.

NVIDIA AI and simulation technologies, including the open-source modular robot learning framework NVIDIA Isaac Lab, can accelerate LeRobot’s data collection, training and verification workflow. Researchers and developers can share their models and datasets built with LeRobot and Isaac Lab, creating a data flywheel for the robotics community.

Scaling Robot Development With Simulation

Developing physical AI is challenging. Unlike language models that use extensive internet text data, physics-based robotics relies on physical interaction data along with vision sensors, which is harder to gather at scale. Collecting real-world robot data for dexterous manipulation across a large number of tasks and environments is time-consuming and labor-intensive.

Making this easier, Isaac Lab, built on NVIDIA Isaac Sim, enables robot training by demonstration or trial-and-error in simulation, using high-fidelity rendering and physics simulation to create realistic synthetic environments and data. By combining GPU-accelerated physics simulations and parallel environment execution, Isaac Lab can generate vast amounts of training data — equivalent to thousands of real-world experiences — from a single demonstration.

Generated motion data is then used to train a policy with imitation learning. After successful training and validation in simulation, the policies are deployed on a real robot, where they are further tested and tuned to achieve optimal performance.
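
A minimal behavior-cloning sketch in PyTorch shows the imitation step in isolation; it is a generic stand-in rather than Isaac Lab’s training API, with observation and action sizes chosen arbitrarily.

```python
# Generic behavior cloning: fit a policy to (observation, action) pairs
# collected from simulated rollouts. Sizes and data here are placeholders.
import torch
import torch.nn as nn

obs = torch.randn(4096, 48)      # stand-in for simulated observations
actions = torch.randn(4096, 12)  # stand-in for expert actions

policy = nn.Sequential(nn.Linear(48, 256), nn.ReLU(), nn.Linear(256, 12))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

for epoch in range(10):
    loss = nn.functional.mse_loss(policy(obs), actions)  # match expert actions
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final imitation loss: {loss.item():.4f}")
```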

This iterative process leverages real-world data’s accuracy and the scalability of simulated synthetic data, ensuring robust and reliable robotic systems.

By sharing these datasets, policies and models on Hugging Face, a robot data flywheel is created that enables developers and researchers to build upon each other’s work, accelerating progress in the field.

“The robotics community thrives when we build together,” said Animesh Garg, assistant professor at Georgia Tech. “By embracing open-source frameworks such as Hugging Face’s LeRobot and NVIDIA Isaac Lab, we accelerate the pace of research and innovation in AI-powered robotics.”

Fostering Collaboration and Community Engagement

The planned collaborative workflow involves collecting data through teleoperation and simulation in Isaac Lab and storing it in the standard LeRobotDataset format. Data generated with GR00T-Mimic will then be used to train a robot policy with imitation learning, which is subsequently evaluated in simulation. Finally, the validated policy is deployed on real-world robots with NVIDIA Jetson for real-time inference.
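
For readers who want to poke at the data side, here is a hedged sketch of loading a dataset in the LeRobotDataset format; the import path and the public "lerobot/pusht" dataset id follow the LeRobot repository as best understood and may change.

```python
# Loading a LeRobotDataset and iterating over it with a standard DataLoader.
# Import path, dataset id and attribute names are assumptions to verify
# against the LeRobot repository.
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")  # public sample dataset on the Hub
print(dataset.num_episodes)

loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))  # dict of observation/action tensors per key
print({k: v.shape for k, v in batch.items() if hasattr(v, "shape")})
```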

The initial steps in this collaboration have already been taken: the teams have shown a physical picking setup with LeRobot software running on NVIDIA Jetson Orin Nano, a powerful, compact compute platform for deployment.

“Combining Hugging Face open-source community with NVIDIA’s hardware and Isaac Lab simulation has the potential to accelerate innovation in AI for robotics,” said Remi Cadene, principal research scientist at LeRobot.

This work builds on NVIDIA’s community contributions in generative AI at the edge, supporting the latest open models and libraries such as Hugging Face Transformers, and optimizing inference for large language models (LLMs), small language models (SLMs) and multimodal vision language models (VLMs), along with their action-based variants, vision language action models (VLAs), as well as diffusion policies and speech models — all with strong, community-driven support.

Together, Hugging Face and NVIDIA aim to accelerate the work of the global ecosystem of robotics researchers and developers transforming industries ranging from transportation to manufacturing and logistics.

Learn about NVIDIA’s robotics research papers at CoRL, including VLM integration for better environmental understanding, temporal navigation and long-horizon planning. Check out workshops at CoRL with NVIDIA researchers.

Get Plugged In: How to Use Generative AI Tools in Obsidian

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.

As generative AI evolves and accelerates industry, a community of AI enthusiasts is experimenting with ways to integrate the powerful technology into common productivity workflows.

Applications that support community plug-ins give users the power to explore how large language models (LLMs) can enhance a variety of workflows. By using local inference servers powered by the NVIDIA RTX-accelerated llama.cpp software library, users on RTX AI PCs can integrate local LLMs with ease.

Previously, we looked at how users can take advantage of Leo AI in the Brave web browser to optimize the web browsing experience. Today, we look at Obsidian, a popular writing and note-taking application, based on the Markdown markup language, that’s useful for keeping complex and linked records for multiple projects. The app supports community-developed plug-ins that bring additional functionality, including several that enable users to connect Obsidian to a local inferencing server like Ollama or LM Studio.

Using Obsidian and LM Studio to generate notes with a 27B-parameter LLM accelerated by RTX.

Connecting Obsidian to LM Studio only requires enabling the local server functionality in LM Studio by clicking on the “Developer” icon on the left panel, loading any downloaded model, enabling the CORS toggle and clicking “Start.” Take note of the chat completion URL from the “Developer” log console (“http://localhost:1234/v1/chat/completions” by default), as the plug-ins will need this information to connect.

Next, launch Obsidian and open the “Settings” panel. Click “Community plug-ins” and then “Browse.” There are several community plug-ins related to LLMs, but two popular options are Text Generator and Smart Connections.

  • Text Generator is helpful for generating content in an Obsidian vault, like notes and summaries on a research topic.
  • Smart Connections is useful for asking questions about the contents of an Obsidian vault, such as the answer to an obscure trivia question previously saved years ago.

Each plug-in has its own way of entering the LM Server URL.

For Text Generator, open the settings and select “Custom” for “Provider profile” and paste the whole URL into the “Endpoint” field. For Smart Connections, configure the settings after starting the plug-in. In the settings panel on the right side of the interface, select “Custom Local (OpenAI Format)” for the model platform. Then, enter the URL and the model name (e.g., “gemma-2-27b-instruct”) into their respective fields as they appear in LM Studio.

Once the fields are filled in, the plug-ins will function. The LM Studio user interface will also show logged activity if users are curious about what’s happening on the local server side.
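
A quick way to confirm the server is reachable, before debugging inside the plug-ins, is to post directly to the chat completion URL noted earlier. The schema is OpenAI-compatible; the model name should match whatever is loaded in LM Studio.

```python
# Sanity check of the LM Studio local server the Obsidian plug-ins rely on,
# using the chat completion URL from the "Developer" log console.
import requests

url = "http://localhost:1234/v1/chat/completions"
payload = {
    "model": "gemma-2-27b-instruct",  # whatever model is loaded in LM Studio
    "messages": [{"role": "user", "content": "Reply with one word: ready?"}],
    "temperature": 0.7,
}
reply = requests.post(url, json=payload, timeout=120).json()
print(reply["choices"][0]["message"]["content"])
```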

Transforming Workflows With Obsidian AI Plug-Ins

Both the Text Generator and Smart Connections plug-ins use generative AI in compelling ways.

For example, imagine a user wants to plan a vacation to the fictitious destination of Lunar City and brainstorm ideas for what to do there. The user would start a new note, titled “What to Do in Lunar City.” Since Lunar City is not a real place, the query sent to the LLM will need to include a few extra instructions to guide the responses. Click the Text Generator plug-in icon, and the model will generate a list of activities to do during the trip.

Obsidian, via the Text Generator plug-in, will request LM Studio to generate a response, and in turn LM Studio will run the Gemma 2 27B model. With RTX GPU acceleration in the user’s computer, the model can quickly generate a list of things to do.

The Text Generator community plug-in in Obsidian enables users to connect to an LLM in LM Studio and generate notes for an imaginary vacation.

Or, suppose many years later the user’s friend is going to Lunar City and wants to know where to eat. The user may not remember the names of the places where they ate, but they can check the notes in their vault (Obsidian’s term for a collection of notes) in case they’d written something down.

Rather than looking through all of the notes manually, a user can use the Smart Connections plug-in to ask questions about their vault of notes and other content. The plug-in uses the same LM Studio server to respond to the request, and provides relevant information it finds from the user’s notes to assist the process. The plug-in does this using a technique called retrieval-augmented generation.
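
A stripped-down sketch of that retrieval-augmented flow: embed the notes, find the most relevant one, and hand it to the same local server as context. The embedding model here is an arbitrary illustrative choice, not what Smart Connections uses internally.

```python
# Minimal retrieval-augmented generation over local notes: embed, retrieve
# the best match, then ask the local LM Studio server with it as context.
import requests
from sentence_transformers import SentenceTransformer

notes = {
    "lunar-city-trip.md": "Dinner at Crater Ramen was the highlight...",
    "groceries.md": "eggs, rice, miso",
}
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
note_vecs = embedder.encode(list(notes.values()), normalize_embeddings=True)

question = "Where did we eat in Lunar City?"
q_vec = embedder.encode(question, normalize_embeddings=True)
best = list(notes)[int((note_vecs @ q_vec).argmax())]

prompt = f"Answer using this note:\n{notes[best]}\n\nQuestion: {question}"
reply = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={"model": "gemma-2-27b-instruct",
          "messages": [{"role": "user", "content": prompt}]},
    timeout=120,
).json()
print(reply["choices"][0]["message"]["content"])
```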

The Smart Connections community plug-in in Obsidian uses retrieval-augmented generation and a connection to LM Studio to enable users to query their notes.

These are fun examples, but after spending some time with these capabilities, users can see the real benefits and improvements for everyday productivity. Obsidian plug-ins are just two ways in which community developers and AI enthusiasts are embracing AI to supercharge their PC experiences.

NVIDIA GeForce RTX technology for Windows PCs can run thousands of open-source models for developers to integrate into their Windows apps.

Learn more about the power of LLMs and the Text Generator and Smart Connections plug-ins by integrating Obsidian into your workflow, and play with the accelerated experience available on RTX AI PCs.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Austin Calling: As Texas Absorbs Influx of Residents, Rekor Taps NVIDIA Technology for Roadway Safety, Traffic Relief

Austin is drawing people to jobs, music venues, comedy clubs, barbecue and more. But with this boom has come a big city blues: traffic jams.

Rekor, which offers traffic management and public safety analytics, has a front-row seat to the increasing traffic from an influx of new residents migrating to Austin. Rekor works with the Texas Department of Transportation, which has a $7 billion project addressing the congestion, to help mitigate roadway concerns.

“Texas has been trying to meet that growth and demand on the roadways by investing a lot in infrastructure, and they’re focusing a lot on digital infrastructure,” said Shervin Esfahani, vice president of global marketing and communications at Rekor. “It’s super complex, and they realized their traditional systems were unable to really manage and understand it in real time.”

Rekor, based in Columbia, Maryland, has been harnessing NVIDIA Metropolis for real-time video understanding and NVIDIA Jetson Xavier NX modules for edge AI in Texas, Florida, Philadelphia, Georgia, Nevada, Oklahoma and many more U.S. destinations as well as in Israel and other places internationally.

Metropolis is an application framework for smart infrastructure development with vision AI. It provides developer tools, including the NVIDIA DeepStream SDK, NVIDIA TAO Toolkit, pretrained models on the NVIDIA NGC catalog and NVIDIA TensorRT. NVIDIA Jetson is a compact, powerful and energy-efficient accelerated computing platform used for embedded and robotics applications.

Rekor’s efforts in Texas and Philadelphia to help better manage roads with AI are the latest development in an ongoing story for traffic safety and traffic management.

Reducing Rubbernecking, Pileups, Fatalities and Jams

Rekor offers two main products: Rekor Command and Rekor Discover. Command is an AI-driven platform for traffic management centers, providing rapid identification of traffic events and zones of concern. It gives departments of transportation real-time situational awareness and alerts that allow them to keep city roadways safer and less congested.

Discover taps into Rekor’s edge system to fully automate the capture of comprehensive traffic and vehicle data and provides robust traffic analytics that turn roadway data into measurable, reliable traffic knowledge. With Rekor Discover, departments of transportation can see a full picture of how vehicles move on roadways and the impact they make, allowing them to better organize and execute their future city-building initiatives.

The company has deployed Command across Austin to help detect issues, analyze incidents and respond to roadway activity with a real-time view.

“For every minute an incident happens and stays on the road, it creates four minutes of traffic, which puts a strain on the road, and the likelihood of a secondary incident like an accident from rubbernecking massively goes up,” said Paul-Mathew Zamsky, vice president of strategic growth and partnerships at Rekor. “Austin deployed Rekor Command and saw a 159% increase in incident detections, and they were able to respond eight and a half minutes faster to those incidents.”

Rekor Command takes in many feeds of data — like traffic camera footage, weather, connected car info and construction updates — and taps into any other data infrastructure, as well as third-party data. It then uses AI to make connections and surface anomalies, like a roadside incident. That information is presented in workflows to traffic management centers for review, confirmation and response.

“They look at it and respond to it, and they are doing it faster than ever before,” said Esfahani. “It helps save lives on the road, and it also helps people’s quality of life, helps them get home faster and stay out of traffic, and it reduces the strain on the system in the city of Austin.”

In addition to adopting NVIDIA’s full-stack accelerated computing for roadway intelligence, Rekor is going all in on NVIDIA AI and NVIDIA AI Blueprints, which are reference workflows for generative AI use cases, built with NVIDIA NIM microservices as part of the NVIDIA AI Enterprise software platform. NVIDIA NIM is a set of easy-to-use inference microservices for accelerating deployments of foundation models on any cloud or data center while keeping data secure.

“Rekor has multiple large language models and vision language models running on NVIDIA Triton Inference Server in production,” according to Shai Maron, senior vice president of global software and data engineering at Rekor.

“Internally, we’ll use it for data annotation, and it will help us optimize different aspects of our day to day,” he said. “LLMs externally will help us calibrate our cameras in a much more efficient way and configure them.”

Rekor is using the NVIDIA AI Blueprint for video search and summarization to build AI agents for city services, particularly in areas such as traffic management, public safety and optimization of city infrastructure. NVIDIA recently announced the blueprint, which enables a range of interactive visual AI agents that extract complex activities from massive volumes of live or archived video.

Philadelphia Monitors Roads, EV Charger Needs, Pollution

Philadelphia Navy Yard is a tourism hub run by the Philadelphia Industrial Development Corporation (PIDC), which faces challenges in managing roads and gathering data on new developments in the popular area. The Navy Yard occupies 1,200 acres and hosts more than 150 companies and 15,000 employees, and a $6 billion redevelopment plan promises to bring in more than 12,000 new jobs and thousands of new residents.

PIDC sought greater visibility into the effects of road closures and construction projects on mobility and how to improve mobility during significant projects and events. PIDC also looked to strengthen the Navy Yard’s ability to understand the volume and traffic flow of car carriers or other large vehicles and quantify the impact of speed-mitigating devices deployed across hazardous stretches of roadway.

Discover provided PIDC insights into additional infrastructure projects that need to be deployed to manage any changes in traffic.

Understanding how many electric vehicles (EVs) there are, and where they enter and leave the Navy Yard, gives PIDC clear insight into potential sites for future EV charging stations. Rekor Discover pulls these insights from Rekor’s edge systems, built with NVIDIA Jetson Xavier NX modules for powerful edge processing and AI.

Rekor Discover enabled PIDC planners to create a hotspot map of EV traffic from data provided by the AI platform. The solution relies on real-time traffic analysis using the NVIDIA DeepStream data pipeline and Jetson, and it uses NVIDIA Triton Inference Server to enhance LLM capabilities.
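
For developers curious what such a pipeline looks like in code, below is a hedged sketch of a DeepStream-style analytics pipeline built through GStreamer’s Python bindings. The element chain follows NVIDIA’s documented examples; the input file and nvinfer config path are placeholders, and this is not Rekor’s implementation.

```python
# Sketch of a DeepStream-style video analytics pipeline via GStreamer's
# Python bindings. File and config paths are placeholders; element names
# follow NVIDIA DeepStream documentation patterns.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=traffic.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=vehicle_detector_config.txt ! "
    "nvvideoconvert ! nvdsosd ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```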

PIDC also wanted to address public safety issues related to speeding and collisions, as well as decrease property damage. Using speed insights, it’s deploying traffic-calming measures on road segments where average speeds exceed safe levels.

NVIDIA Jetson Xavier NX to Monitor Pollution in Real Time

Traditionally, urban planners look at satellite imagery to understand where pollution occurs, but Rekor’s vehicle recognition models, running on NVIDIA Jetson Xavier NX modules, can track pollution to its sources, taking mitigation a step further.

“It’s about air quality,” said Shobhit Jain, senior vice president of product management at Rekor. “We’ve built models to be really good at that. They can know how much pollution each vehicle is putting out.”

Looking ahead, Rekor is examining how NVIDIA Omniverse might be used for digital twins development in order to simulate traffic mitigation with different strategies. Omniverse is a platform for developing OpenUSD applications for industrial digitalization and generative physical AI.

Developing digital twins with Omniverse for municipalities has enormous implications for reducing traffic, pollution and road fatalities — all areas Rekor sees as hugely beneficial to its customers.

“Our data models are granular, and we’re definitely exploring Omniverse,” said Jain. “We’d like to see how we can support those digital use cases.”

Learn about the NVIDIA AI Blueprint for building AI agents for video search and summarization.

Give AI a Look: Any Industry Can Now Search and Summarize Vast Volumes of Visual Data

Enterprises and public sector organizations around the world are developing AI agents to boost the capabilities of workforces that rely on visual information from a growing number of devices — including cameras, IoT sensors and vehicles.

To support their work, a new NVIDIA AI Blueprint for video search and summarization will enable developers in virtually any industry to build visual AI agents that analyze video and image content. These agents can answer user questions, generate summaries and enable alerts for specific scenarios.

Part of NVIDIA Metropolis, a set of developer tools for building vision AI applications, the blueprint is a customizable workflow that combines NVIDIA computer vision and generative AI technologies.

Global systems integrators and technology solutions providers including Accenture, Dell Technologies and Lenovo are bringing the NVIDIA AI Blueprint for video search and summarization to businesses and cities worldwide, jump-starting the next wave of AI applications that can be deployed to boost productivity and safety in factories, warehouses, shops, airports, traffic intersections and more.

Announced ahead of the Smart City Expo World Congress, the NVIDIA AI Blueprint gives visual computing developers a full suite of optimized software for building and deploying generative AI-powered agents that can ingest and understand massive volumes of live video streams or data archives.

Users can customize these visual AI agents with natural language prompts instead of rigid software code, lowering the barrier to deploying virtual assistants across industries and smart city applications.

NVIDIA AI Blueprint Harnesses Vision Language Models

Visual AI agents are powered by vision language models (VLMs), a class of generative AI models that combine computer vision and language understanding to interpret the physical world and perform reasoning tasks.

The NVIDIA AI Blueprint for video search and summarization can be configured with NVIDIA NIM microservices for VLMs like NVIDIA VILA, LLMs like Meta’s Llama 3.1 405B and AI models for GPU-accelerated question answering and context-aware retrieval-augmented generation. Developers can easily swap in other VLMs, LLMs and graph databases and fine-tune them using the NVIDIA NeMo platform for their unique environments and use cases.
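
To make the VLM piece concrete, here is a hedged sketch of asking a vision model about a single frame through an OpenAI-compatible endpoint, which is how self-hosted NIM microservices are typically queried. The base URL and model id are placeholders for an actual deployment.

```python
# Sketch: question answering on one camera frame via an OpenAI-compatible
# VLM endpoint. Endpoint, model id and file name are assumptions.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

with open("dock_camera_frame.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="nvidia/vila",  # hypothetical model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Is anyone inside the marked safety zone?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```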

Adopting the NVIDIA AI Blueprint could save developers months of effort on investigating and optimizing generative AI models for smart city applications. Deployed on NVIDIA GPUs at the edge, on premises or in the cloud, it can vastly accelerate the process of combing through video archives to identify key moments.

In a warehouse environment, an AI agent built with this workflow could alert workers if safety protocols are breached. At busy intersections, an AI agent could identify traffic collisions and generate reports to aid emergency response efforts. And in the field of public infrastructure, maintenance workers could ask AI agents to review aerial footage and identify degrading roads, train tracks or bridges to support proactive maintenance.

Beyond smart spaces, visual AI agents could also be used to summarize videos for people with impaired vision, automatically generate recaps of sporting events and help label massive visual datasets to train other AI models.

The video search and summarization workflow joins a collection of NVIDIA AI Blueprints that make it easy to create AI-powered digital avatars, build virtual assistants for personalized customer service and extract enterprise insights from PDF data.

NVIDIA AI Blueprints are free for developers to experience and download, and can be deployed in production across accelerated data centers and clouds with NVIDIA AI Enterprise, an end-to-end software platform that accelerates data science pipelines and streamlines generative AI development and deployment.

AI Agents to Deliver Insights From Warehouses to World Capitals

Enterprise and public sector customers can also harness the full collection of NVIDIA AI Blueprints with the help of NVIDIA’s partner ecosystem.

Global professional services company Accenture has integrated NVIDIA AI Blueprints into its Accenture AI Refinery, which is built on NVIDIA AI Foundry and enables customers to develop custom AI models trained on enterprise data.

Global systems integrators in Southeast Asia — including ITMAX in Malaysia and FPT in Vietnam — are building AI agents based on the video search and summarization NVIDIA AI Blueprint for smart city and intelligent transportation applications.

Developers can also build and deploy NVIDIA AI Blueprints on NVIDIA AI platforms with compute, networking and software provided by global server manufacturers.

Dell will use VLM and agent approaches with Dell’s NativeEdge platform to enhance existing edge AI applications and create new edge AI-enabled capabilities. Dell Reference Designs for the Dell AI Factory with NVIDIA and the NVIDIA AI Blueprint for video search and summarization will support VLM capabilities in dedicated AI workflows for data center, edge and on-premises multimodal enterprise use cases.

NVIDIA AI Blueprints are also incorporated in Lenovo Hybrid AI solutions powered by NVIDIA.

Companies like K2K, a smart city application provider in the NVIDIA Metropolis ecosystem, will use the new NVIDIA AI Blueprint to build AI agents that analyze live traffic cameras in real time. This will enable city officials to ask questions about street activity and receive recommendations on ways to improve operations. The company also is working with city traffic managers in Palermo, Italy, to deploy visual AI agents using NIM microservices and NVIDIA AI Blueprints.

Discover more about the NVIDIA AI Blueprint for video search and summarization by visiting the NVIDIA booth at the Smart City Expo World Congress, taking place in Barcelona through Nov. 7.

Learn how to build a visual AI agent.

Startup Helps Surgeons Target Breast Cancers With AI-Powered 3D Visualizations

A new AI-powered, imaging-based technology that creates accurate three-dimensional models of tumors, veins and other soft tissue offers a promising new method to help surgeons operate on, and better treat, breast cancers.

The technology, from Illinois-based startup SimBioSys, converts routine black-and-white MRI images into spatially accurate, volumetric images of a patient’s breasts. It then illuminates different parts of the breast with distinct colors — the vascular system, or veins, may be red; tumors are shown in blue; surrounding tissue is gray.

Surgeons can then easily manipulate the 3D visualization on a computer screen, gaining important insight to help guide surgeries and influence treatment plans. The technology, called TumorSight, calculates key surgery-related measurements, including a tumor’s volume and how far tumors are from the chest wall and nipple.
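
To ground those measurements, the sketch below shows the basic arithmetic for tumor volume and tumor-to-chest-wall distance from a segmentation mask. It is purely illustrative and not SimBioSys’s implementation.

```python
# Illustrative arithmetic for surgery-related measurements: tumor volume
# from a segmentation mask, and minimum tumor-to-chest-wall distance.
# Not SimBioSys's implementation.
import numpy as np
from scipy import ndimage

def tumor_metrics(tumor_mask, chest_wall_mask, spacing_mm):
    """Masks are boolean 3D arrays; spacing_mm is the voxel size per axis."""
    voxel_volume = float(np.prod(spacing_mm))           # mm^3 per voxel
    volume_ml = tumor_mask.sum() * voxel_volume / 1000  # 1 ml = 1000 mm^3

    # Distance (in mm) from every voxel to the nearest chest-wall voxel,
    # then take the minimum over tumor voxels.
    dist_to_wall = ndimage.distance_transform_edt(~chest_wall_mask,
                                                  sampling=spacing_mm)
    return volume_ml, float(dist_to_wall[tumor_mask].min())
```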

It also provides key data about a tumor’s volume in relation to a breast’s overall volume, which can help determine — before a procedure begins — whether surgeons should try to preserve a breast or choose a mastectomy, which often carries painful and cosmetic side effects. Last year, TumorSight received FDA clearance.

Across the world, nearly 2.3 million women are diagnosed with breast cancer each year, according to the World Health Organization. Every year, breast cancer is responsible for the deaths of more than 500,000 women. Around 100,000 women in the U.S. annually undergo some form of mastectomy, according to the Brigham and Women’s Hospital.

According to Jyoti Palaniappan, chief commercial officer at SimBioSys, the company’s visualization technology offers a step-change improvement over the kind of data surgeons typically see before they begin surgery.

“Typically, surgeons will get a radiology report, which tells them, ‘Here’s the size and location of the tumor,’ and they’ll get one or two pictures of the patient’s tumor,” said Palaniappan. “If the surgeon wants to get more information, they’ll need to find the radiologist and have a conversation with them — which doesn’t always happen — and go through the case with them.”

Dr. Barry Rosen, the company’s chief medical officer, said one of the technology’s primary goals is to uplevel and standardize presurgical imaging, which he believes can have broad positive impacts on outcomes.

“We’re trying to move the surgical process from an art to a science by harnessing the power of AI to improve surgical planning,” Dr. Rosen said.

SimBioSys uses NVIDIA A100 Tensor Core GPUs in the cloud for pretraining its models. It also uses NVIDIA MONAI for training and validation data, and NVIDIA CUDA-X libraries including cuBLAS and MONAI Deploy to run its imaging technology. SimBioSys is part of the NVIDIA Inception program for startups.

SimBioSys is already working on additional AI use cases it hopes can improve breast cancer survival rates.

It has developed a novel technique that takes MRI images of a patient’s breasts, captured while the patient is lying face down, and converts them into virtual, realistic 3D visualizations showing how the tumor and surrounding tissue will appear during surgery, when the patient is lying face up.

This 3D visualization is especially relevant for surgeons, showing them what a breast and any tumors will look like once surgery begins.

To create this imagery, the technology calculates gravity’s impact on different kinds of breast tissue and accounts for how different kinds of skin elasticity impact a breast’s shape when a patient is lying on the operating table.

The startup is also working on a new AI-driven approach to quickly provide insights that can help avoid cancer recurrence.

Currently, hospital labs run pathology tests on tumors that surgeons have removed. The biopsies are then sent to a different outside lab, which conducts a more comprehensive molecular analysis.

This process routinely takes up to six weeks. Without knowing how aggressive a cancer in the removed tumor is, or how that type of cancer might respond to different treatments, patients and doctors are unable to quickly chart out treatment plans to avoid recurrence.

SimBioSys’s new technology uses an AI model to analyze the 3D volumetric features of the just-removed tumor, the hospital’s initial tumor pathology report and a patient’s demographic data. From that information, SimBioSys generates — in a matter of hours — a risk analysis for that patient’s cancer, which helps doctors quickly determine the best treatment to avoid recurrence.

According to SimBioSys’s Palaniappan, the startup’s new method matches or exceeds the recurrence-risk scoring ability of more traditional methodologies, based on its internal studies, while taking a fraction of the time and costing far less.
