Some things are easy as A, B, C. But when it comes to autonomous vehicles, the key may be in one, two, three.
Faction, a Bay Area-based startup and NVIDIA Inception member, is preparing to debut its business-to-business autonomous delivery service, with three-wheel production electric vehicles purpose-built for driverless operation, streamlining time to market.
In addition, the company has built its autonomous driving system on NVIDIA DRIVE AGX for robust, automotive-grade AI compute.
The demand for last-mile enterprise delivery has significantly increased over the past decade, with few signs of slowing down. The number of business-to-business parcels grew from 7 billion to 11 billion from 2019 to 2021, according to ABI Research. The firm expects this number to continue rising, to reach 75 billion in 2030.
Faction aims to help meet this surging demand with affordable, production autonomous vehicles ready to hit the road this year.
Smaller Vehicles, Bigger Brains
Faction’s flagship vehicle, the D1, is built on EV maker Arcimoto’s low-cost vehicle platform. The vehicle is designed to be completely driverless, combining autonomous driving and teleoperation to navigate delivery routes.
The D1 delivery vehicle can reach speeds of up to 75 miles per hour, offers more than 100 miles of battery range and totes up to 500 pounds of cargo.
Inside the vehicle, NVIDIA DRIVE AGX delivers high-performance and energy-efficient AI compute for autonomous driving.
The centralized platform runs the redundant and diverse deep neural networks that power the vehicle’s AI capabilities, while leaving enough compute headroom to continuously add new features. It’s also automotive grade, achieving systematic safety standards such as ISO 26262 ASIL-D.
“Our goal is to deploy cost-efficient autonomous vehicles in the near term,” said Faction CEO Ain McKendrick. “We chose NVIDIA DRIVE because it’s an automotive-grade platform that meets our needs today.”
Making the Inception Connection
As a member of NVIDIA Inception, Faction taps into the latest AI technologies and expertise to create vehicles that are always at the cutting edge.
Inception supports all stages of a startup’s life cycle. NVIDIA works closely with members to provide the best technical tools, latest resources and opportunities to connect with investors.
McKendrick added that Inception has helped Faction take full advantage of the latest software tools for faster iteration and streamlined development.
Expanding Services
In addition to last-mile delivery, Faction is targeting its vehicles for the micro-mobility market.
The startup plans to launch single-rider vehicles next year that can be requested via an app. The vehicle will drive autonomously to the customer, who will then take control and manually drive it to their destination.
The goal is to meet single-rider demand with a cost-efficient and sustainable shared mobility offering.
By keeping its delivery and mobility vehicles in a compact package without sacrificing compute, Faction proves that three truly is a magic number.
Turn the TV on. GeForce NOW is leveling up gaming in the living room.
The Samsung Gaming Hub launched today, delivering GeForce NOW natively on 2022 Samsung Smart TVs.
Plus, the SHIELD Software Experience Upgrade 9.1 is now rolling out to all NVIDIA SHIELD TVs, delivering new gaming features that improve GeForce NOW.
Great living room gaming pairs perfectly with a great gaming controller. GeForce NOW members can claim a new reward for 20% off all SteelSeries gaming controllers on SteelSeries.com — available through the end of August.
Gear up with June’s final game releases: six titles are available to stream today, with games from Motorsport Games joining the GeForce NOW library, and 13 more additions are coming in July. The announcement arrives just in time to grab games at discounted prices during the Steam Summer Sale, running through Thursday, July 7.
To cap it all off, the GeForce NOW v2.0.42 update improves streaming performance with new optimizations that adjust streaming resolutions to best fit network conditions.
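As a rough illustration of the idea, resolution selection of this kind boils down to matching a resolution ladder against measured network throughput. The sketch below is a toy, not NVIDIA’s actual algorithm, and its tiers and bitrate thresholds are invented:

```python
# Toy sketch of network-aware resolution selection -- the tiers and
# bitrate thresholds are assumptions, not GeForce NOW's real values.
LADDER = [
    ("2160p", 40.0),  # label, minimum sustained Mbps assumed to hold it
    ("1440p", 25.0),
    ("1080p", 15.0),
    ("720p", 8.0),
]

def pick_resolution(measured_mbps: float) -> str:
    """Return the highest tier the measured bitrate can sustain."""
    for label, floor_mbps in LADDER:
        if measured_mbps >= floor_mbps:
            return label
    return "480p"  # fallback for very constrained networks

print(pick_resolution(22.0))  # -> "1080p"
```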
What’s All the Hubbub?
Today’s launch of the Samsung Gaming Hub brings the best of gaming from leading game streaming services like GeForce NOW to 2022 Samsung Smart TVs.
The Samsung Gaming Hub is a new game-streaming discovery platform that bridges hardware and software to provide a better player experience. Gamers can instantly play the biggest games from GeForce NOW and other top gaming partners with no downloads, storage limits or console required.
The best Samsung Smart TVs combine the latest game-streaming technology with intelligent picture and sound processing to deliver console-like performance, without the hassle of downloads or worries about precious storage space or latency.
The Samsung Gaming Hub is available now on supported TVs in the US, UK, Brazil, Canada, France, Italy, Germany, and Spain. Members can also stream their PC games on the GeForce NOW app on Samsung TVs in other supported GeForce NOW regions.
GeForce NOW RTX 3080 members also have the advantages of ultra-low latency powered by GeForce NOW SuperPODs with faster game rendering, more efficient encoding and higher streaming frame rates. They also benefit from maximized eight-hour gaming sessions and dedicated RTX 3080 servers.
Gamers can even pair their favorite controllers to the Samsung Gaming Hub for a seamless experience.
The Best Keeps Getting Better
SHIELD TV continues to upgrade the best cloud gaming experience in the living room, adding to its existing GeForce NOW support for 4K HDR, 7.1 surround sound, a wide range of controllers, streaming to Twitch, and in-game voice chat with USB headsets. The latest SHIELD update, Software Experience Upgrade 9.1, takes gaming in the living room to new heights.
SHIELD now automatically switches TVs that support auto low-latency mode into “game mode” when playing games or video conferencing, then reverts to the previous setting for movies and TV shows. This latency-saving feature replaces the cumbersome process of finding the TV remote, switching the mode setting and changing it back when a gaming session is complete.
Another new feature is night listening mode, which enables users to stream games or watch movies at night, without disturbing others. SHIELD will automatically adjust sound levels for loud explosions, quiet dialogue and everything in between to deliver a consistent listening experience regardless of volume settings.
The update also includes microphone notifications that help identify the hot mic when multiple devices are connected.
Whether gaming from the cloud with GeForce NOW or playing an Android game locally, the latest SHIELD update helps members get the most responsive gaming experience in the living room.
Get Rewarded With SteelSeries
Members can take control of their gaming with 20% off SteelSeries gaming controllers.
SteelSeries wireless gaming controllers bring the PC gaming experience to any platform with easy pairing, extreme durability and a battery life of up to 50 hours of playtime. They’re also part of the full lineup of GeForce NOW Recommended products. The discount is valid for the Nimbus+, the Stratus Duo and even the newest Stratus+ models. Redemption is valid through Wednesday, August 31, in select North American and European regions.
It’s easy to get membership rewards for streaming games on the cloud. Log in to your NVIDIA account and select “GEFORCE NOW” from the header, then scroll down to “REWARDS” and click the “UPDATE REWARDS SETTINGS” button. Check the box in the dialogue window that shows up to start receiving special offers and in-game spoils.
Start July Off With a Bang
This GFN Thursday closes out the month with six new games streaming this week, including titles from Motorsport Games. It also kicks off July with a list of 13 titles on the way.
GeForce NOW welcomes video game publisher Motorsport Games to the cloud. From NASCAR 21: Ignition, the officially licensed video game of the world’s most popular stock-car racing series, to the thrilling and realistic physics of KartKraft, more gamers than ever can experience racing entertainment streaming on low-powered PCs, Macs and mobile devices.
Catch the games ready to play today:
Alaloth – Champions of The Four Kingdoms (New release on Steam)
Speaking of games, it’s the best time to build your collection with the Steam Summer Sale, running through Thursday, July 7.
Pick up PC games from the GeForce NOW library during Valve’s special event, then stream them across low-powered PCs, Macs and mobile devices from the cloud. Once purchased, the games are yours forever, and the cloud saves all your progress.
Check out the “Steam Summer Sale” row in the GeForce NOW app to find deals on your next adventure. Race to grab titles like NASCAR 21: Ignition and KartKraft from Motorsport Games and check if any of the GeForce NOW games on your wishlist are on sale. With over 1,300 games streaming on the cloud, it’s a good chance they are.
Extra Games From June
On top of the 25 games announced in June, another seven joined over the month.
Silicon Valley magic met Wednesday with 175 years of industrial technology leadership as Siemens CEO Roland Busch and NVIDIA Founder and CEO Jensen Huang shared their vision for an “industrial metaverse” at the launch of the Siemens Xcelerator business platform in Munich.
“When we combine the real and digital worlds we can achieve new levels of flexibility and we can bring new products to market faster,” Busch said during an event at Siemens’ Munich headquarters.
Pairing physics-based digital models from Siemens with real-time AI from NVIDIA, the companies announced they will connect the Siemens Xcelerator and NVIDIA Omniverse platforms.
“With our two companies, we can connect what Siemens makes, and what NVIDIA makes, to AI and Omniverse,” Huang said. “We can now fuse data from the point of design, all the way through product life cycle management, all the way through the automation of plants to the optimization of the plant after deployment – that entire life cycle can now be in one world.”
Bringing Real, Virtual Worlds Together
Siemens Xcelerator is a business platform that includes internet of things-enabled hardware, software and digital services from across Siemens, offering a comprehensive digital twin that brings together the mechanical, electrical and software domains.
Siemens is a leader in industrial automation and software, infrastructure, building technology and transportation. Its solutions are used across the manufacturing lifecycle, from designing products and the equipment that manufactures them, to controlling and tracking how that equipment moves, to orchestrating the flow of people, parts and machines across the factory itself.
The company has built a rich portfolio of hardware and software solutions as part of the Siemens Xcelerator platform, which is now at the center of an ecosystem of more than 50 certified partners.
The NVIDIA Omniverse 3D collaboration and simulation development platform brings photorealistic rendering capabilities and advanced AI to the Siemens Xcelerator ecosystem, allowing the digital twin to be represented in full design fidelity and to operate in real time.
Working Side by Side
During Wednesday’s event, Busch and Huang outlined their plans, showed a demo video of these technologies working together, and sat down for an informal fireside chat with Milan Nedeljkovic, a member of the board of management of BMW.
“The digital twin itself is not the challenge,” Nedeljkovic said, outlining BMW’s plans to create sophisticated digital models of its manufacturing process that are linked, in real time, to real-world factories. “The challenge is to link into this digital twin the existing systems one by one, and to have any change in the digital twin being reverted in the original planning tools.”
Busch and Huang began their conversation by sharing the story behind Wednesday’s news, relaying insights from their meeting in November.
“We figured out that when we bring our competencies, our technology, our platforms together, we can do something great,” Busch said. “We can basically go for the full-fledged industrial metaverse… to have faster decisions, real-time decisions with higher confidence.”
Transforming Businesses
With the connection of Siemens Xcelerator and NVIDIA Omniverse, manufacturing customers of any size will be able to immediately analyze issues, identify root causes, and simulate and optimize solutions, thanks to the AI-infused, real-time photorealistic virtual environments, Busch and Huang said.
So, for example, if something goes wrong on the factory floor, teams of users from around the world will be able to meet, virtually, to collaborate and use the connected digital twin to quickly identify, troubleshoot and solve the problem.
The partnership also promises to make factories more efficient and sustainable. Users will more easily be able to turn data streaming from factory-floor programmable logic controllers (PLCs) and sensors into AI models. These models can be used to continuously optimize performance, predict problems, reduce energy consumption, and streamline the flow of parts and materials across the factory floor.
Under the Hood
The partnership brings together complementary technologies and ecosystems, the two leaders said.
Innovating at the intersection of real and digital worlds, Siemens offers the industry’s most comprehensive digital twin by representing the mechanical, electrical and software domains interacting, Busch explained.
NVIDIA Omniverse is a multi-GPU scalable virtual world engine that enables teams to connect 3D design and CAD applications for collaborative design workflows and allows users to build physically accurate virtual worlds for training, testing and operating AI agents such as robots and autonomous machines.
Together, Xcelerator and Omniverse offer a powerful combination of capabilities.
For example, energy and utility plant engineers can virtually navigate through the live digital twin of a facility to analyze the thermal distribution produced by the existing air-conditioning system, drawing on Siemens simulations.
Then they can explore different vent and cooling tower configurations, using Omniverse’s full-design-fidelity visualization capabilities, enabled by real-time ray- and path-traced rendering.
Ultimately, every component inside a factory can be inspected and optimized – and eventually, automated by AI. A robotic conveyor belt could be trained to alert an operator when the conveyor motor is drawing excessive energy due to improperly greased rollers, saving time and maintenance costs, for example.
Advancing Digital Twins
These innovations will reach not just from the cloud to the factory floor, but across industries, Busch and Huang explained.
“You know, if you look at almost every engineering project today of any significant complexity, we simulate the product before we go to production,” Huang said. “And yet, for most plants and most factories, it’s nearly impossible to do that today… and so we needed to create a very large-scale simulation platform – Omniverse.”
The addition of Siemens Xcelerator to the Omniverse ecosystem will enable domain-specific digital twins, using the rich design, manufacturing and operational data from Siemens’ mechanical, electrical, software, IoT and edge solutions in Omniverse.
“The world’s industries represent hundreds of trillions of dollars over time,” Huang said, adding that finding even small efficiencies in such huge systems is a huge opportunity. “That’s one of the reasons why people want to invest and now we have the technology capability for them to do so.”
BMW’s iFACTORY
The two CEOs continued the fireside chat with BMW AG’s Member of the Board of Management, Dr. Milan Nedeljković.
Nedeljković outlined the carmaker’s initiative, dubbed iFACTORY, to make its factories “lean, green and digital.”
And he explained how BMW Group is working with both Siemens and NVIDIA to move this effort forward.
“By the end of next year BMW will offer 13 fully electrified cars,” he said. “So we are changing our equipment, we are changing our production environment, we are changing our processes, and all of that needs good planning, and, again, digitization is a part of it.”
Siemens and NVIDIA are continuing to help BMW with this digital transformation, with the two companies committing to collaborate on developing BMW’s factory in Debrecen, Hungary.
BMW is moving fast, planning to get the factory running by 2025. That means Siemens and NVIDIA, who will help BMW model the factory, will need to move fast, too.
NVIDIA and its partners continued to provide the best overall AI training performance and the most submissions across all benchmarks with 90% of all entries coming from the ecosystem, according to MLPerf benchmarks released today.
The NVIDIA AI platform covered all eight benchmarks in the MLPerf Training 2.0 round, highlighting its leading versatility.
No other accelerator ran all benchmarks, which represent popular AI use cases including speech recognition, natural language processing, recommender systems, object detection, image classification and more. NVIDIA has done so consistently since it submitted to the first round of MLPerf, an industry-standard suite of AI benchmarks, in December 2018.
Leading Benchmark Results, Availability
In its fourth consecutive MLPerf Training submission, the NVIDIA A100 Tensor Core GPU based on the NVIDIA Ampere architecture continued to excel.
Selene — our in-house AI supercomputer based on the modular NVIDIA DGX SuperPOD and powered by NVIDIA A100 GPUs, our software stack and NVIDIA InfiniBand networking — turned in the fastest time to train on four out of eight tests.
NVIDIA A100 also continued its per-chip leadership, proving the fastest on six of the eight tests.
A total of 16 partners submitted results this round using the NVIDIA AI platform. They include ASUS, Baidu, CASIA (Institute of Automation, Chinese Academy of Sciences), Dell Technologies, Fujitsu, GIGABYTE, H3C, Hewlett Packard Enterprise, Inspur, KRAI, Lenovo, MosaicML, Nettrix and Supermicro.
Most of our OEM partners submitted results using NVIDIA-Certified Systems, servers validated by NVIDIA to provide great performance, manageability, security and scalability for enterprise deployments.
Many Models Power Real AI Applications
An AI application may need to understand a user’s spoken request, classify an image, make a recommendation and deliver a response as a spoken message.
These tasks require multiple kinds of AI models to work in sequence, also known as a pipeline. Users need to design, train, deploy and optimize these models fast and flexibly.
That’s why both versatility – the ability to run every model in MLPerf and beyond – and leading performance are vital for bringing real-world AI into production.
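A toy sketch makes the pipeline idea concrete. The four stages below are stand-in functions, not real models; the point is that the stages run in sequence, so every model in the chain must be fast for the whole application to feel responsive:

```python
# Stand-in stages for the spoken-request pipeline described above.
def speech_to_text(audio_path: str) -> str:
    return "recommend me a sci-fi movie"        # pretend ASR output

def understand(text: str) -> dict:
    return {"intent": "recommend", "genre": "sci-fi"}  # pretend NLU

def recommend(query: dict) -> str:
    return "Arrival"                            # pretend recommender

def text_to_speech(text: str) -> str:
    return f"<synthesized audio: '{text}'>"     # pretend TTS

# The stages run in sequence, so total latency is the sum of all four.
reply = text_to_speech(recommend(understand(speech_to_text("user.wav"))))
print(reply)
```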
Delivering ROI With AI
For customers, their data science and engineering teams are their most precious resources, and their productivity determines the return on investment for AI infrastructure. Customers must consider the cost of expensive data science teams, which often plays a significant part in the total cost of deploying AI, as well as the relatively small cost of deploying the AI infrastructure itself.
AI researcher productivity depends on the ability to quickly test new ideas, requiring both the versatility to train any model and the speed afforded by training those models at the largest scale. That’s why organizations focus on overall productivity per dollar to determine the best AI platforms — a more comprehensive view that more accurately represents the true cost of deploying AI.
In addition, the utilization of their AI infrastructure relies on its fungibility, or the ability to accelerate the entire AI workflow — from data prep to training to inference — on a single platform.
With NVIDIA AI, customers can use the same infrastructure for the entire AI pipeline, repurposing it to match the varying demands between data preparation, training and inference, which dramatically boosts utilization, leading to very high ROI.
And, as researchers discover new AI breakthroughs, supporting the latest model innovations is key to maximizing the useful life of AI infrastructure.
NVIDIA AI delivers the highest productivity per dollar as it is universal and performant for every model, scales to any size and accelerates AI from end to end — from data prep to training to inference.
Today’s results provide the latest demonstration of NVIDIA’s broad and deep AI expertise shown in every MLPerf training, inference and HPC round to date.
23x More Performance in 3.5 Years
In the two years since our first MLPerf submission with A100, our platform has delivered 6x more performance. Continuous optimizations to our software stack helped fuel those gains.
Since the advent of MLPerf, the NVIDIA AI platform has delivered 23x more performance in 3.5 years on the benchmark — the result of full-stack innovation spanning GPUs, software and at-scale improvements. It’s this continuous commitment to innovation that assures customers that the AI platform they invest in today, and keep in service for three to five years, will continue to advance to support the state of the art.
In addition, the NVIDIA Hopper architecture, announced in March, promises another giant leap in performance in future MLPerf rounds.
Among the software optimizations fueling the platform’s performance gains, CUDA Graphs — software that helps minimize launch overhead on jobs that run across many accelerators — is used extensively across our submissions. Optimized kernels in libraries like cuDNN, and pre-processing in DALI, unlocked additional speedups. We also implemented full-stack improvements across hardware, software and networking, such as NVIDIA Magnum IO and SHARP, which offloads some AI functions into the network to drive even greater performance, especially at scale.
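For readers unfamiliar with CUDA Graphs, here’s a minimal sketch of the capture-and-replay pattern in PyTorch. Our actual MLPerf submission code lives in the MLPerf repository; this standalone example simply illustrates the technique and assumes an NVIDIA GPU is present:

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
static_input = torch.randn(64, 1024, device="cuda")

# Warm up on a side stream so capture sees steady-state allocations.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture the forward pass once into a graph.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = model(static_input)

# Replaying re-issues every captured kernel with a single launch,
# amortizing per-kernel CPU launch overhead.
static_input.copy_(torch.randn(64, 1024, device="cuda"))
g.replay()
print(static_output.sum().item())
```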
All the software we use is available from the MLPerf repository, so everyone can get our world-class results. We continuously fold these optimizations into containers available on NGC, our software hub for GPU applications, and offer NVIDIA AI Enterprise to deliver optimized software, fully supported by NVIDIA.
Two years after the debut of A100, the NVIDIA AI platform continues to deliver the highest performance in MLPerf 2.0, and is the only platform to submit on every single benchmark. Our next-generation Hopper architecture promises another giant leap in future MLPerf rounds.
Our platform is universal for every model and framework at any scale, and provides the fungibility to handle every part of the AI workload. It’s available from every major cloud and server maker.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.
The June NVIDIA Studio Driver is available for download today, optimizing the latest creative app updates, all with the stability and reliability that users count on.
Creators with NVIDIA RTX GPUs will benefit from faster performance and new features within Blender version 3.2, BorisFX Sapphire release 2022.5 and Topaz Denoise AI 3.7.0.
And this week, NVIDIA Senior Designer Daniel Barnes showcases inspirational artwork In the NVIDIA Studio. Specializing in visual design and 3D content, Barnes covers his creative workflow in designing the galactic 3D scene Journey.
June Boon: Studio Driver Release Supports Blender, Sapphire and Denoise AI Updates
Blender 3.2 adds OpenVDB support, which offers a near-infinite 3D index space, allowing massive Universal Scene Description (USD) files to move in and out of Omniverse while keeping these volumes intact. 3D artists can iterate with larger files, speeding up creative workflows without the need to reduce or convert file sizes.
Blender 3.2 also added a Light Group feature, enabling artists to modify the color and intensity of light sources in the compositor without re-rendering. New Shadow Caustics supports selective rendering of caustics in shadows of refractive objects for further realism. Check out a complete overview of the Blender 3.2 update.
BorisFX Sapphire 2022.5 now supports multi-GPU systems when applying GPU-accelerated visual-effect plugins in Blackmagic Design’s DaVinci Resolve — scaling across GPUs for rendering speeds up to 6x faster.
Topaz Denoise AI 3.7.0 added support for the NVIDIA TensorRT framework, which means RTX GPU owners will benefit from significantly faster inference speeds. When using RAW model denoising features, inference runs up to 6x faster.
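To give a sense of what TensorRT integration involves, here’s a generic sketch of building an optimized engine from a trained model. This is not Topaz’s actual code, and the ONNX file name is hypothetical:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Parse a trained model exported to ONNX (file name is hypothetical).
parser = trt.OnnxParser(network, logger)
with open("denoiser.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

# Enable FP16 to exploit RTX Tensor Cores, then build the engine.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
engine_bytes = builder.build_serialized_network(network, config)

with open("denoiser.plan", "wb") as f:
    f.write(engine_bytes)
```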
This week’s In the NVIDIA Studio artist spotlight sees NVIDIA’s Daniel Barnes share his creative process for the 3D sci-fi scene, Journey.
Barnes’ artwork draws inspiration from various movies and anime, usually a combination of visual revelations. Journey is an extension of Barnes’ latest obsession, the isekai genre in anime, where the protagonist awakens in another world and has to navigate their new and unknown situation.
“This reincarnation narrative can be pretty refreshing, as it almost always is something you can connect with living life and having firsts, or wishing you could redo a particular moment differently with the advantage of 20/20 hindsight,” Barnes noted.
Barnes sketches when inspiration strikes — often at his local coffee shop — which is where he got started with Journey using Adobe Photoshop.
With his GeForce RTX 3060-powered laptop, Barnes benefited from speedy GPU-accelerated features such as Scrubby Zoom, to quickly zoom and adjust fine details, and Flick Panning, to move around the canvas faster, with the freedom to create on the go.
Turning his attention to Autodesk Maya, Barnes built and blocked out foundational geometric shapes for the Journey scene, starting with elements he could reuse from an existing scene. “Absolutely nothing wrong with working smart wherever possible,” Barnes mused. The GPU-accelerated viewport unlocks fast and interactive 3D modeling for Barnes, who was able to set up the building blocks quickly.
Barnes further detailed some 3D models in ZBrush by sculpting with custom brushes. He then ran the ZBrush Remesh feature, creating a new single mesh by combining several existing objects. This simplified applying textures and will make it much easier to animate Journey down the line.
Barnes then used the Omniverse Create app to assemble his physically accurate, photorealistic 3D scene. Back at home with his RTX 3080-powered desktop system, he used the built-in RTX Renderer for interactive visualization within the viewport with virtually no slowdown. Even building at real-world scale, Create’s material presets and lighting allowed Barnes to quickly and efficiently apply realistic visuals with ease.
Barnes bounced back and forth between the handy Adobe Substance 3D Painter Connector and the Omniverse NVIDIA vMaterials library to discover, create and refine textures, applying his unique color scheme style in Create.
With the piece in good shape, Barnes planned for the composite by exporting several versions with depth of field, extra light bloom and fog enabled.
Now, all Journey needed was a vibrant sky to complete the piece. Barnes harnessed the power of AI with the NVIDIA Canvas app, free for RTX GPU owners, turning simple brushstrokes into a realistic, stunning view. Barnes generated outer space in mere minutes, allowing for more concept exploration and saving the time required to search for backgrounds or create one from scratch.
The artist then returned to Adobe Photoshop to add color bleed and stylish details, drop in the Canvas background, and export final renders.
Check out Barnes’ Instagram for more design inspiration.
Enterprises now have a new option for quickly getting started with NVIDIA AI software: the HPE GreenLake edge-to-cloud platform.
NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software. It’s optimized to enable any organization to use AI, and doesn’t require deep AI expertise.
Fully supported by NVIDIA, the software can be deployed anywhere, from the data center to the cloud. And developers can use the cloud-native platform of AI tools and frameworks to streamline development and deployment and quickly build high-performing AI solutions.
With NVIDIA AI Enterprise now available through HPE GreenLake in select countries, IT is relieved from the burden of building the infrastructure to run AI workloads. Organizations can access the NVIDIA AI Enterprise software suite as an on-prem cloud service from HPE, reducing the risk, duration, effort and cost for IT staff to build, deploy and operate an enterprise AI platform.
NVIDIA AI Enterprise is deployed on NVIDIA-Certified HPE ProLiant DL380 and DL385 servers running VMware vSphere with Tanzu. HPE GreenLake enables customers to acquire NVIDIA AI Enterprise on a pay-per-use basis, with the flexibility to scale up or down, and tailor to their needs. The software is fully supported by NVIDIA, ensuring robust operations for enterprise AI deployments.
HPE ProLiant DL380 and DL385 servers are optimized and certified with NVIDIA AI Enterprise software, VMware vSphere with Tanzu and NVIDIA A100 and A30 Tensor Core GPUs to deliver performance that is on par with bare metal for AI training and inference workloads.
Customers can select from predefined packages for training or inference workloads. Packages include NVIDIA AI Enterprise software, NVIDIA Ampere architecture GPUs, VMware vSphere with Tanzu, as well as all setup, installation and configuration, including:
VMware ESXi and VMware vCenter installation
NVIDIA License System installation and configuration
NVIDIA AI Enterprise host software installation
NVIDIA AI Enterprise virtual machine creation and configuration
Data science and AI software installation
Validation
With HPE management services to monitor and manage infrastructure and public clouds, IT can free up resources for more strategic projects. Customers can also take advantage of HPE’s optional AI advisory and solutioning workshops, including AI use case design, implementation, testing and deployment.
The HPE GreenLake platform provides enterprises with centralized control and insights to manage resources, costs and capacity across their on-premises and cloud deployments, with secure, self-service provisioning and management via a common control plane.
Get immediate, short-term, remote access to try NVIDIA AI Enterprise now with NVIDIA LaunchPad. The program gives AI practitioners, data scientists and IT admins immediate access to NVIDIA AI with free hands-on labs featuring AI-powered chatbots, image classification and more.
Taiwan has nearly 85,000 kidney dialysis patients — the highest prevalence in the world based on population density. Taipei Veterans General Hospital (TVGH) is working to improve outcomes for these patients with an AI model that predicts heart failure risk in real time during dialysis procedures.
The hospital’s AI tool displays key factors for risk prediction on a dashboard for clinicians, detects abnormal patterns in the streaming data from dialysis machines, and immediately alerts doctors and nursing staff to intervene.
NVIDIA AI technology, including the NVIDIA Jetson edge AI platform, enables TVGH to analyze patient data in real time, with the proposed model using a combination of dialysis machine data, patient medical records, test results and medication information.
“In this field, early detection and prompt decision-making can save lives,” said Professor Der-Cherng Tarng, chief of the department of medicine at TVGH. “By deploying NVIDIA Jetson next to each dialyzer to perform AI prediction during the procedure, we can achieve real-time insights in a way that’s affordable and effective, even for small-scale dialysis centers.”
The team plans to expand testing of its software to a dozen hospitals across the island, and to seek approval from the Taiwan Food and Drug Administration for clinical use as a medical device.
Detection During Dialysis
Hemodialysis, typically done two or three times a week, is a three- to four-hour procedure for patients with kidney failure in which a machine filters toxins and waste products out of a patient’s blood. Patients can experience serious complications, including heart failure — which can be triggered if a metric known as dry weight isn’t set accurately during the procedure.
Dry weight refers to a person’s natural weight without any extra fluid in the body. Clinicians aim to return patients to their dry weight after each dialysis session. But estimating dry weight is subjective, since patients with advanced kidney disease typically have excess fluid in their bodies, meaning they start out with a weight higher than their dry weight.
Overestimating dry weight can cause hypertension, leading to complications including heart failure or other macrovascular complications. Underestimating it can remove too much fluid from the body, resulting in dehydration and a lower blood pressure.
This makes it critical that clinicians monitor multiple data points during dialysis, including blood flow rate, pressure in the arteries and veins, and ultrafiltration rate — a metric that represents the amount of fluid removed during the treatment.
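The underlying arithmetic is simple, which is what makes the dry-weight estimate so consequential. A toy calculation follows (illustrative numbers only, not TVGH’s model):

```python
def ultrafiltration_rate_ml_per_hr(pre_weight_kg: float,
                                   dry_weight_kg: float,
                                   session_hours: float) -> float:
    """Target fluid-removal rate, assuming 1 kg of excess fluid ~ 1 liter."""
    excess_fluid_ml = (pre_weight_kg - dry_weight_kg) * 1000.0
    return excess_fluid_ml / session_hours

# A patient arriving at 72.5 kg with an estimated 70.0 kg dry weight
# needs ~2.5 L removed over a 4-hour session, i.e. 625 mL/hour.
# Misjudging dry weight by 1 kg shifts the target by a full liter.
print(ultrafiltration_rate_ml_per_hr(72.5, 70.0, 4.0))  # 625.0
```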
TVGH’s risk assessment tool processes these values along with medical records, blood test results and medication information — assessing up to 200 sets of dynamic physiological and dialysis machine values. These key statistics are displayed on a dashboard for doctors and nurses, along with a metric that predicts heart failure risk for each patient.
This dashboard displays the health status of all dialysis patients, showing the patient’s severity and risk category in different colors. For each patient, it shows a real-time stream of dialysis machine data and the AI model’s assessment of whether or not the patient’s iron levels are normal.
The hospital’s tool was built to identify abnormal patterns in a patient’s data using multiple AI algorithms including decision trees, gradient boosting and convolutional neural networks. It was trained on a dataset of 3 million health records. The team recently added additional predictive indicators to the tool, including hemoglobin level and chest X-ray image analysis.
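As a rough sketch of one ingredient of such a tool, here is a minimal gradient-boosting classifier over tabular dialysis-style features, trained on synthetic data. The features, data and model choices are stand-ins; TVGH’s actual pipeline is not public:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns stand in for streaming values: blood flow rate, venous
# pressure, ultrafiltration rate, hemoglobin and so on.
X = rng.normal(size=(5000, 8))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y.astype(int), random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```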
Adopting a convolutional neural network model improved the AI’s accuracy by 95%.
In addition to predicting heart failure risk, TVGH’s AI model has reduced the deviation rate in clinicians’ assessment of a patient’s dry weight by 80%, an accuracy boost that helps lessen the risk of complications.
AI, Edge Computing Power Real-Time Results
TVGH’s IT team adopted the SAS Viya analytics engine along with NVIDIA CUDA-X libraries to develop its AI model.
While the team’s electronic hemodialysis system could automatically record the data generated by dialyzers, its initial workflow still required healthcare staff to record physiological measurements every 30 minutes, sending the data to servers over a Bluetooth connection.
“A half-hour window between data analysis still left gaps where a patient may begin experiencing complications that can lead to heart failure,” said Shou-Ming Ou, visiting staff in the nephrology division at TVGH. “So our team worked to find a real-time solution that could receive and compute data generated by dialysis machines within milliseconds.”
To achieve real-time AI inference using streaming data over the course of a four-hour dialysis session, TVGH adopted the Aetina Edge AI Starter Package featuring NVIDIA Jetson Xavier NX, which packs the power to process up to 21 trillion operations per second in a compact module that consumes just 10 watts. The team used NVIDIA TensorRT software to optimize their AI prediction model for inference on the Jetson platform.
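At inference time on the Jetson, running a TensorRT-optimized model is lightweight. Here’s a minimal sketch, using PyTorch tensors for the device buffers; the engine file, tensor shapes and binding order are assumptions, and this is not TVGH’s code:

```python
import torch
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("risk_model.plan", "rb") as f:          # hypothetical engine file
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Assumed shapes: 200 physiological/machine values in, one risk score out.
features = torch.randn(1, 200, device="cuda")
risk = torch.empty(1, 1, device="cuda")

# execute_v2 takes raw device pointers for each binding, in engine order.
context.execute_v2([features.data_ptr(), risk.data_ptr()])
print("heart-failure risk score:", risk.item())
```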
By shifting processing to the edge, NVIDIA Jetson also helps TVGH reduce the computation workload on its main servers, freeing up resources to support other AI teams training high-quality medical models.
In addition to the heart failure risk prediction model, the hospital is working on additional AI projects accelerated with the NVIDIA Parabricks genomics software, the NVIDIA FLARE federated learning workflow and the NeMo Megatron framework for natural language processing.
You may not know of Todd Mozer, but it’s likely you have experienced his company: It has enabled voice and vision AI for billions of consumer electronics devices worldwide.
Sensory, founded in 1994 in Silicon Valley, is a pioneer of the compact models used in mobile devices from the industry’s giants. Today Sensory brings interactivity to all kinds of voice-enabled electronics. LG and Samsung have used Sensory not just in their mobile phones, but also in refrigerators, remote controls and wearables.
“What if I want my talking microwave to get me any recipe on the internet, to walk me through the recipe? That’s where the hybrid computing approach can come in,” said Mozer, CEO and founder.
Hybrid computing is the dual approach of using cloud and on-premises computing resources.
Devices are getting ever more powerful, and special-purpose inference accelerators are hitting the market. But better models tend to be bigger and require even more memory, so edge-based processing is not always the best solution.
Cloud connections for devices can deliver improved performance to these compact models. Over-the-air deployments of updates can apply to wearable devices, mobile phones, cars and much more, said Mozer.
“Having a cloud connection offers updates for smaller, more accurate on-device models,” he said.
This pays off in many improved device features. Sensory offers its customers speech-to-text, text-to-speech, wake word verification, natural language understanding, facial ID recognition, and speaker and sound identification.
Sensory is also working with NVIDIA Jetson edge AI modules to bring the power of its Sensory Cloud to the larger on-device implementations.
Tapping Triton for Inference
The company’s Sensory Cloud runs voice and vision models with NVIDIA Triton. Sensory’s custom cloud model management infrastructure built around Triton allows different customers to run different model versions, deploy custom models, enable automatic updates, and monitor usage and errors.
It’s deployable as a container by Sensory customers for on-premises or cloud-based implementations. It can also be used entirely privately, with no data going to Sensory.
Triton provides Sensory with a special-purpose machine learning task library for all Triton communications and rapid deployment of new models with minimal coding. It also enables an asynchronous actor pipeline for easily assembling and scaling new pipelines. Triton’s dynamic batching drives higher GPU throughput, and its performance analysis aids inference optimization.
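From a client’s perspective, querying a Triton deployment takes only a few lines. A minimal sketch follows; the server URL, model name and tensor names are hypothetical, and Sensory’s own API differs:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One second of 16 kHz audio as the (assumed) model input.
audio = np.zeros((1, 16000), dtype=np.float32)
inp = httpclient.InferInput("AUDIO", list(audio.shape), "FP32")
inp.set_data_from_numpy(audio)

# Triton's dynamic batching can coalesce many such concurrent
# requests into one GPU batch on the server side.
result = client.infer(model_name="speech_to_text", inputs=[inp])
print(result.as_numpy("TRANSCRIPT"))
```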
Sensory is a member of NVIDIA Inception, a global program designed to support cutting-edge startups.
Enlisting NeMo for Hybrid Cloud Models
Sensory has expanded on NVIDIA NeMo to deliver improvements in accuracy and functionality for all of its cloud technologies.
NeMo-enhanced functions include its proprietary feature extractor, audio streaming optimizations, customizable vocabularies, multilingual models and much more.
The company’s NeMo models now support 17 languages. And with proprietary Sensory improvements, its word error rates consistently outperform the best speech-to-text systems, according to the company.
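For reference, loading a stock NeMo speech-to-text model takes just a couple of lines. This sketch uses a public NVIDIA checkpoint, since Sensory’s proprietary NeMo-based models aren’t public, and the audio file name is hypothetical:

```python
import nemo.collections.asr as nemo_asr

# Download a pretrained English CTC model from NGC and transcribe a clip.
asr = nemo_asr.models.EncDecCTCModel.from_pretrained("stt_en_citrinet_1024")
print(asr.transcribe(["sample.wav"]))
```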
“Sensory is bringing about enhanced features and functionality with NVIDIA Triton and NeMo software,” said Mozer. “This type of hybrid-cloud setup offers customers new AI-driven capabilities.”
To foster climate action for a healthy global environment, NVIDIA is working with the United Nations Satellite Centre (UNOSAT) to apply the powers of deep learning and AI.
The effort supports the UN’s 2030 Agenda for Sustainable Development, which has at its core 17 interrelated Sustainable Development Goals. These SDGs — which include “climate action” and “sustainable cities and communities” — serve as calls to action for all UN member states to bolster global well-being.
The collaboration between UNOSAT, part of the United Nations Institute for Training and Research, and NVIDIA is initially focused on boosting climate-related disaster management by using AI for Earth Observation. AI4EO, as it’s known, is a term that encompasses initiatives using AI to help monitor and assess the planet’s changes.
To fast track research and development for its AI4EO efforts, UNOSAT will integrate its satellite imagery technology infrastructure with NVIDIA’s accelerated computing platform. The AI-powered satellite imagery system will collect and analyze geospatial information to provide near-real-time insights about floods, wildfires and other climate-related disasters.
In addition, UNOSAT has launched an educational module that builds upon an NVIDIA Deep Learning Institute (DLI) course on applying deep learning methods to generate accurate flood detection models.
“Working with NVIDIA will enable us to close the loop from AI research to implementation of climate solutions in the shortest time possible, ensuring that vulnerable populations can benefit from the technology,” said Einar Bjørgo, director of UNOSAT.
AI-Powered Satellite Imagery Analysis
For tasks like evaluating the impact of a tropical cyclone in the Philippines or a volcanic eruption in Tonga, UNOSAT’s emergency mapping service uses computer vision and satellite imagery analysis to gain accurate information about complex disasters.
Near-real-time analysis is key to managing climate-disaster events. Humanitarian teams can use the data-driven insights provided by AI to take rapid, effective action in combating disasters. The data is also used to inform sustainable development policies, develop users’ capacities and strengthen climate resilience overall.
UNOSAT will supercharge its satellite imagery technology infrastructure with NVIDIA DGX systems, which enable AI development at scale — as well as the NVIDIA EGX platform, which delivers the power of accelerated computing from the data center to the edge.
NVIDIA technology speeds up AI-based flood detection by 7x, covering larger areas with greater accuracy, according to UNOSAT.
NVIDIA DLI Course on Disaster Risk Monitoring
In addition to powerful technology, a skilled workforce is essential to using AI and data science to analyze and prevent climate events from becoming humanitarian disasters.
“NVIDIA and UNOSAT have a unique opportunity to combat the impact of climate change and advance the UN’s SDGs, with a launching point of training data scientists to develop and deploy GPU-accelerated models that improve flood prediction,” said Keith Strier, vice president of global AI initiatives at NVIDIA.
UNOSAT has developed a module for the Deep Learning Institute’s free online course that covers how to build a deep learning model to automate the detection of flood events.
Called Disaster Risk Monitoring Using Satellite Imagery, it’s the first NVIDIA DLI course focused on climate action for the global public sector community — with many additional climate-action-related courses being planned.
UNOSAT’s module — based on a real UN case study — highlights an example of a flood in Nepal.
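To make the course’s core idea concrete, here’s a toy per-pixel flood-segmentation network. It is a sketch only, not the DLI course code; the input channels, tile size and architecture are assumptions:

```python
import torch
import torch.nn as nn

class TinyFloodNet(nn.Module):
    """Maps a satellite image tile to a per-pixel flood/no-flood logit."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one logit per pixel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

tile = torch.randn(1, 3, 256, 256)           # one RGB tile (assumed size)
water_prob = torch.sigmoid(TinyFloodNet()(tile))
print(water_prob.shape)                      # torch.Size([1, 1, 256, 256])
```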
In collaboration with NVIDIA, UNOSAT is offering the module for free with the goal of upskilling data scientists worldwide to harness accelerated computing to predict and respond to climate-related disasters.
“We aim to democratize access to accelerated computing to help nations train more accurate deep learning models that better predict and respond to a full spectrum of humanitarian and natural disasters,” Strier said.
Get started with the course, which is now available.
Learn more about how NVIDIA technology is used to improve the planet and its people.
Finally, there’s a family car any kid would want to be seen in.
Beijing-based startup Li Auto this week rolled out its second electric vehicle, the L9. It’s a full-size SUV decked out with the latest intelligent driving technology.
With AI features and an extended battery range of more than 800 miles, the L9 promises to elevate the playing field for luxury family vehicles.
Li Auto is deploying its newest automated driving features with the expansion of its vehicle lineup, using a software-defined compute platform built on two NVIDIA DRIVE Orin systems-on-a-chip (SoCs).
With more than 500 trillion operations per second (TOPS), the L9’s compute platform can run various deep neural networks simultaneously and in real time, all while ensuring the redundancy and diversity necessary for safety.
First-Class Safety and Security
As a top-line luxury model, the L9 sports only the best when it comes to AI-assisted driving technology.
All Li Auto vehicles come standard with the electric automaker’s advanced driver assistance system, Li AD Max. To achieve surround perception, the system uses one forward-facing lidar, 11 cameras, one radar and 12 ultrasonic sensors, as well as DRIVE Orin SoCs.
In addition to handling the large number of applications and deep neural networks necessary for autonomous driving, DRIVE Orin is architected to achieve systematic safety standards such as ISO 26262 ASIL-D. Its dual processors provide fallback redundancy for each other, further ensuring safe operation.
The L9’s high-performance sensors also enable round-the-clock security features, monitoring both the car’s interior and exterior.
Innovative Infotainment
Inside the vehicle, five 3D-capable screens transform the in-cabin experience.
In the cockpit, a combined head-up display and confidence view enhance safety for the person driving. The head-up display projects key driving information onto the front windshield, and the interactive visualization feature of the vehicle’s perception system is located above the steering wheel, keeping the driver’s attention on the road.
The L9’s screens for central control, passenger entertainment and rear cabin entertainment are 15.7-inch, 3K-resolution, automotive-grade OLED displays that deliver first-class visual experiences for every occupant.
Passengers can also interact with the intelligent in-cabin system via interior sensors and natural language processing.
Designed to optimize all driver and passenger experiences, the L9 represents the top of the line for luxury family vehicles.