Taking the Heat Off: AI Temperature Screening Aids Businesses Amid Pandemic

As businesses and schools consider reopening around the world, they’re taking safety precautions to mitigate the lingering threat of COVID-19 — often taking the temperature of each individual entering their facilities.

Fever is a common warning sign for the virus (and the seasonal flu), but manual temperature-taking with infrared thermometers takes time and requires workers stationed at a building’s entrances to collect temperature readings. AI solutions can speed the process and make it contactless, sending real-time alerts to facilities management teams when visitors with elevated temperatures are detected.

Central California-based IntelliSite Corp. and its recently acquired startup, Deep Vision AI, have developed a temperature screening application that can scan over 100 people a minute. Temperature readings are accurate to within a tenth of a degree Celsius. And customers can get up and running with the app within a few hours, with an AI platform running on NVIDIA GPUs on premises or in the cloud for inference.

“Our software platform has multiple AI modules, including foot traffic counting and occupancy monitoring, as well as vehicle recognition,” said Agustin Caverzasi, co-founder of Deep Vision AI, and now president of IntelliSite’s AI business unit. “Adding temperature detection was a natural, easy step for us.”

The temperature screening tool has been deployed in several healthcare facilities and is being tested at U.S. airports, amusement parks and education facilities. Deep Vision is part of NVIDIA Inception, a program that helps startups working in AI and data science get to market faster.

“Deep Vision AI joined Inception at the very beginning, and our engineering and research teams received support with resources like GPUs for training,” Caverzasi said. “It was really helpful for our company’s initial development.”

COVID Risk or Coffee Cup? Building AI for Temperature Tracking

As the pandemic took hold, and social distancing became essential, Caverzasi’s team saw that the technology they’d spent years developing was more relevant than ever.

“The need to protect people from harmful viruses has never been greater,” he said. “With our preexisting AI modules, we can monitor in real time the occupancy levels in a store or a hospital’s waiting room, and trigger alerts before the maximum occupancy is reached in a given area.”

With governments and health organizations advising temperature checking, the startup applied its existing AI capabilities to thermal cameras for the first time. In doing so, the team had to fine-tune the model so it wouldn’t be fooled by false positives — for example, when a person shows up red on a thermal camera because of a cup of hot coffee.

This AI model is paired with one of IntelliSite’s IoT solutions called human-based monitoring, or hBM. The hBM platform includes a hardware component: a mobile cart mounted with a thermal camera, monitor and Dell Precision tower workstation for inference. The temperature detection algorithms can now scan five people at the same time.

Double Quick: Faster, Easier Screening

The workstation uses the NVIDIA Quadro RTX 4000 GPU for real-time inference on thermal data from the live camera view. This reduces manual scanning time for healthcare customers by 80 percent, and drops the total cost of conducting temperature scans by 70 percent.

Facilities using hBM can also choose to access data remotely and monitor multiple sites, using either an on-premises Dell PowerEdge R740 server with NVIDIA T4 Tensor Core GPUs, or GPU resources through the IntelliSite Cloud Engine.

If businesses and hospitals are also taking a second temperature measurement with a thermometer, these readings can be logged in the hBM system, which can maintain records for over a million screenings. Facilities managers can configure alerts via text message or email when high temperatures are detected.
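
To make the alerting flow concrete, here’s a minimal Python sketch of how elevated readings might be logged and escalated. The threshold, data fields and mail relay are illustrative assumptions, not IntelliSite’s actual hBM implementation, and any alert would be confirmed with a clinical-grade measurement per the FDA guidance noted below.

```python
import smtplib
from dataclasses import dataclass
from email.message import EmailMessage
from typing import Optional

FEVER_THRESHOLD_C = 38.0  # hypothetical cutoff; real deployments are configurable

@dataclass
class Reading:
    person_id: str
    temperature_c: float                  # estimate from the thermal camera
    confirmed_c: Optional[float] = None   # optional follow-up thermometer reading

def screen(readings, alert_to):
    """Log every reading and alert on elevated estimates pending confirmation."""
    log = []
    for r in readings:
        log.append(r)
        if r.temperature_c >= FEVER_THRESHOLD_C:
            send_alert(alert_to, r)
    return log

def send_alert(to_addr, reading):
    msg = EmailMessage()
    msg["Subject"] = f"Elevated temperature: {reading.temperature_c:.1f} C"
    msg["To"] = to_addr
    msg.set_content("Please confirm with a clinical-grade thermometer.")
    with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
        server.send_message(msg)
```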

The Deep Vision developer team, based in Córdoba, Argentina, also had to adapt their AI models that use regular camera data to detect people wearing face masks. They use the NVIDIA Metropolis application framework for smart cities, including the NVIDIA DeepStream SDK for intelligent video analytics and NVIDIA TensorRT to accelerate inference.
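
For a sense of what the TensorRT piece of such a pipeline involves, the sketch below builds an inference engine from an ONNX model using TensorRT’s Python API as it stood around version 7 (contemporary with this post). The model filename is a placeholder, and this is a generic build pattern rather than Deep Vision’s actual script.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    """Parse an ONNX graph and build a TensorRT engine, enabling FP16 if available."""
    builder = trt.Builder(TRT_LOGGER)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB of build scratch space
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)
    return builder.build_engine(network, config)

engine = build_engine("mask_detector.onnx")  # hypothetical model file
```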

Deep Vision and IntelliSite next plan to integrate the temperature screening AI with facial recognition models, so customers can use the application for employee registration once their temperature has been checked.

IntelliSite is a member of the NVIDIA Clara Guardian ecosystem, bringing edge AI to healthcare facilities. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the pandemic.

FDA disclaimer: Thermal measurements are designed as a triage tool and should not be the sole means of diagnosing high-risk individuals for any viral threat. Elevated thermal readings should be confirmed with a secondary, clinical-grade evaluation tool. The FDA recommends screening individuals one at a time, not in groups.

HPE’s Jared Dame on How AI, Data Science Driving Demand for Powerful New Workstations

Smartphones, smart devices, the cloud — if it seems like AI is everywhere, that’s because it is.

That makes powerful workstations, able to crunch the ever-growing quantities of data on which modern AI is built, more essential than ever.

Jared Dame, Hewlett Packard Enterprise’s director of business development and strategy for AI, data science and edge technologies, spoke to AI Podcast host Noah Kravitz about the role HPE’s workstations play in cutting-edge AI and data science.

In the AI pipeline, Dame explained, workstations can do just about everything — from training to inference. The biggest demand for workstations is now coming from biopharmaceutical companies, the oil and gas industry and the federal government.

Key Points From This Episode:

  • Z by HP workstations feature hundreds of thousands of sensors that predict problems within a machine up to a month in advance, so customers don’t experience a loss of data or time.
  • The newest HP ZBook Studio, equipped with NVIDIA Quadro graphics, will be launching this fall.

Tweetables:

“Z by HP is selling literally everywhere. Every vertical market does data science, every vertical market is adopting various types of AI.” — Jared Dame [5:47]

“We’re drinking our own Kool-Aid — we use our own machines. And we’re using the latest and greatest technologies, from CUDA and TensorFlow to traditional programming languages.” — Jared Dame [18:36]

You Might Also Like

Lenovo’s Mike Leach on the Role of the Workstation in Modern AI

Whether it’s the latest generation of AI-enabled mobile apps or robust business systems powered on banks of powerful servers, chances are the technology was built first on a workstation. Lenovo’s Mike Leach describes how these workhorses are adapting to support a plethora of new kinds of AI applications.

Serkan Piantino’s Company Makes AI for Everyone

Spell, founded by Serkan Piantino, is making machine learning as easy as ABC. Piantino, CEO of the New York-based startup, explained how he’s bringing compute power to those who don’t have easy access to GPU clusters.

SAS Chief Operating Officer Oliver Schabenberger

SAS Chief Operating Officer Oliver Schabenberger spoke about how organizations can use AI and related technologies.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

It’s Not Pocket Science: Undergrads at Hackathon Create App to Evaluate At-Home Physical Therapy Exercises

The four undergrads met for the first time at the Stanford TreeHacks hackathon, became close friends, and developed an AI-powered app to help physical therapy patients ensure correct posture for their at-home exercises — all within 36 hours.

Back in February, just before the lockdown, Shachi Champaneri, Lilliana de Souza, Riley Howk and Deepa Marti happened to sit across from each other at the event’s introductory session and almost immediately decided to form a team for the competition.

Together, they created PocketPT, an app that lets users know whether they’re completing a physical therapy exercise with the correct posture and form. It captured two prizes against a crowded field, and inspired them to continue using AI to help others.

The app’s AI model uses the NVIDIA Jetson Nano developer kit to detect a user doing the tree pose, a position known to increase shoulder muscle strength and improve balance. The Jetson Nano performs image classification, so the model can tell whether the pose is being done correctly based on the 100-plus images it was trained on, which the team took of themselves. The app then provides feedback, letting users know if they should adjust their form.
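
For readers curious what a pose classifier of this kind entails, here’s a generic PyTorch transfer-learning sketch: fine-tune a pretrained backbone on two folders of labeled images. The folder layout, model choice and hyperparameters are assumptions for illustration, not the team’s actual training code.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder layout: pose_data/correct/*.jpg, pose_data/incorrect/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("pose_data", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # correct vs. incorrect pose

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new head
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```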

“It can be taxing for patients to go to the physical therapist often, both financially and physically,” said Howk.

Continuing exercises at home is a crucial part of recovery for physical therapy patients, but doing them incorrectly can actually hinder progress, she explained.

Bringing the Idea to Life

In the months leading up to the hackathon, Howk, a rising senior at the University of Alabama, was interning in Los Angeles, where there’s a yoga studio on virtually every corner. She’d arrived at the competition with the idea to create some kind of yoga app, but it wasn’t until the team came across the NVIDIA table at the hackathon’s sponsor fair that they realized the idea’s potential to expand and help those in need.

“A demo of the Jetson Nano displayed how the system can track bodily movement down to the joint,” said Marti, a rising sophomore at UC Davis. “That’s what sparked the possibility of making a physical therapy app, rather than limiting it to yoga.”

None of the team members had prior experience working with deep learning and computer vision, so they faced the challenge of learning how to implement the model in such a short period of time.

“The NVIDIA mentors were really helpful,” said Champaneri, a rising senior at UC Davis. “They put together a tutorial guide on how to use the Nano that gave us the right footing and outline to follow and implement the idea.”

Over the first night of the hackathon, the team took NVIDIA’s Deep Learning Institute course on getting started with AI on the Jetson Nano, grasping the basics of deep learning. The next morning, they began hacking and training the model with images of themselves displaying correct versus incorrect exercise poses.

Just 36 hours after the idea first emerged, PocketPT was born.

Winning More Than Just Awards

The most exciting part of the weekend was finding out the team had made it to final pitches, according to Howk. They presented their project in front of a crowd of 500 and later found out that it had won the two prizes.

The hackathon attracted 197 projects. Competing against 65 other projects in the Medical Access category — many of which used cloud or other platforms — their project took home the category’s grand prize. It was also named “Best Use of Jetson Hack,” beating the 11 other groups that borrowed a Jetson for their projects.

But the quartet is looking to do more with their app than win awards.

Because of the fast-paced nature of the hackathon, the team was only able to fully implement one pose in PocketPT, with others still in the works. However, the team is committed to expanding the product and promoting their overall mission of making physical therapy easily accessible to all.

While the hackathon took place just before the COVID outbreak in the U.S., the team highlighted how their project seems to be all the more relevant now.

“We didn’t even realize we were developing something that would become the future, which is telemedicine,” said de Souza, a rising senior at Northwestern University. “We were creating an at-home version of PT, which is very much needed right now. It’s definitely worth our time to continue working on this project.”

Read about other Jetson projects on the Jetson community projects page and get acquainted with other developers on the Jetson forum page.

Learn how to get started on a Jetson project of your own on the Jetson developers page.

NVIDIA Breaks 16 AI Performance Records in Latest MLPerf Benchmarks

NVIDIA delivers the world’s fastest AI training performance among commercially available products, according to MLPerf benchmarks released today.

The A100 Tensor Core GPU demonstrated the fastest performance per accelerator on all eight MLPerf benchmarks. For overall fastest time to solution at scale, the DGX SuperPOD system, a massive cluster of DGX A100 systems connected with HDR InfiniBand, also set eight new performance milestones. The real winners are customers applying this performance today to transform their businesses faster and more cost effectively with AI.

This is NVIDIA’s third consecutive showing, and its strongest yet, in training tests from MLPerf, an industry benchmarking group formed in May 2018. NVIDIA set six records in the first MLPerf training benchmarks in December 2018 and eight in July 2019.

NVIDIA set records in the category customers care about most: commercially available products. We ran tests using our latest NVIDIA Ampere architecture as well as our Volta architecture.

The NVIDIA DGX SuperPOD system set new milestones for AI training at scale.

NVIDIA was the only company to field commercially available products for all the tests. Most other submissions used the preview category for products that may not be available for several months or the research category for products not expected to be available for some time.

NVIDIA Ampere Ramps Up in Record Time

In addition to breaking performance records, the A100, the first processor based on the NVIDIA Ampere architecture, hit the market faster than any previous NVIDIA GPU. At launch, it powered NVIDIA’s third-generation DGX systems, and it became publicly available in a Google cloud service just six weeks later.

Also helping meet the strong demand for A100 are the world’s leading cloud providers, such as Amazon Web Services, Baidu Cloud, Microsoft Azure and Tencent Cloud, as well as dozens of major server makers, including Dell Technologies, Hewlett Packard Enterprise, Inspur and Supermicro.

Users across the globe are applying the A100 to tackle the most complex challenges in AI, data science and scientific computing.

Some are enabling a new wave of recommendation systems or conversational AI applications while others power the quest for treatments for COVID-19. All are enjoying the greatest generational performance leap in eight generations of NVIDIA GPUs.

The NVIDIA Ampere architecture swept all eight tests of commercially available accelerators.

A 4x Performance Gain in 1.5 Years

The latest results demonstrate NVIDIA’s focus on continuously evolving an AI platform that spans processors, networking, software and systems.

For example, the tests show that, at equivalent throughput rates, today’s DGX A100 system delivers up to 4x the performance of the system that used V100 GPUs in the first round of MLPerf training tests. Meanwhile, the original DGX-1 system based on NVIDIA V100 can now deliver up to 2x higher performance thanks to the latest software optimizations.

These gains came in less than two years from innovations across the AI platform. Today’s NVIDIA A100 GPUs — coupled with software updates for CUDA-X libraries — power expanding clusters built with Mellanox HDR 200Gb/s InfiniBand networking.

HDR InfiniBand enables extremely low latencies and high data throughput, while offering smart deep learning computing acceleration engines via the scalable hierarchical aggregation and reduction protocol (SHARP) technology.

NVIDIA evolves its AI performance with new GPUs, software upgrades and expanding system designs.

NVIDIA Shines in Recommendation Systems, Conversational AI, Reinforcement Learning

The MLPerf benchmarks — backed by organizations including Amazon, Baidu, Facebook, Google, Harvard, Intel, Microsoft and Stanford — constantly evolve to remain relevant as AI itself evolves.

The latest benchmarks featured two new tests and one substantially revised test, and NVIDIA excelled in all of them. One ranked performance in recommendation systems, an increasingly popular AI task; another tested conversational AI using BERT, one of the most complex neural network models in use today. Finally, the reinforcement learning test used MiniGo with the full-size 19×19 Go board; it was the most complex test in this round, involving diverse operations from game play to training.

Customers using NVIDIA AI for conversational AI and recommendation systems.

Companies are already reaping the benefits of this performance on these strategic applications of AI.

Alibaba hit a $38 billion sales record on Singles Day in November, using NVIDIA GPUs to deliver more than 100x more queries/second on its recommendation systems than CPUs. For its part, conversational AI is becoming the talk of the town, driving business results in industries from finance to healthcare.

NVIDIA is delivering both the performance needed to run these powerful jobs and the ease of use to embrace them.

Software Paves Strategic Paths to AI

In May, NVIDIA announced two application frameworks, Jarvis for conversational AI and Merlin for recommendation systems. Merlin includes the HugeCTR framework for training that powered the latest MLPerf results.

These are part of a growing family of application frameworks for markets including automotive (NVIDIA DRIVE), healthcare (Clara), robotics (Isaac) and retail/smart cities (Metropolis).

NVIDIA application frameworks simplify enterprise AI from development to deployment.

DGX SuperPOD Architecture Delivers Speed at Scale

NVIDIA ran MLPerf tests for systems on Selene, an internal cluster based on the DGX SuperPOD, its public reference architecture for large-scale GPU clusters that can be deployed in weeks. That architecture extends the design principles and best practices used in the DGX POD to serve the most challenging problems in AI today.

Selene recently debuted on the TOP500 list as the fastest industrial system in the U.S. with more than an exaflop of AI performance. It’s also the world’s second most power-efficient system on the Green500 list.

Customers are already using these reference architectures to build DGX PODs and DGX SuperPODs of their own. They include HiPerGator, the fastest academic AI supercomputer in the U.S., which the University of Florida will feature as the cornerstone of its cross-curriculum AI initiative.

Meanwhile, a top supercomputing center, Argonne National Laboratory, is using DGX A100 to find ways to fight COVID-19. Argonne was the first of a half-dozen high performance computing centers to adopt A100 GPUs.

Many users have adopted NVIDIA DGX PODs.

DGX SuperPODs are already driving business results for companies like Continental in automotive, Lockheed Martin in aerospace and Microsoft in cloud-computing services.

These systems are all up and running thanks in part to a broad ecosystem supporting NVIDIA GPUs and DGX systems.

Strong MLPerf Showing by NVIDIA Ecosystem

Of the nine companies submitting results, seven submitted with NVIDIA GPUs, including cloud service providers (Alibaba Cloud, Google Cloud, Tencent Cloud) and server makers (Dell, Fujitsu and Inspur), highlighting the strength of NVIDIA’s ecosystem.

Many partners leveraged the NVIDIA AI platform for MLPerf submissions.

Many of these partners used containers on NGC, NVIDIA’s software hub, along with publicly available frameworks for their submissions.

The MLPerf partners represent part of an ecosystem of nearly two dozen cloud-service providers and OEMs with products or plans for online instances, servers and PCIe cards using NVIDIA A100 GPUs.

Test-Proven Software Available on NGC Today

Much of the same software NVIDIA and its partners used for the latest MLPerf benchmarks is available to customers today on NGC.

NGC is host to several GPU-optimized containers, software scripts, pre-trained models and SDKs. They empower data scientists and developers to accelerate their AI workflows across popular frameworks such as TensorFlow and PyTorch.

Organizations are embracing containers to save time getting to business results that matter. In the end, that’s the most important benchmark of all.

Artist’s rendering at top: NVIDIA’s new DGX SuperPOD, built in less than a month and featuring more than 2,000 NVIDIA A100 GPUs, swept every MLPerf benchmark category for at-scale performance among commercially available products. 

Taiwanese Supercomputing Center Advances Real-Time Rendering from the Cloud with NVIDIA RTX Server and Quadro vDWS

As the stunning visual effects in movies and television advance, so do audience expectations for ever more spectacular and realistic imagery.

The National Center for High-performance Computing, home to Taiwan’s most powerful AI supercomputer, is helping video artists keep up with increasing industry demands.

NCHC delivers computing and networking platforms for filmmakers, content creators and artists. To provide them with high-quality, accelerated rendering and simulation services, the center needed some serious GPU power.

So it chose the NVIDIA RTX Server, including Quadro RTX 8000 and RTX 6000 GPUs and NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) software, to bring accelerated rendering performance and real-time ray tracing to its customers.

NVIDIA GPUs and VDI: Driving Force Behind the Scenes

One of NCHC’s products, Render Farm, is built on NVIDIA Quadro RTX GPUs with Quadro vDWS software. It provides users with real-time rendering for high-resolution image processing.

A cloud computing platform, Render Farm enables users to rapidly render large 3D models. Its efficiency is stunning: it can reduce the time needed for opening files from nearly three hours to only three minutes.

“Last year, a team from Hollywood that reached out to us for visual effects production anticipated spending three days working on scenes,” said Chia-Chen Kuo, director of the Arts Technology Computing Division at NCHC. “But with the Render Farm computing platform, it only took one night to finish the work. That was far beyond their expectations.”

NCHC also aims to create a powerful cloud computing environment that can be accessed by anyone around the world. Quadro vDWS technology plays an important role in allowing teams to collaborate in this environment and makes its HPC resources widely available to the public.

With the rapid growth of data, physical hardware systems can’t keep up with data size and complexity. But Quadro vDWS technology makes it easy and convenient for anyone to securely access data and applications from anywhere, on any device.

Using virtual desktop infrastructure, NCHC’s Render Farm can provide up to 100 virtual workstations so users can do image processing at the same time. They only need a Wi-Fi or 4G connection to access the platform.

VMware vSphere and Horizon technology is integrated into Render Farm to provide on-demand virtual remote computing platform services. This virtualizes the HPC environment through NVIDIA virtual GPU technology and reduces by 10x the time required for redeploying the rendering environment. It also allows flexible switching between Windows and Linux operating systems.

High-Caliber Performance for High-Caliber Performers 

Over 200 video works have already been produced with NCHC’s technology services.

NCHC recently collaborated with acclaimed Taiwanese theater artist Huang Yi on one of his most popular productions, Huang Yi and KUKA. The project, which combined modern dance with visual arts and technology, was performed in over 70 locations worldwide, including the Cloud Gate Theater in northwest Taipei, the Ars Electronica Festival in Austria and the TED Conference in Vancouver.

During the program, Huang coordinated a dance with his robot companion KUKA, whose arm carried a camera to capture the dance movements. Those images were sent to the NCHC Render Farm in Taichung, 170 km away, to be processed in real time before being projected back to the robot on stage — with less than one second of end-to-end latency.

“I wanted to thoroughly immerse audiences in the performance so they can sense the flow of emotions. This requires strong and stable computing power,” said Huang. “NCHC’s Render Farm, powered by NVIDIA GPUs and NVIDIA virtualization technology, provides everything we need to animate the robot: exceptional computing power, extremely low latency and the remote access that you can use whenever and wherever you are.”

LeaderTek, a 3D scanning and measurement company, also uses NCHC services for image processing. With 3D and cloud rendering technology, LeaderTek is helping the Taiwan government archive historic monuments through creating advanced digital spatial models.

“Adopting Render Farm’s cloud computing platform helps us take a huge leap forward in improving our workflows,” said Hank Huang, general manager at LeaderTek. “The robust computing capabilities of NVIDIA vGPU for Quadro Virtual Workstations are also crucial for us to deliver high-quality images in a timely manner and get things done efficiently.”

Watch Huang Yi’s performance with KUKA below. And learn more about NVIDIA Quadro RTX and NVIDIA vGPU.

Banking on AI: RBC Builds a DGX-Powered Private Cloud

Royal Bank of Canada has built an NVIDIA DGX-powered private cloud tied to a strategic investment in AI. Despite headwinds from a global pandemic, it will further enable RBC to transform client experiences.

The voyage started in the fall of 2017. That’s when RBC, Canada’s largest bank with 17 million clients in 36 countries, created its dedicated research institute, Borealis AI. The institute is headquartered next to Toronto’s MaRS Discovery District, a global hub for machine-learning experts.

Borealis AI quickly attracted dozens of top researchers. That’s no surprise given the institute is led by the bank’s chief science officer, Foteini Agrafioti, a patent-holding serial entrepreneur and Ph.D. in electrical and computer engineering who co-chairs Canada’s AI advisory council.

The bank initially ran Borealis AI on a mix of systems. But as the group and the AI models it developed grew, it needed a larger, dedicated AI engine.

Brokering a Private AI Cloud for Banking

“I had the good fortune to help commission our first infrastructure for Borealis AI, but it wasn’t adequate to meet our evolving AI needs,” said Mike Tardif, a senior vice president of tech infrastructure at RBC.

The team wanted a distributed AI system that would serve four locations, from Vancouver to Montreal, securely behind the bank’s firewall. It needed to scale as workloads grew and leverage the regular flow of AI innovations in open source software without requiring hardware upgrades to do so.

In short, the bank aimed to build a state-of-the-art private AI cloud. For its key planks, RBC chose six NVIDIA DGX systems and Red Hat’s OpenShift to orchestrate containers running on those systems.

“We see NVIDIA as a leader in AI infrastructure. We were already using its DGX systems and wanted to expand our AI capabilities, so it was an obvious choice,” said Tardif.

AI Steers Bank Toward Smart Apps

RBC is already reporting solid results with the system despite commissioning it early this year in the face of the oncoming COVID-19 storm.

The private AI cloud can run thousands of simulations and analyze millions of data points in a fraction of the time that it could before, the bank says. As a result, it expects to transform the customer banking experience with a new generation of smart applications. And that’s just the beginning.

“For instance, in our capital markets business we are now able to train thousands of statistical models in parallel to cover this vast space of possibilities,” said Agrafioti, head of Borealis AI.

“This would be impossible without a distributed and fully automated environment. We can populate the entire cluster with a single click using the automated pipeline that this new solution has delivered,” she added.

The platform has already helped reduce client calls and resulted in faster delivery of new applications for RBC clients, thanks to the performance of GPUs combined with the automation of orchestrated containers.

RBC deployed Red Hat OpenShift in combination with NVIDIA DGX infrastructure to rapidly spin up AI compute instances in a fraction of the time it used to take.

OpenShift helps by creating an environment where users can run thousands of containers simultaneously, extracting datasets to train AI models and run them in production on DGX systems, said Yan Fisher, a global evangelist for emerging technologies at Red Hat.

OpenShift and NGC, NVIDIA’s software hub, let the companies support the bank remotely through the pandemic, he added.

“Building our AI infrastructure with NVIDIA DGX has given us in-house capabilities similar to what the Amazons and Googles of the world offer and we’ve achieved some significant savings in total cost of ownership,” said Tardif.

He singled out as key hardware assets the NVLink interconnect and NVIDIA’s support for enterprise networking standards with maximum bandwidth and reduced latency. They let users quickly access multiple GPUs within and between systems across data centers that host the bank’s AI cloud.

How a Bank with a Long History Stays Innovative

Though it’s 150 years old, RBC keeps in tune with the times by investing early in emerging technologies, as it did with Borealis AI.

“Innovation is in our DNA — we’re always looking at what’s coming around the corner and how we can operationalize it, and AI is a top strategic priority,” said Tardif.

Although its main expertise is in banking, RBC has tech chops, too. During the COVID lockdown, it managed to “pressure test” the latest systems, pushing them well beyond what they thought were their limits.

“We’re co-creating this vision of AI infrastructure with NVIDIA, and through this journey we’re raising the bar for AI innovation which everyone in the financial services industry can benefit from,” Tardif said.

Visit NVIDIA’s financial services industry page to learn more.

Top Content Creation Applications Turn ‘RTX On’ for Faster Performance

Whether tackling complex visualization challenges or creating Hollywood-caliber visual effects, artists and designers require powerful hardware to create their best work.

The latest application releases from Foundry, Chaos Group and Redshift by Maxon provide advanced features powered by NVIDIA RTX so creators can experience faster ray tracing and accelerated performance to elevate any design workflow.

Foundry Delivers New Features in Modo and Nuke

Foundry recently hosted Foundry LIVE, a series of virtual events where they announced the latest enhancements to their leading content creation applications, including NVIDIA OptiX 7.1 support in Modo.

Modo is Foundry’s powerful and flexible 3D modeling, texturing and rendering toolset. By upgrading to OptiX 7.1 in the mPath renderer, Version 14.1 delivers faster rendering, denoising and real-time feedback with up to 2x the memory savings on the GPU for greater flexibility when working with complex scenes.

Earlier this week, the team announced Nuke 12.2, the latest version of Foundry’s compositing, editorial and review tools. Building on the recent release of Nuke 12.1, the NukeX Cara VR toolset for working with 360-degree video, as well as Nuke’s SphericalTransform and Bilateral nodes, now takes advantage of new GPU-caching functionality to deliver significant improvements in viewer processing and rendering. The GPU-caching architecture is also available to developers creating custom GPU-accelerated tools using BlinkScript.

“Moving mPath to OptiX 7.1 dramatically reduces render times and memory usage, but the feature I’m particularly excited by is the addition of linear curves support, which now allows mPath to accelerate hair and fur rendering on the GPU,” said Allen Hastings, head of rendering at Foundry.

Image courtesy of Foundry; model supplied by Aaron Sims Creative.

NVIDIA Quadro RTX GPUs combined with Dell Precision workstations provide the performance, scalability and reliability to help artists and designers boost productivity and create amazing content faster than before. Learn more about how Foundry members in the U.S. can receive exclusive discounts and save on all Dell desktops, notebooks, servers, electronics and accessories.

Chaos Group Releases V-Ray 5 for Autodesk Maya

Chaos Group will soon release V-Ray 5 for Autodesk Maya, with a host of new GPU-accelerated features for lighting and materials.

Using LightMix in the new V-Ray Frame Buffer allows artists to freely experiment with lighting changes after they render, save out permutations and push back improvements in scenes. The new Layer Compositor allows users to fine-tune and finish images directly in the V-Ray frame buffer — without the need for a separate post-processing app.

“V-Ray 5 for Maya brings tremendous advancements for Maya artists wanting to improve their efficiency,” said Phillip Miller, vice president of product management at Chaos Group. “In addition, every new feature is supported equally by V-Ray GPU which can utilize RTX acceleration.”

A Nissan GTR rendered in V-Ray 5 for Maya. Image courtesy of Millergo CG.

V-Ray 5 also adds support for out-of-core geometry for rendering using NVIDIA CUDA, improving performance for artists and designers working with large scenes that aren’t able to fit into the GPU’s frame buffer.

V-Ray 5 for Autodesk Maya will be generally available in early August.

Redshift Brings Faster Ray Tracing, Bigger Memory

Maxon hosted The 3D and Motion Design Show this week, where they demonstrated Redshift 3.0 with OptiX 7 ray-tracing acceleration and NVLink for both geometry and textures.

Additional features of Redshift 3.0 include:

  • General performance improved 30 percent or more
  • Automatic sampling so users no longer need to manually tweak sampling settings
  • Maxon shader noises for all supported 3D apps
  • Hydra/Solaris support
  • Deeper traces and nested shader blending for even more visually compelling shaders

“Redshift 3.0 incorporates NVIDIA technologies such as OptiX 7 and NVLink. OptiX 7 enables hardware ray tracing so our users can now render their scenes faster than ever. And NVLink allows the rendering of larger scenes with less or no out-of-core memory access — which also means faster render times,” said Panos Zompolas, CTO at Redshift Rendering Technologies. “The introduction of Hydra and Blender support means more artists can join the ever growing Redshift family and render their projects at an incredible speed and quality.”

Redshift 3.0 will soon introduce OSL and Blender support. It’s currently available to licensed customers, with general availability coming soon.

All registered participants of the 3D and Motion Design Show will be automatically entered for a chance to win an NVIDIA Quadro RTX GPU. See all prizes here.

Check out other RTX-accelerated applications that help professionals transform design workflows. And learn more about how RTX GPUs are powering high-performance NVIDIA Studio systems built to handle the most demanding creative workflows.

For developers looking to get the most out of RTX GPUs, learn more about integrating OptiX 7 into applications.


Featured blog image courtesy of Foundry.

All the Right Moves: How PredictionNet Helps Self-Driving Cars Anticipate Future Traffic Trajectories

Driving requires the ability to predict the future. Every time a car suddenly cuts into a lane or multiple cars arrive at the same intersection, drivers must make predictions as to how others will act to safely proceed.

While humans rely on driver cues and personal experience to read these situations, self-driving cars can use AI to anticipate traffic patterns and safely maneuver in a complex environment.

We have trained the PredictionNet deep neural network to understand the driving environment around a car in top-down or bird’s-eye view, and to predict the future trajectories of road users based on both live perception and map data.

PredictionNet analyzes past movements of all road agents, such as cars, buses, trucks, bicycles and pedestrians, to predict their future movements. The DNN looks into the past to take in previous road user positions, and also takes in positions of fixed objects and landmarks on the scene, such as traffic lights, traffic signs and lane line markings provided by the map.

Based on these inputs, which are rasterized in top-down view, the DNN predicts road user trajectories into the future, as shown in figure 1.

Predicting the future has inherent uncertainty. PredictionNet captures this by also providing statistics for each road user’s predicted future trajectory, as shown in figure 1.

Figure 1. PredictionNet results visualized in top-down view. Gray lines denote the map, dotted white lines represent vehicle trajectories predicted by the DNN, while white boxes represent ground truth trajectory data. The colorized clouds represent the probability distributions for predicted vehicle trajectories, with warmer colors representing points that are closer in time to the present, and cooler colors representing points further in the future.

A Top-Down Convolutional RNN-Based Approach

Previous approaches to predicting future trajectories for self-driving cars have leveraged both imitation learning and generative models that sample future trajectories, as well as convolutional neural networks and recurrent neural networks for processing perception inputs and predicting future trajectories.

For PredictionNet, we adopt an RNN-based architecture that uses two-dimensional convolutions. This structure is highly scalable for arbitrary input sizes, including the number of road users and prediction horizons.

As is typically the case with any RNN, different time steps are fed into the DNN sequentially. Each time step is represented by a top-down view image that shows the vehicle surroundings at that time, including both dynamic obstacles observed via live perception, and fixed landmarks provided by a map.

This top-down view image is processed by a set of 2D convolutions before being passed to the RNN. In the current implementation, PredictionNet is able to confidently predict one to five seconds into the future, depending on the complexity of the scene (for example, highway versus urban).
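
As a toy illustration of this pattern, in which 2D convolutions encode each top-down frame and a recurrent unit carries state across time steps, here is a minimal PyTorch sketch. The grid size, channel counts and occupancy-style output head are invented for illustration; this is not the actual PredictionNet design.

```python
import torch
import torch.nn as nn

class TopDownPredictor(nn.Module):
    """Toy conv + RNN predictor over rasterized top-down frames."""
    def __init__(self, channels=3, hidden=256, grid=64):
        super().__init__()
        self.encoder = nn.Sequential(   # 2D convolutions applied to each frame
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * (grid // 4) * (grid // 4)      # two stride-2 convs: grid/4
        self.rnn = nn.GRU(feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, grid * grid)  # future occupancy logits
        self.grid = grid

    def forward(self, frames):
        # frames: (batch, time, channels, grid, grid), past frames in order
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)                    # hidden state across time
        return self.head(out[:, -1]).view(b, self.grid, self.grid)

pred = TopDownPredictor()(torch.randn(2, 10, 3, 64, 64))  # ten past frames in
```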

The PredictionNet model also lends itself to a highly efficient runtime implementation in the TensorRT deep learning inference SDK, with 10 ms end-to-end inference times achieved on an NVIDIA TITAN RTX GPU.

Scalable Results

Results thus far have shown PredictionNet to be highly promising for several complex traffic scenarios. For example, the DNN can predict which cars will proceed straight through an intersection versus which will turn. It’s also able to correctly predict the car’s behavior in highway merging scenarios.

We have also observed that PredictionNet is able to learn velocities and accelerations of vehicles on the scene. This enables it to correctly predict speeds of both fast-moving and fully stopped vehicles, as well as to predict stop-and-go traffic patterns.

PredictionNet is trained on highly accurate lidar data to achieve higher prediction accuracy. However, the inference-time perception input to the DNN can be based on any sensor input combination (that is, camera, radar or lidar data) without retraining. This also means that the DNN’s prediction capabilities can be leveraged for various sensor configurations and levels of autonomy, from level 2+ systems all the way to level 4/level 5.

PredictionNet’s ability to anticipate behavior in real time can be used to create an interactive training environment for reinforcement learning-based planning and control policies for features such as automatic cruise control, lane changes or intersection handling.

By using PredictionNet to simulate how other road users will react to an autonomous vehicle’s behavior based on real-world experiences, we can train a more safe, robust and courteous AI driver.

University of Florida, NVIDIA to Build Fastest AI Supercomputer in Academia

The University of Florida and NVIDIA Tuesday unveiled a plan to build the world’s fastest AI supercomputer in academia, delivering 700 petaflops of AI performance.

The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

“We’ve created a replicable, powerful model of public-private cooperation for everyone’s benefit,” said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both the UF and NVIDIA.

UF will invest an additional $20 million to create an AI-centric supercomputing and data center.

The $70 million public-private partnership promises to make UF one of the leading AI universities in the country, advance academic research and help address some of the state’s most complex challenges.

“This is going to be a tremendous partnership,” Florida Gov. Ron DeSantis said. “As we look to keep our best talent in state, this will be a significant carrot. You’ll also see people around the country want to come to Florida.”

Working closely with NVIDIA, UF will boost the capabilities of its existing supercomputer, HiPerGator, with the recently announced NVIDIA DGX SuperPOD architecture. The system will be up and running by early 2021, just a few weeks after it’s delivered.

This gives faculty and students within and beyond UF the tools to apply AI across a multitude of areas to address major challenges such as rising seas, aging populations, data security, personalized medicine, urban transportation and food insecurity. UF expects to create 30,000 AI-enabled graduates by 2030.

“The partnership here with the UF, the state of Florida, and NVIDIA, anchored by Chris’ generous donation, goes beyond just money,” said NVIDIA CEO Jensen Huang, who founded NVIDIA in 1993 along with Malachowsky and Curtis Priem. “We are excited to contribute NVIDIA’s expertise to work together to make UF a national leader in AI and help address not only the region’s, but the nation’s challenges.”

UF, ranked seventh among public universities in the United States by US News & World Report and aiming to break into the top five, offers an extraordinarily broad range of disciplines, Malachowsky said.

The region is also a “living laboratory for some of society’s biggest challenges,” Malachowsky said.

Regional, National AI Leadership

The effort aims to help define a research landscape to deal with the COVID-19 pandemic, which has seen supercomputers take a leading role.

“Our vision is to become the nation’s first AI university,” University of Florida President Kent Fuchs said. “I am so grateful again to Mr. Malachowsky and NVIDIA CEO Jensen Huang.”

State and regional leaders already look to the university to bring its capabilities to bear on an array of regional and national issues.

Among them: supporting agriculture in a time of climate change, addressing the needs of an aging population, and managing the effects of rising sea levels in a state with more than 1,300 miles of coastline.

And to ensure no community is left behind, UF plans to promote wide accessibility to these computing capabilities.

As part of this, UF will:

  • Establish UF’s Equitable AI program, to bring faculty members across the university together to create standards and certifications for developing tools and solutions that are cognizant of bias, unethical practice and legal and moral issues.
  • Partner with industry and other academic groups, such as the Inclusive Engineering Consortium, whose students will work with members to conduct research and recruitment to UF graduate programs.

Broad Range of AI Initiatives

Malachowsky has served in a number of leadership roles as NVIDIA has grown from a startup to the global leader in visual and parallel computing. A recognized authority on integrated-circuit design and methodology, he has authored close to 40 patents.

In addition to holding a BSEE from the University of Florida, he has an MSCS from Santa Clara University. He has been named a distinguished alumnus of both universities, in addition to being inducted last year into the Florida Inventors Hall of Fame.

UF is the first institution of higher learning in the U.S. to receive NVIDIA DGX A100 systems. These systems are based on the modular architecture of the NVIDIA DGX SuperPOD, which enables the rapid deployment and scaling of massive AI infrastructure.

UF’s HiPerGator 3 supercomputer will integrate 140 NVIDIA DGX A100 systems powered by a combined 1,120 NVIDIA A100 Tensor Core GPUs. It will include 4 petabytes of high-performance storage. An NVIDIA Mellanox HDR 200Gb/s InfiniBand network will provide high-throughput, extremely low-latency connectivity.

DGX A100 systems are built to make the most of these capabilities as a single software-defined platform. NVIDIA DGX systems are already used by eight of the ten top US national universities.

That platform includes the most advanced suite of AI application frameworks in the world. It’s a software suite that covers data analytics, AI training and inference acceleration, and recommendation systems. Its multi-modal capabilities combine sound, vision, speech and a contextual understanding of the world around us.

Together, these tools have already had a significant impact on healthcare, transportation, science, interactive appliances, the internet and other areas.

More Than Just a Machine

Friday’s announcement, however, goes beyond any single, if singular, machine.

NVIDIA will also contribute its AI expertise to UF through ongoing support and collaboration across the following initiatives:

  • The NVIDIA Deep Learning Institute will collaborate with UF on developing new curriculum and coursework for both students and the community, including programming tuned to address the needs of young adults and teens to encourage their interest in STEM and AI, better preparing them for future educational and employment opportunities.
  • UF will become the site of the latest NVIDIA AI Technology Center, where UF Graduate Fellows and NVIDIA employees will work together to advance AI.
  • NVIDIA solution architects and product engineers will partner with UF on the installation, operation and optimization of the NVIDIA-based supercomputing resources on campus, including the latest AI software applications.

UF will also make investments all around its new machine, well beyond the $20 million targeted at upgrading its data center.

Collectively, all of the data sciences-related activities and programs — and UF’s new supercomputer — will support the university’s broader AI-related aspirations.

To support that effort, the university has committed to fill 100 new faculty positions in AI and related fields, making it one of the top AI universities in the country.

That’s in addition to the 500 recently hired faculty across disciplines, many of whom will weave AI into their teaching and research.

“It’s been thrilling to watch all this,” Malachowsky said. “It provides a blueprint for how other states can work with their region’s resources to make similar investments that bring their residents the benefits of AI, while bolstering our nation’s competitiveness, capabilities, and expertise.”

Driving the Future: What Is an AI Cockpit?

From Knight Rider’s KITT to Iron Man’s JARVIS, intelligent copilots have been a staple of forward-looking pop culture.

Advancements in AI and high-performance processors are turning these sci-fi concepts into reality. But what, exactly, is an AI cockpit, and how will it change the way we move?

AI is enabling a range of new software-defined, in-vehicle capabilities across the transportation industry. With centralized, high-performance compute, automakers can now build vehicles that become smarter over time.

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Consolidating these components with an AI platform such as NVIDIA DRIVE AGX simplifies the architecture while creating more compute headroom to add new features. In addition, NVIDIA DRIVE IX provides an open and extensible software framework for a software-defined cockpit experience.

Mercedes-Benz released the first such intelligent cockpit, the MBUX AI system, powered by NVIDIA technology, in 2018. The system is currently in more than 20 Mercedes-Benz models, with the second generation debuting in the upcoming S-Class.

The second-generation MBUX system is set to debut in the Mercedes-Benz S-Class.

MBUX and other such AI cockpits orchestrate crucial safety and convenience features much more smoothly than the traditional vehicle architecture. They centralize compute for streamlined functions, and they’re constantly learning. By regularly delivering new features, they extend the joy of ownership throughout the life of the vehicle.

Always Alert

But safety is the foremost benefit of AI in the vehicle. AI acts as an extra set of eyes on the 360-degree environment surrounding the vehicle, as well as an intelligent guardian for drivers and passengers inside.

One key feature is driver monitoring. As automated driving functions become more commonplace across vehicle fleets, it’s critical to ensure the human at the wheel is alert and paying attention.

AI cockpits use interior cameras to monitor whether the driver is paying attention to the road.

Using interior-facing cameras, AI-powered driver monitoring can track driver activity, head position and facial movements to analyze whether the driver is paying attention, drowsy or distracted. The system can then alert the driver, bringing attention back to the road.
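
One classic ingredient in such systems is the eye aspect ratio (EAR), which drops toward zero as the eyes close; sustained low values suggest drowsiness. The sketch below assumes six eye landmarks per frame from a generic facial-landmark detector, and the threshold and frame count are illustrative values, not any production system’s parameters.

```python
import numpy as np

EAR_THRESHOLD = 0.21   # illustrative; tuned per camera and driver population
DROWSY_FRAMES = 48     # roughly two seconds of closed eyes at 24 fps

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def is_drowsy(ear_history):
    """Flag drowsiness when EAR stays below threshold for consecutive frames."""
    recent = ear_history[-DROWSY_FRAMES:]
    return len(recent) == DROWSY_FRAMES and all(e < EAR_THRESHOLD for e in recent)
```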

This system can also help keep those inside and outside the vehicle safe and alert. By sensing whether a passenger is about to exit a car and using exterior sensors to monitor the outside environment, AI can warn of oncoming traffic or pedestrians and bikers potentially in the path of the opening door.

It also acts as a guardian in emergency situations. If a passenger is not sitting properly in their seat, the system can prevent an airbag activation that would harm rather than help them. It can also use AI to detect the presence of children or pets left behind in the vehicle, helping prevent heat stroke.

An AI cockpit is always on the lookout for a vehicle’s occupants, adding an extra level of safety with full cabin monitoring so they can enjoy the ride.

Constant Convenience

In addition to safety, AI helps make the daily drive easier and more enjoyable.

With crystal-clear graphics, drivers can receive information about their route, as well as what the sensors on the car see, quickly and easily. Augmented reality heads-up displays and virtual reality views of the vehicle’s surroundings deliver the most important data (such as parking assistance, directions, speed and oncoming obstacles) without disrupting the driver’s line of sight.

These visualizations help build trust in the driver assistance system as well as understanding of its capabilities and limitations for a safer and more effective driving experience.

Using natural language processing, drivers can control vehicle settings without taking their eyes off the road. Conversational AI enables easy access to search queries, like finding the best coffee shops or sushi restaurants along a given route. The same system that monitors driver attention can also interpret gesture controls, providing another way for drivers to communicate with the cockpit without having to divert their gaze.

Natural language processing makes it possible to access vehicle controls without taking your eyes off the road.

These technologies can also be used to personalize the driving experience. Biometric user authentication and voice recognition allow the car to identify who is driving, and adjust settings and preferences accordingly.

AI cockpits are being integrated into more models every year, making them smarter and safer while constantly adding new features. High-performance, energy-efficient AI compute platforms consolidate in-car systems into a centralized architecture, enabling the open NVIDIA DRIVE IX software platform to meet future cockpit needs.

What used to be fanciful fiction will soon be part of our daily driving routine.
