Banking on AI: RBC Builds a DGX-Powered Private Cloud

Royal Bank of Canada has built an NVIDIA DGX-powered private AI cloud, tied to a strategic investment in AI. Despite headwinds from a global pandemic, the platform will further enable RBC to transform client experiences.

The voyage started in the fall of 2017. That’s when RBC, Canada’s largest bank with 17 million clients in 36 countries, created its dedicated research institute, Borealis AI. The institute is headquartered next to Toronto’s MaRS Discovery District, a global hub for machine-learning experts.

Borealis AI quickly attracted dozens of top researchers. That’s no surprise given the institute is led by the bank’s chief science officer, Foteini Agrafioti, a patent-holding serial entrepreneur and Ph.D. in electrical and computer engineering who co-chairs Canada’s AI advisory council.

The bank initially ran Borealis AI on a mix of systems. But as the group and the AI models it developed grew, it needed a larger, dedicated AI engine.

Brokering a Private AI Cloud for Banking

“I had the good fortune to help commission our first infrastructure for Borealis AI, but it wasn’t adequate to meet our evolving AI needs,” said Mike Tardif, a senior vice president of tech infrastructure at RBC.

The team wanted a distributed AI system that would serve four locations, from Vancouver to Montreal, securely behind the bank’s firewall. It needed to scale as workloads grew and leverage the regular flow of AI innovations in open source software without requiring hardware upgrades to do so.

In short, the bank aimed to build a state-of-the-art private AI cloud. For its key planks, RBC chose six NVIDIA DGX systems and Red Hat’s OpenShift to orchestrate containers running on those systems.

“We see NVIDIA as a leader in AI infrastructure. We were already using its DGX systems and wanted to expand our AI capabilities, so it was an obvious choice,” said Tardif.

AI Steers Bank Toward Smart Apps

RBC is already reporting solid results with the system despite commissioning it early this year in the face of the oncoming COVID-19 storm.

The private AI cloud can run thousands of simulations and analyze millions of data points in a fraction of the time that it could before, the bank says. As a result, it expects to transform the customer banking experience with a new generation of smart applications. And that’s just the beginning.

“For instance, in our capital markets business we are now able to train thousands of statistical models in parallel to cover this vast space of possibilities,” said Agrafioti, head of Borealis AI.

“This would be impossible without a distributed and fully automated environment. We can populate the entire cluster with a single click using the automated pipeline that this new solution has delivered,” she added.

The platform has already helped reduce client calls and resulted in faster delivery of new applications for RBC clients, thanks to the performance of GPUs combined with the automation of orchestrated containers.

RBC deployed Red Hat OpenShift in combination with NVIDIA DGX infrastructure to rapidly spin up AI compute instances in a fraction of the time it used to take.

OpenShift helps by creating an environment where users can run thousands of containers simultaneously, extracting datasets to train AI models and run them in production on DGX systems, said Yan Fisher, a global evangelist for emerging technologies at Red Hat.

OpenShift and NGC, NVIDIA’s software hub, let the companies support the bank remotely through the pandemic, he added.
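
Here’s a minimal sketch of what scheduling a GPU-accelerated training job on DGX nodes through Kubernetes, the API that OpenShift builds on, can look like. It is an illustration only, not RBC’s configuration: the namespace, container image tag and training script are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

# A pod that requests four GPUs and runs a training script from an NGC PyTorch container.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="dgx-training-job", labels={"app": "ai-training"}),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:23.10-py3",   # example NGC image tag
                command=["python", "train.py"],             # hypothetical training script
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "4"}          # GPUs exposed by the NVIDIA device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-research", body=pod)  # namespace is illustrative
```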

“Building our AI infrastructure with NVIDIA DGX has given us in-house capabilities similar to what the Amazons and Googles of the world offer and we’ve achieved some significant savings in total cost of ownership,” said Tardif.

He singled out the NVLink interconnect and NVIDIA’s support for high-bandwidth, low-latency enterprise networking standards as key hardware assets. They let users quickly access multiple GPUs within and between systems across the data centers that host the bank’s AI cloud.

How a Bank with a Long History Stays Innovative

Though it’s 150 years old, RBC keeps in tune with the times by investing early in emerging technologies, as it did with Borealis AI.

“Innovation is in our DNA — we’re always looking at what’s coming around the corner and how we can operationalize it, and AI is a top strategic priority,” said Tardif.

Although its main expertise is in banking, RBC has tech chops, too. During the COVID lockdown it managed to “pressure test” the latest systems, pushing them well beyond what the team thought were their limits.

“We’re co-creating this vision of AI infrastructure with NVIDIA, and through this journey we’re raising the bar for AI innovation which everyone in the financial services industry can benefit from,” Tardif said.

Visit NVIDIA’s financial services industry page to learn more.

Top Content Creation Applications Turn ‘RTX On’ for Faster Performance

Whether tackling complex visualization challenges or creating Hollywood-caliber visual effects, artists and designers require powerful hardware to create their best work.

The latest application releases from Foundry, Chaos Group and Redshift by Maxon provide advanced features powered by NVIDIA RTX so creators can experience faster ray tracing and accelerated performance to elevate any design workflow.

Foundry Delivers New Features in Modo and Nuke

Foundry recently hosted Foundry LIVE, a series of virtual events where they announced the latest enhancements to their leading content creation applications, including NVIDIA OptiX 7.1 support in Modo.

Modo is Foundry’s powerful and flexible 3D modeling, texturing and rendering toolset. By upgrading to OptiX 7.1 in the mPath renderer, Version 14.1 delivers faster rendering, denoising and real-time feedback with up to 2x the memory savings on the GPU for greater flexibility when working with complex scenes.

Earlier this week, the team announced Nuke 12.2, the latest version of Foundry’s compositing, editorial and review tools. The recent release of Nuke 12.1, the NukeX Cara VR toolset for working with 360-degree video, as well as Nuke’s SphericalTransform and Bilateral nodes, takes advantage of new GPU-caching functionality to deliver significant improvements in viewer processing and rendering. The GPU-caching architecture is also available to developers creating custom GPU-accelerated tools using BlinkScript.

“Moving mPath to OptiX 7.1 dramatically reduces render times and memory usage, but the feature I’m particularly excited by is the addition of linear curves support, which now allows mPath to accelerate hair and fur rendering on the GPU,” said Allen Hastings, head of rendering at Foundry.

Image courtesy of Foundry; model supplied by Aaron Sims Creative.

NVIDIA Quadro RTX GPUs combined with Dell Precision workstations provide the performance, scalability and reliability to help artists and designers boost productivity and create amazing content faster than before. Learn more about how Foundry members in the U.S. can receive exclusive discounts and save on all Dell desktops, notebooks, servers, electronics and accessories.

Chaos Group Releases V-Ray 5 for Autodesk Maya

Chaos Group will soon release V-Ray 5 for Autodesk Maya, with a host of new GPU-accelerated features for lighting and materials.

Using LightMix in the new V-Ray Frame Buffer allows artists to freely experiment with lighting changes after they render, save out permutations and push back improvements in scenes. The new Layer Compositor allows users to fine-tune and finish images directly in the V-Ray frame buffer — without the need for a separate post-processing app.

“V-Ray 5 for Maya brings tremendous advancements for Maya artists wanting to improve their efficiency,” said Phillip Miller, vice president of product management at Chaos Group. “In addition, every new feature is supported equally by V-Ray GPU which can utilize RTX acceleration.”

V-Ray 5 for Maya rendering of the Nissan GTR. Image courtesy of Millergo CG.

V-Ray 5 also adds support for out-of-core geometry for rendering using NVIDIA CUDA, improving performance for artists and designers working with large scenes that aren’t able to fit into the GPU’s frame buffer.

V-Ray 5 for Autodesk Maya will be generally available in early August.

Redshift Brings Faster Ray Tracing, Bigger Memory

Maxon hosted The 3D and Motion Design Show this week, where they demonstrated Redshift 3.0 with OptiX 7 ray-tracing acceleration and NVLink for both geometry and textures.

Additional features of Redshift 3.0 include:

  • General performance improved 30 percent or more
  • Automatic sampling so users no longer need to manually tweak sampling settings
  • Maxon shader noises for all supported 3D apps
  • Hydra/Solaris support
  • Deeper traces and nested shader blending for even more visually compelling shaders

“Redshift 3.0 incorporates NVIDIA technologies such as OptiX 7 and NVLink. OptiX 7 enables hardware ray tracing so our users can now render their scenes faster than ever. And NVLink allows the rendering of larger scenes with less or no out-of-core memory access — which also means faster render times,” said Panos Zompolas, CTO at Redshift Rendering Technologies. “The introduction of Hydra and Blender support means more artists can join the ever growing Redshift family and render their projects at an incredible speed and quality.”

Redshift 3.0 will soon introduce OSL and Blender support. Redshift 3.0 is currently available to licensed customers, with general availability coming soon.

All registered participants of The 3D and Motion Design Show will be automatically entered for a chance to win an NVIDIA Quadro RTX GPU. See all prizes here.

Check out other RTX-accelerated applications that help professionals transform design workflows. And learn more about how RTX GPUs are powering high-performance NVIDIA Studio systems built to handle the most demanding creative workflows.

For developers looking to get the most out of RTX GPUs, learn more about integrating OptiX 7 into applications.


Featured blog image courtesy of Foundry.

All the Right Moves: How PredictionNet Helps Self-Driving Cars Anticipate Future Traffic Trajectories

Driving requires the ability to predict the future. Every time a car suddenly cuts into a lane or multiple cars arrive at the same intersection, drivers must make predictions as to how others will act to safely proceed.

While humans rely on driver cues and personal experience to read these situations, self-driving cars can use AI to anticipate traffic patterns and safely maneuver in a complex environment.

We have trained the PredictionNet deep neural network to understand the driving environment around a car in top-down or bird’s-eye view, and to predict the future trajectories of road users based on both live perception and map data.

PredictionNet analyzes past movements of all road agents, such as cars, buses, trucks, bicycles and pedestrians, to predict their future movements. The DNN looks into the past to take in previous road user positions, and also takes in positions of fixed objects and landmarks on the scene, such as traffic lights, traffic signs and lane line markings provided by the map.

Based on these inputs, which are rasterized in top-down view, the DNN predicts road user trajectories into the future, as shown in figure 1.

Predicting the future has inherent uncertainty. PredictionNet captures this by also providing the prediction statistics of the future trajectory predicted for each road user, as also shown in figure 1.

Figure 1. PredictionNet results visualized in top-down view. Gray lines denote the map, dotted white lines represent vehicle trajectories predicted by the DNN, while white boxes represent ground truth trajectory data. The colorized clouds represent the probability distributions for predicted vehicle trajectories, with warmer colors representing points that are closer in time to the present, and cooler colors representing points further in the future.

A Top-Down Convolutional RNN-Based Approach

Previous approaches to trajectory prediction for self-driving cars have leveraged imitation learning and generative models that sample future trajectories, as well as convolutional and recurrent neural networks for processing perception inputs.

For PredictionNet, we adopt an RNN-based architecture that uses two-dimensional convolutions. This structure is highly scalable for arbitrary input sizes, including the number of road users and prediction horizons.

As is typically the case with any RNN, different time steps are fed into the DNN sequentially. Each time step is represented by a top-down view image that shows the vehicle surroundings at that time, including both dynamic obstacles observed via live perception, and fixed landmarks provided by a map.

This top-down view image is processed by a set of 2D convolutions before being passed to the RNN. In the current implementation, PredictionNet is able to confidently predict one to five seconds into the future, depending on the complexity of the scene (for example, highway versus urban).
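
To make that structure concrete, here’s a minimal sketch of the convolutional-RNN pattern described above. It is a simplification for illustration, not NVIDIA’s implementation: the channel counts, raster size and single-channel output are placeholder assumptions.

```python
import torch
import torch.nn as nn

class PredictionNetSketch(nn.Module):
    """Toy conv-RNN: encode each rasterized top-down frame with 2D convolutions,
    carry a recurrent hidden state across time steps, decode a future-trajectory map."""

    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Convolutional recurrent update (a ConvGRU/ConvLSTM cell would be used in practice)
        self.update = nn.Conv2d(hidden * 2, hidden, 3, padding=1)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 1, 4, stride=2, padding=1),
        )

    def forward(self, frames):                      # frames: (batch, time, channels, H, W)
        t = frames.shape[1]
        state = None
        for i in range(t):                          # time steps are fed sequentially, as in an RNN
            feat = self.encoder(frames[:, i])
            state = feat if state is None else torch.tanh(
                self.update(torch.cat([feat, state], dim=1))
            )
        return self.decoder(state)                  # per-pixel future-trajectory logits

model = PredictionNetSketch()
out = model(torch.randn(2, 5, 3, 256, 256))         # 5 past top-down frames -> (2, 1, 256, 256) map
```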

The PredictionNet model also lends itself to a highly efficient runtime implementation in the TensorRT deep learning inference SDK, with 10 ms end-to-end inference times achieved on an NVIDIA TITAN RTX GPU.
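
A common route to that kind of TensorRT deployment is to export the trained PyTorch model to ONNX and build an engine from it. The snippet below is a generic sketch of that workflow, not NVIDIA’s production pipeline, and assumes the PredictionNetSketch class from the previous sketch is in scope.

```python
import torch

model = PredictionNetSketch().eval()                 # class defined in the sketch above
dummy = torch.randn(1, 5, 3, 256, 256)               # one batch of five past frames

torch.onnx.export(
    model, dummy, "predictionnet_sketch.onnx",
    input_names=["past_frames"], output_names=["future_map"],
    opset_version=13,
)

# The ONNX graph can then be compiled into a TensorRT engine, for example with:
#   trtexec --onnx=predictionnet_sketch.onnx --saveEngine=predictionnet_sketch.plan --fp16
```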

Scalable Results

Results thus far have shown PredictionNet to be highly promising for several complex traffic scenarios. For example, the DNN can predict which cars will proceed straight through an intersection versus which will turn. It’s also able to correctly predict the car’s behavior in highway merging scenarios.

We have also observed that PredictionNet is able to learn velocities and accelerations of vehicles on the scene. This enables it to correctly predict speeds of both fast-moving and fully stopped vehicles, as well as to predict stop-and-go traffic patterns.

PredictionNet is trained on highly accurate lidar data to achieve higher prediction accuracy. However, the inference-time perception input to the DNN can be based on any sensor input combination (that is, camera, radar or lidar data) without retraining. This also means that the DNN’s prediction capabilities can be leveraged for various sensor configurations and levels of autonomy, from level 2+ systems all the way to level 4/level 5.

PredictionNet’s ability to anticipate behavior in real time can be used to create an interactive training environment for reinforcement learning-based planning and control policies for features such as automatic cruise control, lane changes or intersection handling.

By using PredictionNet to simulate how other road users will react to an autonomous vehicle’s behavior based on real-world experiences, we can train a more safe, robust and courteous AI driver.

University of Florida, NVIDIA to Build Fastest AI Supercomputer in Academia

The University of Florida and NVIDIA Tuesday unveiled a plan to build the world’s fastest AI supercomputer in academia, delivering 700 petaflops of AI performance.

The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

“We’ve created a replicable, powerful model of public-private cooperation for everyone’s benefit,” said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both UF and NVIDIA.

UF will invest an additional $20 million to create an AI-centric supercomputing and data center.

The $70 million public-private partnership promises to make UF one of the leading AI universities in the country, advance academic research and help address some of the state’s most complex challenges.

“This is going to be a tremendous partnership,” Florida Gov. Ron DeSantis said. “As we look to keep our best talent in state, this will be a significant carrot. You’ll also see people around the country want to come to Florida.”

Working closely with NVIDIA, UF will boost the capabilities of its existing supercomputer, HiPerGator, with the recently announced NVIDIA DGX SuperPOD architecture. The system will be up and running by early 2021, just a few weeks after it’s delivered.

This gives faculty and students within and beyond UF the tools to apply AI across a multitude of areas to address major challenges such as rising seas, aging populations, data security, personalized medicine, urban transportation and food insecurity. UF expects to create 30,000 AI-enabled graduates by 2030.

“The partnership here with UF, the state of Florida, and NVIDIA, anchored by Chris’ generous donation, goes beyond just money,” said NVIDIA CEO Jensen Huang, who founded NVIDIA in 1993 along with Malachowsky and Curtis Priem. “We are excited to contribute NVIDIA’s expertise to work together to make UF a national leader in AI and help address not only the region’s, but the nation’s challenges.”

UF, ranked seventh among public universities in the United States by U.S. News & World Report and aiming to break into the top five, offers an extraordinarily broad range of disciplines, Malachowsky said.

The region is also a “living laboratory for some of society’s biggest challenges,” Malachowsky said.

Regional, National AI Leadership

The effort aims to help define a research landscape to deal with the COVID-19 pandemic, which has seen supercomputers take a leading role.

“Our vision is to become the nation’s first AI university,” University of Florida President Kent Fuchs said. “I am so grateful again to Mr. Malachowsky and NVIDIA CEO Jensen Huang.”

State and regional leaders already look to the university to bring its capabilities to bear on an array of regional and national issues.

Among them: supporting agriculture in a time of climate change, addressing the needs of an aging population, and managing the effects of rising sea levels in a state with more than 1,300 miles of coastline.

And to ensure no community is left behind, UF plans to promote wide accessibility to these computing capabilities.

As part of this, UF will:

  • Establish UF’s Equitable AI program, to bring faculty members across the university together to create standards and certifications for developing tools and solutions that are cognizant of bias, unethical practice and legal and moral issues.
  • Partner with industry and other academic groups, such as the Inclusive Engineering Consortium, whose students will work with members to conduct research and recruitment to UF graduate programs.

Broad Range of AI Initiatives

Malachowsky has served in a number of leadership roles as NVIDIA has grown from a startup to the global leader in visual and parallel computing. A recognized authority on integrated-circuit design and methodology, he has authored close to 40 patents.

In addition to holding a BSEE from the University of Florida, he has an MSCS from Santa Clara University. He has been named a distinguished alumnus of both universities, in addition to being inducted last year into the Florida Inventors Hall of Fame.

UF is the first institution of higher learning in the U.S. to receive NVIDIA DGX A100 systems. These systems are based on the modular architecture of the NVIDIA DGX SuperPOD, which enables the rapid deployment and scaling of massive AI infrastructure.

UF’s HiPerGator 3 supercomputer will integrate 140 NVIDIA DGX A100 systems powered by a combined 1,120 NVIDIA A100 Tensor Core GPUs. It will include 4 petabytes of high-performance storage. An NVIDIA Mellanox HDR 200Gb/s InfiniBand network will provide high-throughput, extremely low-latency connectivity.

DGX A100 systems are built to make the most of these capabilities as a single software-defined platform. NVIDIA DGX systems are already used by eight of the ten top US national universities.

That platform includes the most advanced suite of AI application frameworks in the world. It’s a software suite that covers data analytics, AI training and inference acceleration, and recommendation systems. Its multi-modal capabilities combine sound, vision, speech and a contextual understanding of the world around us.

Together, these tools have already had a significant impact on healthcare, transportation, science, interactive appliances, the internet and other areas.

More Than Just a Machine

Friday’s announcement, however, goes beyond any single, if singular, machine.

NVIDIA will also contribute its AI expertise to UF through ongoing support and collaboration across the following initiatives:

  • The NVIDIA Deep Learning Institute will collaborate with UF on developing new curriculum and coursework for both students and the community, including programming tuned to address the needs of young adults and teens to encourage their interest in STEM and AI, better preparing them for future educational and employment opportunities.
  • UF will become the site of the latest NVIDIA AI Technology Center, where UF Graduate Fellows and NVIDIA employees will work together to advance AI.
  • NVIDIA solution architects and product engineers will partner with UF on the installation, operation and optimization of the NVIDIA-based supercomputing resources on campus, including the latest AI software applications.

UF will also make investments all around its new machine, well beyond the $20 million targeted at upgrading its data center.

Collectively, all of the data sciences-related activities and programs — and UF’s new supercomputer — will support the university’s broader AI-related aspirations.

To support that effort, the university has committed to fill 100 new faculty positions in AI and related fields, making it one of the top AI universities in the country.

That’s in addition to the 500 recently hired faculty across disciplines, many of whom will weave AI into their teaching and research.

“It’s been thrilling to watch all this,” Malachowsky said. “It provides a blueprint for how other states can work with their region’s resources to make similar investments that bring their residents the benefits of AI, while bolstering our nation’s competitiveness, capabilities, and expertise.”

Driving the Future: What Is an AI Cockpit?

From Knight Rider’s KITT to Ironman’s JARVIS, intelligent copilots have been a staple of forward-looking pop culture.

Advancements in AI and high-performance processors are turning these sci-fi concepts into reality. But what, exactly, is an AI cockpit, and how will it change the way we move?

AI is enabling a range of new software-defined, in-vehicle capabilities across the transportation industry. With centralized, high-performance compute, automakers can now build vehicles that become smarter over time.

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Consolidating these components with an AI platform such as NVIDIA DRIVE AGX simplifies the architecture while creating more compute headroom to add new features. In addition, NVIDIA DRIVE IX provides an open and extensible software framework for a software-defined cockpit experience.

Mercedes-Benz released the first such intelligent cockpit, the MBUX AI system, powered by NVIDIA technology, in 2018. The system is currently in more than 20 Mercedes-Benz models, with the second generation debuting in the upcoming S-Class.

The second-generation MBUX system is set to debut in the Mercedes-Benz S-Class.

MBUX and other such AI cockpits orchestrate crucial safety and convenience features much more smoothly than the traditional vehicle architecture. They centralize compute for streamlined functions, and they’re constantly learning. By regularly delivering new features, they extend the joy of ownership throughout the life of the vehicle.

Always Alert

But safety is the foremost benefit of AI in the vehicle. AI acts as an extra set of eyes on the 360-degree environment surrounding the vehicle, as well as an intelligent guardian for drivers and passengers inside.

One key feature is driver monitoring. As automated driving functions become more commonplace across vehicle fleets, it’s critical to ensure the human at the wheel is alert and paying attention.

AI cockpits use interior cameras to monitor whether the driver is paying attention to the road.

Using interior-facing cameras, AI-powered driver monitoring can track driver activity, head position and facial movements to analyze whether the driver is paying attention, drowsy or distracted. The system can then alert the driver, bringing attention back to the road.

This system can also help keep those inside and outside the vehicle safe and alert. By sensing whether a passenger is about to exit a car and using exterior sensors to monitor the outside environment, AI can warn of oncoming traffic or pedestrians and bikers potentially in the path of the opening door.

It also acts as a guardian in emergency situations. If a passenger is not sitting properly in their seat, the system can prevent an airbag activation that would harm rather than help them. It can also use AI to detect the presence of children or pets left behind in the vehicle, helping prevent heat stroke.

An AI cockpit is always on the lookout for a vehicle’s occupants, adding an extra level of safety with full cabin monitoring so they can enjoy the ride.

Constant Convenience

In addition to safety, AI helps make the daily drive easier and more enjoyable.

With crystal-clear graphics, drivers can receive information about their route, as well as what the sensors on the car see, quickly and easily. Augmented reality heads-up displays and virtual reality views of the vehicle’s surroundings deliver the most important data (such as parking assistance, directions, speed and oncoming obstacles) without disrupting the driver’s line of sight.

These visualizations help build trust in the driver assistance system as well as understanding of its capabilities and limitations for a safer and more effective driving experience.

Using natural language processing, drivers can control vehicle settings without taking their eyes off the road. Conversational AI enables easy access to search queries, like finding the best coffee shops or sushi restaurants along a given route. The same system that monitors driver attention can also interpret gesture controls, providing another way for drivers to communicate with the cockpit without having to divert their gaze.

Natural language processing makes it possible to access vehicle controls without taking your eyes off the road.

These technologies can also be used to personalize the driving experience. Biometric user authentication and voice recognition allow the car to identify who is driving, and adjust settings and preferences accordingly.

AI cockpits are being integrated into more models every year, making them smarter and safer while constantly adding new features. High-performance, energy-efficient AI compute platforms consolidate in-car systems into a centralized architecture, enabling the open NVIDIA DRIVE IX software platform to meet future cockpit needs.

What used to be fanciful fiction will soon be part of our daily driving routine.

Meet the Maker: ‘Smells Like ML’ Duo Nose Where It’s at with Machine Learning

Whether you want to know if your squats have the correct form, you’re at the mirror deciding how to dress and wondering what the weather’s like, or you keep losing track of your darts score, the Smells Like ML duo have you covered — in all senses.

This maker pair is using machine learning powered by NVIDIA Jetson’s edge AI capabilities to provide smart solutions to everyday problems.

About the Makers

Behind Smells Like ML are Terry Rodriguez and Salma Mayorquin, freelance machine learning consultants based in San Francisco. The business partners met as math majors in 2013 at UC Berkeley and have been working together ever since. The duo wondered how they could apply their knowledge in theoretical mathematics more generally. Robotics, IoT and computer vision projects, they found, are the answer.

Their Inspiration

The team name, Smells Like ML, stems from the idea that the nose is often used in literature to symbolize intuition. Rodriguez described their projects as “the ongoing process of building the intuition to understand and process data, and apply machine learning in ways that are helpful to everyday life.”

To create proofs of concept for their projects, they turned to the NVIDIA Jetson platform.

“The Jetson platform makes deploying machine learning applications really friendly even to those who don’t have much of a background in the area,” said Mayorquin.

Their Favorite Jetson Projects

Of Smells Like ML’s many projects using the Jetson platform, here are some highlights:

SpecMirror — Make eye contact with this AI-powered mirror, ask it a question and it searches the web to provide an answer. The smart assistant mirror can be easily integrated into your home. It processes sound and video input simultaneously, with the help of NVIDIA Jetson Xavier NX and NVIDIA DeepStream SDK.

  • ActionAI — Whether you’re squatting, spinning or loitering, this device classifies all kinds of human movement. It’s optimized by the Jetson Nano developer kit’s pose estimation inference capabilities. Upon detecting the type of movement someone displays, it annotates the results right back onto the video it was analyzing. ActionAI can be used to prototype any products that require human movement detection, such as a yoga app or an invisible keyboard (a rough sketch of this approach follows this list).

Shoot Your Shot — Bring a little analytics to your dart game. This computer vision booth analyzes dart throws from multiple camera angles, and then scores, logs and even predicts the results. The application runs on a single Jetson Nano system on module.
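
As a rough illustration of the ActionAI idea above, the sketch below fits a classifier on windows of pose keypoints, the kind of output a Jetson pose-estimation model produces, and assigns an activity label to each window. The data, labels and window shape are illustrative assumptions, not Smells Like ML’s actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: each sample is a 16-frame window of 18 body keypoints (x, y),
# flattened into one feature vector. Real data would come from a pose estimator on Jetson.
rng = np.random.default_rng(0)
X_train = rng.random((200, 16 * 18 * 2))
y_train = rng.integers(0, 3, size=200)               # 0=squat, 1=spin, 2=loiter (made-up labels)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def classify_window(keypoint_window: np.ndarray) -> int:
    """Classify one (16, 18, 2) window of keypoints from the upstream pose estimator."""
    return int(clf.predict(keypoint_window.reshape(1, -1))[0])

print(classify_window(rng.random((16, 18, 2))))      # predicted activity label for a new window
```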

Where to Learn More 

In June, Smells Like ML won second place in NVIDIA’s AI at the Edge Hackster.io competition in the intelligent video analytics category.

For more sensory overload, check out other cool projects from Smells Like ML.

Anyone can get started on a Jetson project. Learn how on the Jetson developers page.

Smart Hospitals: DARVIS Automates PPE Checks, Hospital Inventories Amid COVID Crisis

After an exhausting 12-hour shift caring for patients, it’s hard to blame frontline workers for forgetting to sing “Happy Birthday” twice to guarantee a full 30 seconds of proper hand-washing.

Though at times tedious, the process of confirming detailed protective measures, such as the amount of time hospital employees spend sanitizing their hands, the cleaning status of a room or the number of beds available, is crucial to preventing the spread of infectious diseases such as COVID-19.

DARVIS, an AI company founded in San Francisco in 2015, automates tasks like these to make hospitals “smarter” and give hospital employees more time for patient care, as well as peace of mind for their own protection.

The company developed a COVID-19 infection-control compliance model within a month of the pandemic breaking out. It provides a structure to ensure that workers are wearing personal protective equipment and complying with hygiene protocols amidst the hectic pace of hospital operations, compounded by the pandemic. The system can also provide information on the availability of beds and other equipment.

Short for “Data Analytics Real-World Visual Information System,” DARVIS uses the NVIDIA Clara Guardian application framework, employing machine learning and advanced computer vision.

The system analyzes information processed by optical sensors, which act as the “eyes and ears” of the machine, and alerts users if a bed is clean or not, or if a worker is missing a glove, among other contextual insights. Upon providing feedback, all records are fully anonymized.

“It’s all about compliance,” said Jan-Philipp Mohr, co-founder and CEO of the company. “It’s not about surveilling workers, but giving them feedback where they could harm themselves. It’s for both worker protection and patient security.”

DARVIS is a member of NVIDIA Inception, a program that helps startups working in AI and data science accelerate their product development, prototyping and deployment.

The Smarter the Hospital, the Better

Automation in hospitals has always been critical to saving lives and increasing efficiency, said Paul Warren, vice president of Product and team lead for AI at DARVIS. However, the need for smart hospitals is all the more urgent in the midst of the COVID-19 crisis, he said.

“We talk to the frontline caregivers, the doctors, the nurses, the transport staff and figure out what part of their jobs is particularly repetitive, frustrating or complicated,” said Warren. “And if we can help automate that in real time, they’re able to do their job a lot more efficiently, which is ultimately good for improving patient outcomes.”

DARVIS can help save money as well as lives. Even before the COVID crisis, the U.S. Centers for Disease Control and Prevention estimated the annual direct medical costs of infectious diseases in U.S. hospitals to be around $45 billion, a cost bound to rise due to the global pandemic. By optimizing infection control practices and minimizing the spread of infectious disease, smart hospitals can decrease this burden, Mohr said.

To save costs and time needed to train and deploy their own devices, DARVIS uses PyTorch and TensorFlow optimized on NGC, NVIDIA’s registry of GPU-accelerated software containers.

“NVIDIA engineering efforts to optimize deep learning solutions is a game-changer for us,” said Warren. “NGC makes structuring and maintaining the infrastructure environment very easy for us.”

DARVIS’s current centralized approach runs its deep learning models on NVIDIA GPU-powered servers and large workstations within the hospital’s data center.

As they onboard more users, the company plans to also use NVIDIA DeepStream SDK on edge AI embedded systems like NVIDIA Jetson Xavier NX to scale out and deploy at hospitals in a more decentralized manner, according to Mohr.

Same Technology, Numerous Possibilities

While DARVIS was initially focused on tracking beds and inventory, user feedback led to the expansion of its platform to different areas of need.

The same technology was developed to evaluate proper usage of PPE, to analyze worker compliance with infection control practices and to account for needed equipment in an operating room.

The team at DARVIS continues to research what’s possible with their device, as well as in the field of AI more generally, as they expand and deploy their product at hospitals around the world.

Learn more about NVIDIA’s healthcare-application framework on the NVIDIA Clara developers page.

Images courtesy of DARVIS, Inc.

Learning Life’s ABCs: AI Models Read Proteins to Fight COVID-19

Ahmed Elnaggar and Michael Heinzinger are helping computers read proteins as easily as you read this sentence.

The researchers are applying the latest AI models used to understand text to the field of bioinformatics. Their work could accelerate efforts to characterize living organisms like the coronavirus.

By the end of the year, they aim to launch a website where researchers can plug in a string of amino acids that describe a protein. Within seconds, it will provide some details of the protein’s 3D structure, a key to knowing how to treat it with a drug.

Today, researchers typically search databases to get this kind of information. But the databases are growing rapidly as more proteins are sequenced, so a search can take up to 100 times longer than the approach using AI, depending on the size of a protein’s amino acid string.

In cases where a particular protein hasn’t been seen before, a database search won’t provide any useful results — but AI can.

“Twelve of the 14 proteins associated with COVID-19 are similar to well-validated proteins, but for the remaining two we have very little data — for such cases, our approach could help a lot,” said Heinzinger, a Ph.D. candidate in computational biology and bioinformatics.

While time-consuming, methods based on database searches have been 7-8 percent more accurate than previous AI methods. But using the latest models and datasets, Elnaggar and Heinzinger cut the accuracy gap in half, paving the way for a shift to using AI.

AI Models, GPUs Drive Biology Insights

“The speed at which these AI algorithms are improving makes me optimistic we can close this accuracy gap, and no field has such fast growth in datasets as computational biology, so combining these two things I think we will reach a new state of the art soon,” said Heinzinger.

“This work couldn’t have been done two years ago,” said Elnaggar, an AI specialist with a Ph.D. in transfer learning. “Without the combination of today’s bioinformatics data, new AI algorithms and the computing power from NVIDIA GPUs, it couldn’t be done,” he said.

Elnaggar and Heinzinger are team members in the Rostlab at the Technical University of Munich, which helped pioneer this field at the intersection of AI and biology. Burkhard Rost, who heads the lab, wrote a seminal paper in 1993 that set the direction.

The Semantics of Reading a Protein

The underlying concept is straightforward. Proteins, the building blocks of life, are made up of strings of amino acids that need to be interpreted sequentially, just like words in a sentence.

So, researchers like Rost started applying emerging work in natural-language processing to understand proteins. But in the 1990s they had very little data on proteins and the AI models were still fairly crude.

Fast forward to today and a lot has changed.

Sequencing has become relatively fast and cheap, generating massive datasets. And thanks to modern GPUs, advanced AI models such as BERT can interpret language in some cases better than humans.

AI Models Grow 6x in Sophistication

The breakthroughs in natural-language processing have been particularly breathtaking. Just 18 months ago, Elnaggar and Heinzinger reported on work using a version of recurrent neural network models with 90 million parameters; this month their work leveraged Transformer models with 567 million parameters.

“Transformer models are hungry for compute power, so to do this work we used 5,616 GPUs on the Summit supercomputer and even then it took up to two days to train some of the models,” said Elnaggar.

Running the models on thousands of Summit’s nodes presented challenges.

Elnaggar tells a story familiar to those who work on supercomputers. He needed lots of patience to sync and manage files, storage, comms and their overheads at such a scale. He started small, working on a few nodes, and moved a step at a time.

Patient, stepwise work paid off in scaling complex AI algorithms across thousands of GPUs on the Summit supercomputer.

“The good news is we can now use our trained models to handle inference work in the lab using a single GPU,” he said.
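
As a sense of what single-GPU inference with such a model can look like, here’s a minimal sketch using the Hugging Face transformers library. The checkpoint name is an assumption based on the Rostlab’s public releases, the sequence is a toy example, and residues are space-separated so each amino acid is treated like a word.

```python
import torch
from transformers import BertModel, BertTokenizer

model_name = "Rostlab/prot_bert"                     # assumed public checkpoint name

tokenizer = BertTokenizer.from_pretrained(model_name, do_lower_case=False)
model = BertModel.from_pretrained(model_name).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

sequence = "M K T A Y I A K Q R"                     # amino acids separated like words in a sentence
inputs = {k: v.to(device) for k, v in tokenizer(sequence, return_tensors="pt").items()}

with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state   # per-residue embeddings

print(embeddings.shape)                              # (1, residues + special tokens, hidden size)
```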

Now Available: Pretrained AI Models

Their latest paper, published in July, characterizes the pros and cons of a handful of the latest AI models they used on various tasks. The work is funded with a grant from the COVID-19 High Performance Computing Consortium.

The duo also published the first versions of their pretrained models. “Given the pandemic, it’s better to have an early release,” rather than wait until the still ongoing project is completed, Elnaggar said.

“The proposed approach has the potential to revolutionize the way we analyze protein sequences,” said Heinzinger.

The work may not in itself bring the coronavirus down, but it is likely to establish a new and more efficient research platform to attack future viruses.

Collaborating Across Two Disciplines

The project highlights two of the soft lessons of science: Keep a keen eye on the horizon and share what’s working.

“Our progress mainly comes from advances in natural-language processing that we apply to our domain — why not take a good idea and apply it to something useful,” said Heinzinger, the computational biologist.

Elnaggar, the AI specialist, agreed. “We could only succeed because of this collaboration across different fields,” he said.

See more stories online of researchers advancing science to fight COVID-19.

The image at top shows language models trained without labelled samples picking up the signal of a protein sequence that is required for DNA binding.

How B&N Keeps Designs Fresh While Working Remotely

Whether designing buildings, complex highway interchanges or water purification systems, the architects and engineers at Burgess & Niple have a common goal: bringing data together to develop incredible visuals that will be the blueprint for their design.

B&N, an engineering and architecture firm headquartered in Columbus, Ohio, with approximately 400 employees, specializes in the designing and planning of roads, buildings, bridges and utility infrastructure, such as the award-winning Southwestern Parkway Combined Sewer Overflow Basin, located in the historic Shawnee Park in Louisville, Kentucky.

To provide infrastructure designs for federal, state and local government agencies and private corporations across 10 states, often in remote locations, B&N needs to have access to its applications and data anytime and anywhere.

To enable geographically dispersed architects, engineers and designers to collaborate on these projects, B&N transitioned 100 percent of its users to virtual desktop infrastructure. The company turned to NVIDIA virtual GPU technology to provide access to graphics-intensive applications, such as Autodesk AutoCAD, Revit and Civil 3D, Bentley Systems MicroStation, and Microsoft Office 365 along with other office productivity apps.

B&N chose Dell PowerEdge servers, each installed with two NVIDIA M10 GPUs with NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) software and VMware ESXi 6.7 U3. The systems enable the company to maintain the same level of productivity and performance in virtual workstations as it would have running the applications natively on physical workstations.

Because it was already using VDI, the company was able to shift to conducting business at home during the COVID-19 outbreak almost immediately and has continued to collaborate seamlessly and efficiently in real time.

“It is very much business as usual for us, which is pretty remarkable given the circumstances,” said Rod Dickerson, chief technology officer at B&N.

VDI enables B&N to keep data centralized in the data center to protect intellectual property and enable quick access across different locations without version control issues and lengthy upload and download times.

“The files that we work with are very large, which makes sharing them between engineers using traditional means difficult, especially with the inconsistencies in residential broadband during the COVID-19 pandemic,” said Dickerson. “Using VDI with NVIDIA GPUs and Quadro vDWS allows us to maintain our productivity output regardless of our physical locations.”

NVIDIA Quadro vDWS: Work from Anywhere Without Sacrificing Performance

Keeping its employees productive while working from home has not only allowed B&N to continue work with existing clients without delays, but has also helped it win new projects. In Ohio, the firm interviewed for a new highway interchange design project and won in part thanks to its ability to collaborate seamlessly internally and present to the client virtually.

The firm was also able to accelerate construction on a project in Indiana when the opportunity arose due to stay-at-home orders. Without NVIDIA Quadro vDWS providing access to the needed applications and infrastructure, it would have been difficult to meet the accelerated schedule.

“Across the board, project delivery continues to be seamless as a result of VDI with NVIDIA vGPU,” said Dickerson. “We are continuing to work as if we were in the office with the same level of performance.”

B&N has added about 40 new employees since the pandemic started. Onboarding is simplified because they can begin working with high-performance virtual workstations on their very first day. With virtual machines centrally located in the data center, the B&N IT team can easily maintain and manage the client computing environment as well.

B&N has used NVIDIA vGPUs to provide employees with workstation performance with the mobility and security of virtualization, effectively eliminating physical boundaries.

“NVIDIA vGPU is central to our business continuity strategy and has proven not only viable, but vital,” said Dickerson. “We would have significant business continuity issues if it weren’t for our implementation of NVIDIA vGPU technology.”

Learn more about how NVIDIA vGPU helps employees to work remotely.

Not So Taxing: Intuit Uses AI to Make Tax Day Easier

Understanding the U.S. tax code can take years of study — it’s 80,000 pages long. Software company Intuit has decided that it’s a job for AI.

Ashok Srivastava, its senior vice president and chief data officer, spoke to AI Podcast host Noah Kravitz about how the company is utilizing machine learning to help customers with taxes and aid small businesses through the financial effects of COVID-19.

To help small businesses, Intuit has a range of programs such as the Intuit Aid Assist Program, which helps business owners figure out if they’re eligible for loans from the government. Other programs include cash flow forecasting, which estimates how much money businesses will have within a certain time frame.

And in the long term, Intuit is working on a machine learning program capable of using photos of financial documents to automatically extract necessary information and fill in tax documents.

Key Points From This Episode:

  • Intuit frequently uses an AI technique called knowledge engineering, which converts written regulations or rules into code, providing the information behind programs such as TurboTax (a simplified illustration of this pattern follows the list below).
  • Intuit also provides natural language processing and chatbot services, which use a customer’s questions as well as their feedback and product usage to determine the best reply.
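
Knowledge engineering in this sense can be as simple as turning a written rule into an executable check. The sketch below encodes a made-up eligibility rule to show the pattern; it is not an actual Intuit product rule or government regulation.

```python
from dataclasses import dataclass

@dataclass
class Business:
    employees: int
    months_operating: int
    monthly_payroll: float

def loan_eligible(b: Business) -> bool:
    """Hypothetical rule for illustration: fewer than 500 employees, at least six
    months in operation and an active payroll."""
    return b.employees < 500 and b.months_operating >= 6 and b.monthly_payroll > 0

print(loan_eligible(Business(employees=12, months_operating=18, monthly_payroll=40_000.0)))  # True
```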

Tweetables:

“We’re spending our time not only analyzing data, but thinking about new ways that we can use artificial intelligence in order to help small businesses.” — Ashok Srivastava [10:23]

“Data and artificial intelligence are going to come into play again and again to help people … make the best decisions about their own financial future.” — Ashok Srivastava [26:43]

You Might Also Like

AI Startup Brings Computer Vision to Customer Service

When your appliances break, the last thing you want to do is spend an hour on the phone trying to reach a customer service representative. Using computer vision, Drishyam.AI is eliminating service lines to help consumers more quickly.

Dial A for AI: Charter Boosts Customer Service with AI

Charter Communications is working to make customer service smarter before an operator even picks up the phone. Senior Director of Wireless Engineering Jared Ritter speaks about Charter’s perspective on customer relations.

What’s in Your Wallet? For Capital One, the Answer is AI

Nitzan Mekel, managing vice president of machine learning at Capital One, explains how the banking giant is integrating AI and machine learning into customer-facing applications such as fraud-monitoring and detection, call center operations and customer experience.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.
