Startup Green Lights AI Analytics to Improve Traffic, Pedestrian Safety

For all the attention devoted to self-driving cars, there’s another, often-overlooked, aspect to transportation efficiency and safety: smarter roads.

Derq, a startup operating out of Detroit and Dubai, has developed an AI system that can be installed at intersections and along highways. Its AI edge appliance uses NVIDIA GPUs to process video and other data from cameras and radars to predict crashes before they happen and to warn connected road users. It can also understand roadways better, with applications ranging from accurate traffic counts to predicting the spots on roads most prone to crashes.

Derq CEO and co-founder Georges Aoude says his fascination with automotive safety systems stretches back to long weekend drives with his family to visit relatives. He’d wonder why these fast-moving hunks of metal didn’t collide more often.

Time revealed to him that vehicles frequently do, sometimes to deadly effect. In fact, 1.35 million people around the globe perish in auto accidents every year, and millions more are seriously injured.

“Many of our team members have been touched by deadly road crashes,” said Aoude, who himself has lost two relatives on the roads. “This only makes us more determined to get our technology out there to make roadways safer for all.”

As a scholar at MIT, Aoude worked on autonomous satellites and then drone safety, before doing his Ph.D. work on autonomous vehicles. During his graduate work, he began to envision smart cities that work in tandem with autonomous vehicles, and the seed for Derq was planted. Along the way, he received a patent for AI systems that can predict dangerous behaviors.

Dissecting an Intersection

For its initial use case, Derq zeroed in on intersection and pedestrian safety. It chose to test its technology at a busy downtown street crossing considered one of Detroit’s most dangerous. The intersection had no cameras or radars, so Derq procured those and began its monitoring work.

For the past three years, it’s been capturing footage of tens of thousands of vehicles and road users a day, 24/7, at that intersection, and training its AI models on it. In addition to predicting red-light violations and dangerous pedestrian movements, the company has been using that data to refine those models. As it monitors more intersections and roadways, Derq is constantly expanding its models. All data is anonymized, and no personal information from road users is ever collected or stored.

Those models can determine which actions have the potential to cause crashes and which don’t, and Derq strives for over 95 percent accuracy. Incidents considered high risk are uploaded to Derq’s GPU cloud instance for further analysis and documentation. Eventually, the system will be able to notify connected and autonomous cars and warn them of impending dangers.

“If the driver detects someone is running a red light, that’s too late,” Aoude said. “If you tell the driver two seconds in advance, they now have the chance to react and avoid the collision.”
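
To make those two seconds concrete, here’s a minimal sketch of the kind of check an intersection appliance could run on each tracked vehicle approaching a signalized crossing. The heuristic, thresholds and field names below are illustrative assumptions for the sake of the example, not Derq’s actual learned models.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """State of one vehicle approaching the crossing, from camera/radar fusion."""
    distance_to_stop_bar_m: float
    speed_mps: float

def likely_red_light_violation(track: Track, time_to_red_s: float,
                               comfortable_decel_mps2: float = 3.0) -> bool:
    """Rough heuristic: the vehicle will arrive after the light turns red,
    and it is already too close to stop at a comfortable deceleration."""
    if track.speed_mps < 1.0:          # effectively stopped
        return False
    eta_s = track.distance_to_stop_bar_m / track.speed_mps
    stopping_dist_m = track.speed_mps ** 2 / (2 * comfortable_decel_mps2)
    return eta_s > time_to_red_s and stopping_dist_m > track.distance_to_stop_bar_m

def warn_if_needed(track: Track, time_to_red_s: float, lead_time_s: float = 2.0) -> None:
    """Alert connected road users only if the warning still leaves time to react."""
    if likely_red_light_violation(track, time_to_red_s):
        eta_s = track.distance_to_stop_bar_m / track.speed_mps
        if eta_s >= lead_time_s:
            print(f"WARNING: predicted red-light violation, vehicle arrives in {eta_s:.1f} s")

# Example: 15 m/s (~34 mph), 30 m from the stop bar, light turns red in 1.5 s.
warn_if_needed(Track(distance_to_stop_bar_m=30.0, speed_mps=15.0), time_to_red_s=1.5)
```

In production, of course, the violation probability would come from models trained on the intersection footage rather than a fixed physics rule.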

The Detroit test demonstrated that the system was able to identify potential crashes effectively. Having seen the technology, the Michigan Department of Transportation has expanded its engagement with Derq, which was recently awarded a federal project to deploy its technology at 65 key intersections along 25 miles of connected roads in the Motor City.

Derq has also been deploying its technology in Dublin, Ohio, a suburb of Columbus, and in Las Vegas, and is running a pilot with the California Department of Transportation. It’s in discussions with cities in Florida and Texas, as well as in Canada. Further afield, it has deployed its systems in Dubai and is working with the road and transport agency there on expanding that deployment as well.

Choosing the Right GPU

Derq’s edge units are equipped with NVIDIA GPUs and hardware acceleration that enable them to process large amounts of internet of things data in real time. It’s developing for a variety of NVIDIA hardware, including RTX and T4 GPUs, as well as the Jetson Nano, AGX Xavier and TX2 modules.

But GPUs are just one part of Derq’s relationship with NVIDIA. As a member of the NVIDIA Inception program for AI startups, Derq has had an Inception team visit its Dubai office to provide support, such as guidance on optimizing its use of GPUs, Aoude said. The company is also in the process of being incorporated into the NVIDIA Metropolis smart spaces AI platform, which Aoude said “will be a great platform to help us scale.”

The company has focused its product on two main unit sizes. A basic unit runs a single application — think of a box that’s deployed at a pedestrian crosswalk — a perfect pairing for a Jetson AGX Xavier. More elaborate boxes, which run more apps on more camera streams at complex and busy intersections, will rely on the more powerful NVIDIA T4 or RTX GPUs.

Derq will also provide traffic planners and engineers with valuable statistics and intelligence, such as real-time graphs, incident notifications and heatmaps, that can help them assess and improve road safety. The system also can provide forensics for government agencies and insurance companies investigating crashes, an area that the company is exploring with insurance providers.

Going forward, Aoude said Derq plans to scale quickly through partnerships with cities, smart infrastructure providers and autonomous vehicle firms, so that it can get its technology deployed as widely as possible before autonomous vehicles hit the roadways in large numbers.

“We can’t do it on our own,” he said. “We need forward-thinking cities and collaborative partners, and get ready to scale deployments together, to achieve vision zero – eliminating all road fatalities.”


Is AI Important to Financial Services’ Future? New Survey Says You Can Bank on It

Financial services companies are challenged with defining and executing their AI strategy.

AI solutions contribute to both the top and bottom line for firms by powering nearly every function, including customer service, cybersecurity, new account acquisition and regulatory compliance.

Everyone from executives to data scientists is involved in determining how much to invest, the most profitable use cases to pursue and the biggest challenges that must be overcome in 2021 and beyond.

These are some of the findings of NVIDIA’s recent survey of over 200 financial services professionals from around the world. To fill in a more complete picture of how financial services institutions are using AI, and where it’s headed, our “State of AI in Financial Services” survey consisted of questions covering a range of AI topics, such as deployment models, infrastructure spending, top use cases and biggest challenges. Respondents included C-suite leaders, managers, developers and IT architects from fintechs, investment firms and retail banks.

Getting a Pulse on AI in Financial Services

The survey results showed two consistent themes: AI provides a competitive advantage in financial services, and banks plan to invest significantly in AI infrastructure to unlock its full potential.

Among different roles and subsectors within the industry, the survey data showed finer differences in how AI can best be deployed and the specific challenges for business decision makers and technical implementers.

Three highlights stood out among the survey results:

AI-Enabled Services Grow Revenue and Cut Costs

Our respondents were in widespread agreement on the value of enterprise AI, as 83 percent agreed with the statement that “AI is important to my company’s future success.”

The survey results showed how financial services firms view AI as an enabler of growth opportunities. Over half of those surveyed who had an opinion stated AI will increase their company’s annual revenue by 10 percent or more. In contrast, only 12 percent of respondents — excluding those who marked “Don’t Know” — stated that AI is having no impact on their revenue growth.

AI can also improve the bottom line of financial services institutions through cost savings. For instance, banks, insurers and asset managers are creating significant efficiencies in their daily operations using technologies such as conversational AI, robotic process automation, optical character recognition and other machine learning and deep learning applications.

These AI services save time and reduce expenditures by automating insurance claims processing, augmenting call center agents via automated speech recognition for call transcription and carrying out other manually intensive services.

Passing AI Benefits to Customers

Survey respondents said the top three areas where AI affected their companies were yielding more accurate models (42 percent), creating a competitive advantage (41 percent) and building new products (34 percent).

Utilizing AI to create more accurate models means better outcomes for banks and their customers, particularly in protecting against fraud and maximizing investment returns. These benefits translate into competitive advantage that often leads to increased market share and greater shareholder value. New products from AI enable cross-sell opportunities through enhanced personalization, which generates higher customer retention.

Challenges to Achieving AI Goals

While the benefits of leveraging AI in financial services are unmistakable, the journey from research to enterprise-scale production for AI models within banks, insurers and asset managers is marked with potential pitfalls and challenges.

Our survey identified those barriers, starting with the biggest challenges to achieving a company’s AI goals. The top three cited by respondents were too few data scientists (38 percent), insufficient technology infrastructure (35 percent) and a lack of data (35 percent).

The C-suite is looking to overcome these challenges by building AI expertise across the enterprise. Sixty percent of C-level executives responded that their biggest focus moving forward is identifying additional AI use cases. One in two respondents from the C-suite noted that their company also plans to hire more AI experts — addressing the gap of too few data scientists.

These findings warrant further exploration, especially in the context of new AI frameworks and platforms for smarter banking.

Popular AI Use Cases for Financial Services

Survey respondents from fintechs and investment firms highlighted portfolio optimization and algorithmic trading as the top AI use cases their companies currently invest in. This data can be understood in the context of maximizing client returns on investment.

Respondents from commercial and retail banks, on the other hand, noted that their companies are mainly investing in AI for fraud detection through payments, transactions and anti-money laundering. These survey results reflect a primary focus on protecting sensitive financial data for their customers.

Powering the Future of Banking with Enterprise AI

With these top use cases for AI in financial services, and dozens if not hundreds more available to banks, insurers and asset managers, the industry is understandably looking to grow its investment in AI. Sixty-two percent of our survey respondents — excluding those who marked “Don’t Know” — agreed that their company should spend more on AI applications.

Financial services professionals not only see the potential in AI, but are willing to invest more to deliver on its promise. That potential is actively being realized by companies that see AI generating competitive advantage, creating new products, adding significant revenues to the top line and reducing costs to grow the bottom line.

As new use cases are identified and AI becomes more pervasive across organizations, the next challenge for C-suite and IT leadership will be creating enterprise-level AI platforms that deliver the productivity, scalability and return on investment necessary to support the variety of AI teams across their companies.

And, instead of starting from scratch, data scientists building models for a variety of use cases can utilize containers from NGC, NVIDIA’s hub of GPU-optimized software. These range from NVIDIA Jarvis, for automated speech recognition and speech-to-text call center transcription, to NVIDIA Merlin, an application framework for recommender systems.

To learn more about AI in the future of finance, download the survey report for more in-depth results.

And join GTC 2021 for free to hear from industry experts at Citibank, Morgan Stanley, Munich Re, Scotiabank, Wells Fargo and other leading financial institutions.


A Plus for Autonomous Trucking: Startup to Build Next-Gen Self-Driving Platform with NVIDIA DRIVE Orin

The autonomous trucking industry is about to get a major new addition.

Self-driving truck company Plus announced that its upcoming autonomous vehicle platform will be built on NVIDIA DRIVE Orin. This software-defined system will continuously improve upon the safety and efficiency of the delivery and logistics industry with high-performance compute and AI algorithms that can be updated over the air.

The self-driving system, known as PlusDrive, can be retrofitted to existing trucks or added as an upfit option on new vehicles by manufacturers. The company plans to roll out the next-generation platform in 2022 and has already received more than 10,000 pre-orders.

With DRIVE Orin at the center, PlusDrive uses lidar, radar and cameras to provide a 360-degree view of the truck’s surroundings. Data gathered through the sensors helps the system identify objects nearby, plan its course, predict the movement of those objects and control the vehicle to make its next move safely.
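
As a rough illustration of that perceive-predict-plan loop (not Plus’s actual software), a minimal sketch might look like the following. The constant-velocity prediction, 50-meter gap threshold and stubbed obstacle are assumptions chosen only to make the flow concrete.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Obstacle:
    position_m: Tuple[float, float]   # (x, y) relative to the truck, x forward
    velocity_mps: Tuple[float, float]

def perceive(lidar=None, radar=None, cameras=None) -> List[Obstacle]:
    """Stand-in for the fused 360-degree perception stack."""
    return [Obstacle(position_m=(60.0, 0.0), velocity_mps=(-5.0, 0.0))]

def predict(obstacles: List[Obstacle], horizon_s: float = 3.0) -> List[Tuple[float, float]]:
    """Constant-velocity extrapolation of each obstacle over the planning horizon."""
    return [(o.position_m[0] + o.velocity_mps[0] * horizon_s,
             o.position_m[1] + o.velocity_mps[1] * horizon_s) for o in obstacles]

def plan(predicted: List[Tuple[float, float]], min_gap_m: float = 50.0) -> str:
    """Pick a simple maneuver: keep the lane unless something will be too close ahead."""
    too_close = any(0.0 < x < min_gap_m and abs(y) < 2.0 for x, y in predicted)
    return "slow_down" if too_close else "keep_lane"

def control_step() -> str:
    return plan(predict(perceive()))

print(control_step())  # -> "slow_down": the predicted gap shrinks below 50 m
```

A real truck-scale planner also has to account for long stopping distances and trailer dynamics, which is where the compute headroom of DRIVE Orin comes in.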

Innovating an Industry

Trucking has played an increasingly vital role in the world economy. For example, commercial vehicles transport more than 70 percent of all freight in the U.S. as e-commerce and next-day delivery have skyrocketed in popularity.

These trends come as driver shortages accelerate. The American Trucking Associations estimates the industry could be short 160,000 drivers by 2028, and limits on the number of hours drivers can work consecutively further restrict operations. With the ability to operate efficiently around the clock, autonomous trucks promise to ease many of these industry pressures.

In contrast to lighter passenger vehicles, trucks can total 80,000 pounds with a fully loaded trailer and need a longer time to come to a stop. Developing autonomous commercial vehicles requires a system that can perceive long distances in real time to enable these trucks to safely maneuver.

By leveraging DRIVE Orin, Plus is building an intelligent, software-defined solution to address these challenges in the near term.

Big Brains for Big Rigs

Autonomous vehicles require high-performance compute for truly intelligent operation.

NVIDIA Orin is a system-on-a-chip born out of the data center. It achieves 254 TOPS — nearly 8x the performance of the previous generation Xavier SoC — and is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous trucks, while achieving systematic safety standards such as ISO 26262 ASIL-D.

This massive compute capability ensures the PlusDrive system is continuously learning, expanding the environments and situations in which it can operate autonomously.

“The greatest computing power is needed to process the trillions of operations that our autonomous driving system runs every second,” said Hao Zheng, Plus CTO and co-founder. “NVIDIA Orin is a natural choice for us.”

Plus is also a member of NVIDIA Inception, an accelerator program for startups in the AI and data science fields.

“The early access to NVIDIA’s autonomous vehicle technologies roadmap and solutions through the Inception program enables us to evaluate and adopt the latest hardware to provide the powerful real time performance required of our autonomous trucking system,” Zheng said.

Software-Defined Progress

Plus has already achieved significant self-driving milestones, and with a new software-defined platform, the company is giving itself plenty of room to excel further.

In 2019, Plus made history in the U.S. by successfully completing a cross-country autonomous trucking route in just three days. The company has conducted testing in 17 states and is currently operating commercial pilots with leading shipping companies in both the U.S. and China.

With the new Orin-based system, PlusDrive’s capabilities will continue to get better as AI technology advances. Orin’s massive compute headroom makes it easier to integrate and update advanced software features as they’re developed, enabling these intelligent trucks to continue down the open road of safer, more efficient delivery and logistics.


From Audi to Zoox: Autonomous Vehicle Innovators to Showcase Latest Breakthroughs at GTC 2021

This April, learn about the future of AI-powered transportation from those who are building it.

The NVIDIA GPU Technology Conference returns to the virtual stage April 12-16, featuring autonomous vehicle leaders in a range of talks, panels and virtual networking events. Attendees will also have access to hands-on training for self-driving development and other deep learning topics. Registration is free of charge.

Experience the latest advancements in safer, more efficient transportation, from training in the data center, to high-fidelity simulation testing, to new robotaxi and autonomous trucking pilots around the world.

With digital networking events and sessions with live Q&A, GTC is the only place where it’s possible to interact with researchers, engineers, developers and technologists in the autonomous vehicle space, as well as healthcare, robotics, graphics and numerous other industries.

The week will kick off with a not-to-be-missed opening keynote from NVIDIA founder and CEO Jensen Huang that will be packed with the latest NVIDIA news and breakthroughs.

Here’s a sneak peek at the automotive offerings at this year’s event.

Expert Lineup

With rapid advances in AI technology, it can be difficult to follow the flurry of headlines and announcements surrounding autonomous vehicles. GTC provides the unique opportunity to hear about these innovations from those in the trenches of self-driving development — from global automakers and suppliers to startups and researchers.

Automotive session highlights include:

  • Jesse Levinson, CTO and co-founder of Zoox, sits down for a fireside chat on the robotaxi startup’s recently unveiled vehicle and upcoming technology roadmap.
  • Hildegard Wortmann, member of the board of management at Audi, discusses how the global automaker is addressing the largest transformation in the history of the automotive industry, driven by digitalization, electrification and sustainability.
  • Mo ElShenawy, senior vice president of engineering at Cruise, covers the challenges and benefits of building an autonomous vehicle from the ground up.
  • Raquel Urtasun, professor at the University of Toronto and chief scientist at Uber ATG Toronto, outlines the upcoming autonomous future, where robotics and AI are intertwined in daily life, and the technological challenges that remain to achieve this new era.
  • Gavriel State, senior director of system software at NVIDIA, showcases the NVIDIA DRIVE Sim platform on Omniverse, generating synthetic data to comprehensively train deep neural networks for autonomous vehicle applications.
  • Cetin Mericli, CEO of Locomation, explains the technology as well as the challenges behind human-guided autonomous convoy systems for self-driving long-haul trucks.

Give AI a Try

Attendees don’t just hear about AI innovation at GTC; they can also get hands-on with new deep learning techniques to push the technology further.

Courses hosted by the NVIDIA Deep Learning Institute provide a foundation for developing AI applications, including autonomous driving. In Deep Learning for Autonomous Vehicles — Perception, DLI attendees can learn how to design, train and deploy deep neural networks and optimize perception components for autonomous vehicles using the NVIDIA DRIVE development platform.

Additional topics include the fundamentals of deep learning, natural language processing and deep learning for multiple GPUs.

Take advantage of free GTC registration today. Short, hands-on DLI training sessions are available at no charge, while full-day workshops are priced at $249.

Don’t miss out on the only opportunity to meet, listen to and learn from the biggest stars in AI and autonomous vehicle technology.


How to Build Your Game Library in the Cloud

With GeForce NOW, over 5 million gamers are playing their favorite games in the cloud on PC, Mac, Chromebook, NVIDIA SHIELD TV, Android and iOS devices.

With over 800 instantly available games and 80+ free-to-play games, there’s something for everyone. And there are multiple ways to build your library.

We’ll review how to sync your Steam game catalog, search for games and add games to your library, and we’ll explain the difference between Instant Access and Single Session games.

Start with a Steam Sync

GeForce NOW members, on average, have about 50 games in their Steam catalog. The quickest and easiest way to find all the instant access games you already own is by using the Game Sync option.

Simply launch GeForce NOW, select the Sync Your Games tile and log in to Steam. Now all your GeForce NOW supported Steam games appear in your GeForce NOW library.

Sync your entire Steam game library in about one minute.

Members can also sync their Steam library by going to Settings > Game Sync.

Once your Steam account is authorized, you can update your GeForce NOW library from Settings > Game Sync every time you purchase a new Steam game. It’s the easiest way to keep your games catalog up to date in the GeForce NOW app.

Discover Games on GeForce NOW

The complete GeForce NOW games list is available online, including the latest releases.

Members can also find games in GeForce NOW using the search function, a powerful tool that lets you search by game title, developer, publisher, genre or keyword. Try searching for “kittens” and see what you find.

GeForce NOW expands its games library each week. The latest games are available day-and-date, while back catalog games are added every Thursday. Check back regularly to see what’s new.

Add Your Games to Your GeForce NOW Library

Owners of games from Ubisoft Connect, Epic Games Store (looking at you, Fortnite fans) or custom game launchers can search for games and manually add them to My Library by clicking +LIBRARY.

After that, click PLAY. If it’s your first time playing that game, just sign in to your account when prompted, and you’re ready to game.

Suddenly your games are available on all the devices you own.

Buying Games to Play with GeForce NOW

GeForce NOW is a powerful gaming PC in the cloud — not a store.

Even though it’s not a store, members can still buy Steam games while in the app. There are two ways to do this. Add the game to your library, click PLAY, log in to Steam and purchase as you normally would. Or launch Steam to browse and make purchases.

For other game stores, or custom game launchers, you can purchase a supported game through the respective store, then go back into the GeForce NOW app to add it as outlined above.

Discover a game you want to play? Check its game page to find which store it’s supported on.

Featured GeForce NOW supported games are “Instant Access.” These appear within the app and can be added to your GeForce NOW library in less than a minute, with optimized graphics settings for streaming applied automatically.

Don’t forget to check out weekly free games on the Epic Games Store, which are often supported on GeForce NOW.

Game Selection Explained

It’s important to remember how new cloud gaming is to NVIDIA and, especially, game publishers. Adding games to a new platform is a critical decision for publishers. Remember what Netflix was like when it first launched? Sending DVDs through the mail? Bold, but undeniably forward-thinking.

GeForce NOW uses an opt-in process to make it easy for developers and publishers to join the service. NVIDIA signs an agreement with every game developer before publishing their titles on GeForce NOW. There is no cost for developers — games just run without difficult porting requirements — helping them reach millions of players who don’t have game-ready PCs.

Now, Get YOUR Game On

GeForce NOW supports more than 800 games in total. More than 80 of those games are free-to-play. That’s a great way to start building your library, especially for members who are new to PC gaming.

Members can build their library with 80+ (and counting) free-to-play games.

Sign up, download the app, browse a wide selection of titles, click the game of your choice and begin your cloud gaming journey with GeForce NOW.

Editor’s note: This is the second in a series on the GeForce NOW game-streaming service, how it works and where it’s going next. To learn more about cloud gaming, and the origins of GeForce NOW, read part one: GeForce NOW Explained: What is Cloud Gaming?

In our next blog, we’ll talk about latency, what it is and tips to improve your GeForce NOW experience.

Follow GeForce NOW on Facebook and Twitter and stay up to date on the latest features and game launches. 


Innovators, Researchers, Industry Leaders: Meet the Women Headlining at GTC

An A-list of female AI researchers and industry executives will take the stage at next month’s GPU Technology Conference to share the latest breakthroughs in every industry imaginable.

Recognized in Forbes as a top conference for women to attend to further their careers in AI, GTC runs online, April 12-16, and is set to draw tens of thousands of technologists, business leaders and creators from around the world.

GTC will kick off with a livestreamed keynote by NVIDIA founder and CEO Jensen Huang, and feature 1,300 speaker sessions, including hundreds from standout women speakers.

Industry, Public Sector Luminaries

AI and accelerated applications are driving innovations across the public and private sectors, spanning healthcare, robotics, design and more. At GTC, Danielle Merfeld, vice president and chief technology officer of GE Renewable Energy, will present a session, as will Jie Chen, managing director in corporate model risk at Wells Fargo.

Audi’s Hildegard Wortmann, member of the company’s board of management, will deliver a talk on digitalization, electrification and sustainability in the automotive industry. And Vicki Dobbs Beck, executive in charge at visual effects studio ILMxLAB, will speak about immersive entertainment.

Speakers from the federal sector include Lauren Knausenberger, chief information officer of the U.S. Air Force, and Suzette Kent, former federal chief information officer of the United States.

Pioneering AI, HPC Researchers

Several research luminaries will speak at GTC, including Rommie Amaro of the University of California at San Diego, winner of a special Gordon Bell Prize for work fighting COVID-19. So too will another pioneer in AI and healthcare, Daphne Koller, adjunct professor of computer science and pathology at Stanford University.

Leaders from NVIDIA Research — including Anima Anandkumar, director of machine learning research; Sanja Fidler, director of AI; and Kate Kallot, head of emerging areas — will present groundbreaking work. Raquel Urtasun, professor at the University of Toronto, will discuss her work in machine perception for self-driving cars.

Female Founders

The number of female-founded companies has doubled since 2009, a Crunchbase report found in 2019. GTC features female founders from around the world, such as Nigeria-based Ada Nduka Oyom, founder of the nonprofit organization She Code Africa.

Nora Khaldi, founder and CEO of Ireland-based startup Nuritas, will speak at GTC about how AI startups are revolutionizing healthcare around the world. And from Silicon Valley, Inga Petryaevskaya, co-founder and CEO of virtual reality startup Tvori, will present a talk on the role of AI startups in media and entertainment.

Advancing Inclusion at GTC

In addition to women speakers, GTC has grown its representation of female attendees by almost 4x since 2017 — and strengthened support for underrepresented developers and scientists through education, training and networking opportunities.

We’ve partnered with a variety of women-in-tech groups, and boosted GTC participation across underrepresented developer communities by working with organizations such as the National Society of Black Engineers, Black in AI and Latinx in AI to offer access to GTC content and online training from NVIDIA’s Deep Learning Institute.

This GTC, NVIDIA’s Women in Technology employee community is hosting an all-female panel sharing tools, strategies and frameworks to keep up with the pace of AI innovation during the pandemic. The group will also hold a networking event hosted by NVIDIA women and fellow industry leaders.

Registration for GTC is free, and provides access to more than 1,300 talks as well as dozens of hands-on training sessions, demos and networking events.

Main image shows GTC speakers (clockwise from top left) Kate Kallot, Raquel Urtasun, Jie Chen and Daphne Koller.


How Suite It Is: NVIDIA and VMware Deliver AI-Ready Enterprise Platform

As enterprises modernize their data centers to power AI-driven applications and data science, NVIDIA and VMware are making it easier than ever to develop and deploy a multitude of different AI workloads in the modern hybrid cloud.

The companies have teamed up to optimize the just-announced update to vSphere — VMware vSphere 7 Update 2 — for AI applications with the NVIDIA AI Enterprise software suite (see Figure 1 below). This combination enables scale-out, multi-node performance and compatibility for a vast set of accelerated CUDA applications, AI frameworks, models and SDKs for the hundreds of thousands of enterprises that use vSphere for server virtualization.

Through this first-of-its-kind industry collaboration, AI researchers, data scientists and developers gain the software they need to deliver successful AI projects, while IT professionals acquire the ability to support AI using the tools they’re most familiar with for managing large-scale data centers, without compromise.

Figure 1: NVIDIA AI Enterprise for VMware vSphere runs on NVIDIA-Certified Systems to make it easy for IT to deploy virtualized AI at scale.

One Suite Package for AI Enterprise

NVIDIA AI Enterprise is a comprehensive suite of enterprise-grade AI tools and frameworks that optimize business processes and boost efficiency for a broad range of key industries, including manufacturing, logistics, financial services, retail and healthcare. With NVIDIA AI Enterprise, scientists and AI researchers have easy access to NVIDIA’s leading AI tools to power AI development across projects spanning advanced diagnostics, smart factories, fraud detection and more.

The solution overcomes the complexity of deploying individual AI applications, as well as the potential failures that can result from having to manually provision and manage different applications and infrastructure software that can often be incompatible.

With NVIDIA AI Enterprise running on vSphere, customers can avoid silos of AI-specific systems that are difficult to manage and secure. They can also mitigate the risks of shadow AI deployments, where data scientists and machine learning engineers procure resources outside of the IT ecosystem.

Licensed by NVIDIA, AI Enterprise for vSphere is supported on NVIDIA-Certified Systems, which include mainstream servers from Dell Technologies, HPE, Lenovo and Supermicro. This allows even the most modern, demanding AI applications to be supported just as easily as traditional enterprise workloads, on common infrastructure and using data center management tools like VMware vCenter.

IT can manage availability, optimize resource allocation and enable the security of its valuable IP and customer data for AI workloads running on premises and in the hybrid cloud.

Scalable, Multi-Node, Virtualized AI Performance

NVIDIA AI Enterprise enables virtual workloads to run at near bare-metal performance on vSphere with support for the record-breaking performance of NVIDIA A100 GPUs for AI and data science (see Chart 1 below). AI workloads can now scale across multiple nodes, allowing even the largest deep learning training models to run on VMware Cloud Foundation.

Chart 1: With NVIDIA AI Enterprise for vSphere, distributed deep learning training scales linearly across multiple nodes and delivers performance that is indistinguishable from bare metal.

AI workloads come in all sizes with a wide variety of data requirements. Some process images, like live traffic reporting systems or online shopping recommender systems. Others are text-based, like a customer service support system powered by conversational AI.

Training an AI model can be incredibly data intensive and requires scale-out performance across multiple GPUs in multiple nodes. Running inference on a model in deployment usually requires fewer computing resources and may not need the power of a whole GPU.

Through the collaboration between NVIDIA and VMware, vSphere is the only server virtualization software to provide hypervisor support for live migration with NVIDIA Multi-Instance GPU technology. With MIG, each A100 GPU can be partitioned into up to seven instances at the hardware level to maximize efficiency for workloads of all sizes.

Extensive Resources for AI Applications and Infrastructure

NVIDIA AI Enterprise includes key technologies and software from NVIDIA for the rapid deployment, management and scaling of AI workloads in virtualized data centers running on VMware Cloud Foundation.

NVIDIA AI Enterprise is a certified, end-to-end suite of key NVIDIA AI technologies and applications as well as enterprise support services.

Customers who would like to adopt NVIDIA AI Enterprise as they upgrade to vSphere 7 U2 can contact NVIDIA and VMware to discuss their needs.

For more information on bringing AI to VMware-based data centers, read the NVIDIA developer blog and the VMware vSphere 7 U2 blog.

To further develop AI expertise with NVIDIA and VMware, register for free for GTC 2021.


Artists: Unleash Your Marble Arts in NVIDIA Omniverse Design Challenge

Artists, this is your chance to push past creative limits — and win great prizes — while exploring NVIDIA Omniverse through a new design contest.

Called “Create with Marbles,” the contest is set in Omniverse, the groundbreaking platform for virtual collaboration, creation and simulation. It’s based on the Marbles RTX demo, which first previewed at GTC last year and showcases how complex physics can be simulated in a real-time, ray-traced world.

In the design challenge, artists can experience how Omniverse is transforming the future of real-time graphics. The best-designed sets will win fantastic prizes: the top three entries will receive an NVIDIA RTX A6000, GeForce RTX 3090 and GeForce RTX 3080 GPU, respectively.

Creators from all over the world are invited to experiment with Omniverse and take the first step by staging a scene with all the objects in the original Marbles demo. No modeling or animation is required — challenge participants will have access to over 100 Marbles assets, which they can use to easily assemble their own scene.

The “Create with Marbles” contest is the perfect opportunity for creators to dive into Omniverse and familiarize themselves with the platform. Install Omniverse Create to access any of the available Marbles assets. Leverage the tools built into Omniverse Create, or pick your favorite content creation application that connects to the platform through an Omniverse Connector. Then render photorealistic graphics using the RTX Renderer, and submit your final scene to showcase your work.

Explore Omniverse and use existing assets to unleash your artistic creativity through scene composition, camera settings and lighting.

Entries will be judged on various criteria, including the use of Omniverse Create and Marbles assets, the quality of the final render and overall originality.

Guest judges include Rachel Rose, the research and development supervisor at Industrial Light & Magic; Yangtian Li, senior concept artist at Singularity 6; and Lynn Yang, freelance concept artist and matte painter.

Deadline for submissions is April 2, 2021. The winners of the contest will be announced on the contest winners page in mid-April.

Learn more about the “Create with Marbles” challenge, and start creating in Omniverse today.


In Genomics Breakthrough, Harvard, NVIDIA Researchers Use AI to Spot Active Areas in Cell DNA

Like a traveler who overpacks a suitcase with a closet’s worth of clothes, most cells in the body carry around a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus.

But an individual cell pulls out only the subsection of genetic apparel that it needs to function, with each cell type — such as liver, blood or skin cells — activating different genes. The regions of DNA that determine a cell’s unique function are opened up for easy access, while the rest remains wadded up around proteins.

Researchers from NVIDIA and Harvard University’s Department of Stem Cell and Regenerative Biology have developed a deep learning toolkit to help scientists study these accessible regions of DNA, even when sample data is noisy or limited — which is often the case in the early detection of cancer and other genetic diseases.

AtacWorks, featured today in Nature Communications, both denoises sequencing data and identifies areas with accessible DNA, and can run inference on a whole genome in just half an hour with NVIDIA Tensor Core GPUs. It’s available on NGC, NVIDIA’s hub of GPU-optimized software.

AtacWorks works with ATAC-seq, a popular method for finding open areas in the genome in both healthy and diseased cells, enabling critical insights for drug discovery.

ATAC-seq typically requires tens of thousands of cells to get a clean signal — making it very difficult to investigate rare cell types, like the stem cells that produce blood cells and platelets. By applying AtacWorks to ATAC-seq data, the same quality of results can be achieved with just tens of cells, enabling scientists to learn more about the sequences active in rare cell types, and to identify mutations that make people more vulnerable to diseases.

“With AtacWorks, we’re able to conduct single-cell experiments that would typically require 10 times as many cells,” says paper co-author Jason Buenrostro, assistant professor at Harvard and the developer of the ATAC-seq method. “Denoising low-quality sequencing coverage with GPU-accelerated deep learning has the potential to significantly advance our ability to study epigenetic changes associated with rare cell development and diseases.”

Needle in a Noisy Haystack

Buenrostro pioneered ATAC-seq in 2013 as a way to scan the epigenome and locate accessible sites within chromatin, the complex of DNA and proteins packed into each chromosome. The method, popular among leading genomics research labs and pharmaceutical companies, measures the intensity of a signal at every region across the genome. Peaks in the signal correspond to areas with open DNA.

The fewer the cells available, the noisier the data appears — making it difficult to identify which areas of the DNA are accessible.

AtacWorks, a PyTorch-based convolutional neural network, was trained on labeled pairs of matching ATAC-seq datasets: one high quality and one noisy. Given a downsampled copy of the data, the model learned to predict an accurate high-quality version and identify peaks in the signal.
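
While the published model is more involved, a minimal PyTorch sketch of the same idea, a 1D convolutional trunk with one head regressing the clean signal and another classifying peak positions, might look like this. The layer sizes, window length and loss weighting are illustrative assumptions, not the actual AtacWorks architecture.

```python
import torch
import torch.nn as nn

class DenoiserSketch(nn.Module):
    """Toy 1D CNN: maps a noisy ATAC-seq coverage window to (denoised signal, peak logits)."""
    def __init__(self, channels: int = 32, kernel: int = 51):
        super().__init__()
        pad = kernel // 2
        self.trunk = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad), nn.ReLU(),
        )
        self.signal_head = nn.Conv1d(channels, 1, 1)  # regression: clean coverage track
        self.peak_head = nn.Conv1d(channels, 1, 1)    # classification: peak vs. background

    def forward(self, x):
        h = self.trunk(x)
        return self.signal_head(h), self.peak_head(h)

model = DenoiserSketch()
noisy = torch.randn(8, 1, 4096)                     # noisy, downsampled coverage windows
clean = torch.randn(8, 1, 4096)                     # matched high-depth coverage (target)
peaks = torch.randint(0, 2, (8, 1, 4096)).float()   # peak labels derived from the clean data

pred_signal, pred_peak_logits = model(noisy)
loss = nn.functional.mse_loss(pred_signal, clean) + \
       nn.functional.binary_cross_entropy_with_logits(pred_peak_logits, peaks)
loss.backward()
```

Training on matched low-depth and high-depth pairs is what lets the network later lift a shallow, noisy sample toward the quality of a deeply sequenced one.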

The researchers found that using AtacWorks, they could identify accessible chromatin in a noisy sequence of 1 million reads nearly as well as traditional methods did with a clean dataset of 50 million reads. With this capability, scientists could conduct research with a smaller number of cells, significantly reducing the cost of sample collection and sequencing.

Analysis, too, becomes faster and cheaper with AtacWorks: Running on NVIDIA Tensor Core GPUs, the model took under 30 minutes for inference on a whole genome, a process that would take 15 hours on a system with 32 CPU cores.

“With very rare cell types, it’s not possible to study differences in their DNA using existing methods,” said NVIDIA researcher Avantika Lal, lead author on the paper. “AtacWorks can help not only drive down the cost of gathering chromatin accessibility data, but also open up new possibilities in drug discovery and diagnostics.”

Enabling Insights into Disease, Drug Discovery

Looking at accessible regions of DNA could help medical researchers identify specific mutations or biomarkers that make people more vulnerable to conditions including Alzheimer’s, heart disease or cancers. This knowledge could also inform drug discovery by giving researchers a better understanding of the mechanisms of disease.

In the Nature Communications paper, the Harvard researchers applied AtacWorks to a dataset of stem cells that produce red and white blood cells — rare subtypes that couldn’t be studied with traditional methods.

With a sample set of just 50 cells, the team was able to use AtacWorks to identify distinct regions of DNA associated with cells that develop into white blood cells, and separate sequences that correlate with red blood cells.

Learn more about NVIDIA’s work in healthcare at the GPU Technology Conference, April 12-16. Registration is free. The healthcare track includes 16 live webinars, 18 special events, and over 100 recorded sessions, including a talk by Lal titled Deep Learning and Accelerated Computing for Epigenomic Data.

The DOI for this Nature Communications paper is 10.1038/s41467-021-21765-5.


Juicing AI: University of Florida Taps Computer Vision to Combat Citrus Disease

Florida orange juice is getting a taste of AI.

With the Sunshine State’s $9 billion annual citrus crop plagued by a fruit-souring disease, researchers and businesses are tapping AI to help rescue the nation’s largest producer of orange juice.

University of Florida researchers are developing AI applications for agriculture. And the technology — computer vision for smart sprayers — is now being licensed and deployed in pilot tests by CCI, an agricultural equipment company.

The efforts promise to help farmers combat what’s known as “citrus greening,” the disease, caused by bacteria spread by the Asian citrus psyllid insect, now hitting farms worldwide.

Citrus greening causes patchy leaves and green fruit and can quickly decimate orchards.

The agricultural equipment supplier has seen farmers lose one-third of Florida’s orchard acreage to the onslaught of citrus greening.

“It’s having a huge impact on the state of Florida, California, Brazil, China, Mexico — the entire world is battling a citrus crisis,” said Yiannis Ampatzidis, assistant professor at UF’s Department of Agricultural and Biological Engineering.

Fertilizing Precision Agriculture

Ampatzidis works with a team of researchers focused on automation in agriculture. They develop AI applications to forecast crop yields and reduce pesticide use. The team’s image recognition models are run on the Jetson AI platform in the field for inference.

“The goal is to use Jetson Xavier to detect the size of the tree and the leaf density to instantly optimize the flow of the nozzles on sprayers for farming,” said Ampatzidis. “It also allows us to count fruit density, predict yield, and study water usage and pH levels.”

The growing popularity of organic produce and the adoption of more sustainable farming practices have drawn a field of startups plowing AI for benefits to businesses and the planet. John Deere-owned Blue River, FarmWise, SeeTree and Smart Ag are just some of the agriculture companies adopting NVIDIA GPUs for training and inference.

Like many, UF and CCI are developing applications for deployment on the NVIDIA Jetson edge AI platform. And UF has wider ambitions for fostering AI development that benefits the state.

Last July, UF and NVIDIA hatched plans to build one of the world’s fastest AI supercomputers in academia, delivering 700 petaflops of processing power. Built with NVIDIA DGX systems and NVIDIA Mellanox networking, HiPerGator AI is now online to power UF’s precision agriculture research.  The new supercomputer was made possible by a $25 million donation from alumnus and NVIDIA founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

UF is a member of the NVIDIA Applied Research Accelerator Program, which supports applied research in coordination with businesses relying on NVIDIA platforms for GPU-accelerated application deployments.

Deploying Robotic Sprayers

Citrus greening has required farmers to act quickly to remove diseased trees to prevent its advances. Many orchards now have gaps in their rows of trees. As a result, conventional sprayers that apply agrochemicals uniformly along entire rows will often overspray, wasting resources and creating unnecessary environmental contamination.

UF researchers developed a sensor system of lidar and cameras for sprayers used in orchards. These sensors feed into the NVIDIA Jetson AGX Xavier, which can process split-second inference on whether the sprayer is facing a tree to spray or not, enabling autonomous spraying.

The system can adjust in real time to turn off or on the application of crop protection products or fertilizers as well as adjust the amount sprayed based on the plant’s size, said Kieth Hollingsworth, a CCI sales specialist.
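
A highly simplified sketch of that control loop is below. The `detect_canopy` stub stands in for the trained vision model running on the Jetson, and the flow-rate mapping and update rate are assumptions for illustration, not CCI’s production logic.

```python
import time

def detect_canopy(frame):
    """Placeholder for the vision model on the Jetson: returns whether a tree is in
    front of the nozzles and a rough estimate of how much of the frame it fills."""
    return True, 0.6  # stubbed result for illustration

def flow_rate_for(canopy_fraction: float, max_lpm: float = 4.0) -> float:
    """Scale nozzle flow with estimated canopy size; zero when no tree is present."""
    return max_lpm * min(max(canopy_fraction, 0.0), 1.0)

def set_nozzle_flow(lpm: float) -> None:
    print(f"nozzle flow set to {lpm:.1f} L/min")  # would drive the sprayer's valve controller

def spray_loop(get_frame, hz: float = 10.0) -> None:
    """Run detection and adjust the nozzles several times per second as the rig moves."""
    while True:
        tree_present, canopy = detect_canopy(get_frame())
        set_nozzle_flow(flow_rate_for(canopy) if tree_present else 0.0)
        time.sleep(1.0 / hz)

# Single step with the stubbed detector: a medium-sized tree gets a partial dose.
tree_present, canopy = detect_canopy(frame=None)
set_nozzle_flow(flow_rate_for(canopy) if tree_present else 0.0)  # -> 2.4 L/min
```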

“It cuts down on overspray and on wasted material that ultimately gets washed into the groundwater. We can also predict yield based on the oranges we see on the tree,” said Hollingsworth.

Commercializing AgTech AI

CCI began working with UF eight years ago. In the past couple of years, the company has been working with the university to upgrade its infrared laser-based spraying system to one with AI.

And customers are coming to CCI for novel ways to attack the problem, said Hollingsworth.

Working with NVIDIA’s Applied Research Accelerator Program, CCI has gotten a boost from technical guidance on Jetson Xavier that has sped its development.

Citrus industry veteran Hollingsworth says AI is a useful tool in the field to wield against the crop disease that has taken some of the sweetness out of orange juice over the years.

“People have no idea how complex of a crop oranges are to grow and what it takes to produce and squeeze the juice that goes into a glass of orange juice,” said Hollingsworth.

Academic researchers can apply now for the Applied Research Accelerator Program.

Photo credit: Samuel Branch on Unsplash
