How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC 

It could only happen in NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows.

And it happened during an interview with a virtual toy model of NVIDIA’s CEO, Jensen Huang.

“What are the greatest …” one of Toy Jensen’s creators asked, stumbling, then stopping before completing his scripted question.

Unfazed, the tiny Toy Jensen paused for a moment, considering the answer carefully.

“The greatest are those,” Toy Jensen replied, “who are kind to others.”

Leading-edge computer graphics, physics simulation, a live CEO, and a supporting cast of AI-powered avatars came together to make NVIDIA’s GTC keynote — delivered using Omniverse — possible.

Along the way, a little soul got into the mix, too.

The AI-driven comments, added to the keynote as a stinger, provided an unexpected peek at the depth of Omniverse’s technology.

“Omniverse is the hub in which all the various research domains converge and align and work in unison,” says Kevin Margo, a member of NVIDIA’s creative team who put the presentation together. “Omniverse facilitates the convergence of all of them.”

Toy Jensen’s ad-lib capped a presentation that seamlessly mixed a real CEO with virtual and real environments as Huang took viewers on a tour of how NVIDIA technologies are weaving AI, graphics and robotics together with humans in real and virtual worlds.

Real CEO, Digital Kitchen

While the CEO viewers saw was all real, the environment around him morphed as he spoke to support the story he was telling.

Viewers saw Huang deliver a keynote that seemed to begin, like so many during the global COVID pandemic, in Huang’s kitchen.

Then, with a flourish, Huang’s kitchen — modeled down to the screws holding its cabinets together — slid away from sight as Huang strolled toward a virtual recreation of Endeavor’s gleaming lobby.

“One of our goals is to find a way to elevate our keynote events,” Margo says. “We’re always looking for those special moments when we can do something novel and fantastical, and that showcase NVIDIA’s latest technological innovations.”

It was the start of a visual journey that would take Huang from that lobby to Shannon’s, a gathering spot inside Endeavor, through a holodeck and a data center, with stops inside a real robotics lab and outside Endeavor itself.

Virtual environments such as Huang’s kitchen were created by a team using familiar tools supported by Omniverse, such as Autodesk Maya, Autodesk 3ds Max and Adobe Substance Painter.

Omniverse connected them all in real time, so each team member could see changes made by colleagues in different tools as they happened, accelerating the work.

“That was critical,” Margo says.

The virtual and the real came together quickly once live filming began.

A small on-site video team recorded Huang’s speech in just four days, starting October 30, in a pair of spare conference rooms at NVIDIA’s Silicon Valley headquarters.

Omniverse allowed NVIDIA’s team to project the dynamic virtual environments their colleagues had created on a screen behind Huang.

As a result, the light spill onto Huang changed as the scene around him changed, better integrating him into the virtual environment.

And as Huang moved through the scene, or as the camera shifted, the environment changed around Huang.

“As the camera moves, the perspective and parallax of the world on the video wall responds accordingly,” Margo says.

And because Huang could see the environment projected on the screens around him, he was better able to navigate each scene.

At the Speed of Omniverse

All of this accelerated the work of NVIDIA’s production team, which captured most of what it needed in-camera with each shot rather than adding elaborate digital sets in post-production.

As a result, the video team quickly created a presentation seamlessly blending a real CEO with virtual and real-world settings.

However, Omniverse was more than just a way to speed collaboration between creatives working with real and digital elements hustling to hit a deadline. It also served as the platform that knit the string of demos featured in the keynote together.

To help developers create intelligent, interactive agents with Omniverse that can see, speak, converse on a wide range of subjects and understand naturally spoken intent, Huang announced Omniverse Avatar.

Omniverse brings together a deep stack of technologies — from ray-tracing to recommender systems — that were mixed and matched throughout the keynote to drive a series of stunning demos.

In a demo that swiftly made headlines, Huang showed how “Project Tokkio” for Omniverse Avatar connects Metropolis computer vision, Riva speech AI, avatar animation and graphics into a real-time conversational AI robot — the Toy Jensen Omniverse Avatar.

The conversation between three of NVIDIA’s engineers and a tiny toy model of Huang, with its expert, natural Q&A, was more than just a technological tour de force.

It showed how photorealistic modeling of Toy Jensen and his environment — right down to the glint on Toy Jensen’s glasses as he moved his head — and NVIDIA’s Riva speech synthesis technology powered by the Megatron 530B large language model could support natural, fluid conversations.

To create the demo, NVIDIA’s creative team built the digital model in Maya and Substance, and Omniverse did the rest.

“None of it was manual, you just load up the animation assets and talk to it,” he said.

Huang also showed a second demo of Project Tokkio, a customer-service avatar in a restaurant kiosk that was able to see, converse with and understand two customers.

Rather than Megatron, however, this avatar relied on a model that integrated the restaurant’s menu, allowing it to smoothly guide customers through their options.

That same technology stack can help humans talk to one another, too. Huang showed Project Maxine’s ability to add state-of-the-art video and audio features to virtual collaboration and video content creation applications.

A demo showed a woman speaking English on a video call in a noisy cafe, yet she could be heard clearly, without background noise. As she spoke, her words were transcribed and translated in real time into French, German and Spanish.

Thanks to Omniverse, the translations were spoken by an avatar able to engage in conversation in her own voice and intonation.
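Conceptually, that demo chains four stages: noise removal, transcription, translation and voice-cloned synthesis. The sketch below is a simplified, hypothetical illustration of that chain in Python; the class and the placeholder callables are assumptions for illustration, not Maxine’s or Riva’s actual APIs.

```python
# A simplified, hypothetical sketch of a Maxine-style speech-to-speech
# pipeline: denoise -> transcribe -> translate -> synthesize in the original
# speaker's voice. The callables are placeholders, not Maxine's or Riva's APIs.
from dataclasses import dataclass
from typing import Callable, Dict, Iterable

@dataclass
class SpeechTranslationPipeline:
    denoise: Callable[[bytes], bytes]
    transcribe: Callable[[bytes], str]
    translate: Callable[[str, str], str]        # (english_text, target_lang) -> text
    synthesize: Callable[[str, bytes], bytes]   # (text, voice_sample) -> audio

    def run(self, audio: bytes, targets: Iterable[str]) -> Dict[str, bytes]:
        clean = self.denoise(audio)              # strip the cafe background noise
        transcript = self.transcribe(clean)      # real-time transcription
        out = {}
        for lang in targets:
            translated = self.translate(transcript, lang)
            # Reusing the cleaned audio as a voice sample is what lets the
            # avatar speak the translation in the speaker's own voice.
            out[lang] = self.synthesize(translated, clean)
        return out

# Wiring with trivial stand-ins to show the data flow:
pipe = SpeechTranslationPipeline(
    denoise=lambda a: a,
    transcribe=lambda a: "nice to meet you",
    translate=lambda text, lang: f"[{lang}] {text}",
    synthesize=lambda text, voice: text.encode(),
)
print(pipe.run(b"raw-call-audio", ["fr", "de", "es"]))
```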

These demos were all possible because Omniverse, through Omniverse Avatar, unites advanced speech AI, computer vision, natural language understanding, recommendation engines, facial animation and graphics technologies.

Omniverse Avatar’s speech recognition is based on NVIDIA Riva, a software development kit that recognizes speech across multiple languages. Riva is also used to generate human-like speech responses using text-to-speech capabilities.

Omniverse Avatar’s natural language understanding is based on the Megatron 530B large language model that can recognize, understand and generate human language.

Megatron 530B is a pretrained model that can, with little or no additional training, complete sentences and answer questions spanning a broad range of subjects. It can summarize long, complex stories, translate into other languages and handle many tasks it was never specifically trained to do.
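Megatron 530B itself isn’t a model you can casually download and run, but the zero-shot prompting pattern it enables is easy to illustrate. The sketch below substitutes a small public model from the Hugging Face transformers library purely for demonstration; the prompt and model choice are assumptions, not the keynote’s stack.

```python
# Illustrative stand-in: a small public model via Hugging Face transformers
# demonstrating the same zero-shot pattern (completion and Q&A with no
# task-specific training). Megatron 530B is far larger but is prompted the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Q: What can large language models do without extra training?\nA:"
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```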

Omniverse Avatar’s recommendation engine is provided by NVIDIA Merlin, a framework that allows businesses to build deep learning recommender systems capable of handling large amounts of data to make smarter suggestions.

Its perception capabilities are enabled by NVIDIA Metropolis, a computer vision framework for video analytics.

And its avatar animation is powered by NVIDIA Video2Face and Audio2Face, 2D and 3D AI-driven facial animation and rendering technologies.

All of these technologies are composed into an application and processed in real-time using the NVIDIA Unified Compute Framework.

Packaged as scalable, customizable microservices, the skills can be securely deployed, managed and orchestrated across multiple locations by NVIDIA Fleet Command.

Using them, Huang was able to tell a sweeping story about how NVIDIA Omniverse is changing multitrillion-dollar industries.

All of these demos were built on Omniverse. And thanks to Omniverse, everything came together: a real CEO, real and virtual environments, and a string of demos created within the platform.

Since its launch late last year, Omniverse has been downloaded over 70,000 times by designers at 500 companies. Omniverse Enterprise is now available starting at $9,000 a year.

The post How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC  appeared first on The Official NVIDIA Blog.

Read More

Living in the Future: NIO ET5 Sedan Designed for the Autonomous Era With NVIDIA DRIVE Orin

Meet the electric vehicle that’s truly future-proof.

Electric automaker NIO took the wraps off its fifth mass-production model, the ET5, during NIO Day 2021 last week.

The mid-size sedan borrows from its luxury and performance predecessors for an intelligent vehicle that’s as agile as it is comfortable. Its AI features are powered by the NIO Adam supercomputer, built on four NVIDIA DRIVE Orin systems-on-a-chip (SoCs).

In addition to centralized compute, the ET5 incorporates high-performance sensors into its sleek design, equipping it with the hardware necessary for advanced AI-assisted driving features.

The sedan also embodies the NIO concept of vehicles serving as a second living room, with a luxurious interior and immersive augmented reality digital cockpit.

These cutting-edge features are built to go the distance. The ET5 achieves more than 620 miles of range with the 150 kWh Ultralong Range Battery and accelerates from zero to 60 mph in about four seconds.

A Truly Intelligent Creation

The ET5 and its older sibling, the ET7 full-size sedan, rely on a centralized, high-performance compute architecture to power AI features and continuously receive upgrades over the air.

The NIO Adam supercomputer is built on four DRIVE Orin SoCs, making it one of the most powerful platforms to run in a vehicle, achieving a total of more than 1,000 TOPS of performance.

Orin is the world’s highest-performance, most-advanced AV and robotics processor. It delivers up to 254 TOPS to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots while achieving systematic safety standards such as ISO 26262 ASIL-D.
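That headline figure follows directly from the per-chip rating: four Orin SoCs at 254 TOPS each work out to 4 × 254 = 1,016 TOPS, consistent with Adam’s total of more than 1,000 TOPS.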

Adam integrates the redundancy and diversity necessary for safe autonomous operation by using multiple SoCs.

The first two SoCs process the eight gigabytes of data produced by the vehicle’s sensor set every second.

The third Orin serves as a backup to ensure the system can operate safely in any situation.

The fourth enables local training, improving the vehicle with fleet learning and personalizing the driving experience based on user preferences.

With high-performance computing at its core, Adam is a major achievement in the creation of automotive intelligence and autonomous driving.

Going Global

After beginning deliveries in Norway earlier this year, NIO will expand worldwide in 2022.

The ET7, the first vehicle built on the DRIVE Orin-powered Adam supercomputer, will become available in March, with the ET5 following in September.

Next year, NIO will begin vehicle deliveries in the Netherlands, Sweden and Denmark.

By 2025, NIO vehicles will be in 25 countries and regions worldwide, bringing one of the most advanced AI platforms to even more customers.

With the ET5, NIO is showing no signs of slowing as it charges into the future with sleek, intelligent EVs powered by NVIDIA DRIVE.

The post Living in the Future: NIO ET5 Sedan Designed for the Autonomous Era With NVIDIA DRIVE Orin appeared first on The Official NVIDIA Blog.

Read More

Detect That Defect: Mariner Speeds Up Manufacturing Workflows With AI-Based Visual Inspection

Imagine picking out a brand new car — only to find a chip in the paint, rip in the seat fabric or mark in the glass.

AI can help prevent such moments of disappointment for manufacturers and potential buyers.

Mariner, an NVIDIA Metropolis partner based in Charlotte, North Carolina, offers an AI-enabled video analytics system to help manufacturers improve surface defect detection. For over 20 years, the company has worked to provide its customers with deep learning-based insights to optimize their manufacturing processes.

The vision AI platform, called Spyglass Visual Inspection, or SVI, helps manufacturers detect the defects they couldn’t see before. It’s built on the NVIDIA Metropolis intelligent video analytics framework and powered by NVIDIA GPUs.

SVI is installed in factories and used by customers like Sage Automotive Interiors to enhance their defect detection in cases where traditional, rules-based machine vision systems often pinpoint false positives.

Reducing Waste with AI

According to David Dewhirst, vice president of marketing at Mariner, up to 40 percent of annual revenue for automotive manufacturers is consumed by producing defective products.

Traditional machine vision systems installed in factories have difficulty discerning between true defects — like a stain in fabric or a chip in glass — and false positives, like lint or a water droplet that can be easily wiped away.

SVI, however, uses AI software and NVIDIA hardware connected to camera systems that provide real-time inspection of pieces on production lines, identify potential issues and determine whether they are true material defects — in just a millisecond.

This speeds up factory lines, removing the need to slow or stop the workflow to have a person inspect each potential defect. SVI delivers a 20 percent increase in line speed and a 30x reduction in incorrect defect classifications compared with traditional machine vision systems.
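Conceptually, the inspection loop boils down to classifying each flagged region and only acting on true defects. The sketch below is a schematic illustration in Python, not Mariner’s code; the class names, labels and threshold are assumptions.

```python
# Schematic edge-inspection loop (not Mariner's code): classify each candidate
# region as a true defect (stain, chip, tear) or a benign false positive
# (lint, water droplet) and only raise an alert for true defects.
from dataclasses import dataclass
import random

TRUE_DEFECTS = {"stain", "chip", "tear"}
BENIGN = {"lint", "water_droplet", "clean"}

@dataclass
class Detection:
    label: str
    confidence: float

def classify_region(region) -> Detection:
    # Placeholder for the GPU-accelerated deep learning classifier.
    return Detection(random.choice(sorted(TRUE_DEFECTS | BENIGN)),
                     random.uniform(0.5, 1.0))

def inspect_frame(regions, threshold: float = 0.9):
    alerts = []
    for region in regions:
        det = classify_region(region)
        if det.label in TRUE_DEFECTS and det.confidence >= threshold:
            alerts.append(det)       # true defect: flag it for a closer look
        # Benign detections are dropped, so the line never slows down.
    return alerts

print(inspect_frame(regions=range(5)))
```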

The platform can be integrated with a factory’s existing machine vision system, giving it a boost with AI-based analysis and processing. It offers a factory an average annual savings of $2 million, Dewhirst said.

SVI uses a deep learning model that analyzes images, identifies a defect, and then labels the defects by type — which are all tasks that require powerful graphics processors.

“NVIDIA GPUs guarantee that SVI can handle almost any pixel combination and processing speed, which is why it was our choice of hardware on which to standardize our platform,” Dewhirst said.

Mariner is on track to revolutionize the defect detection process by expanding the use of its platform, which can identify defects in metal, plastic or virtually any other surface type.

Learn more about how the Spyglass system works:

The post Detect That Defect: Mariner Speeds Up Manufacturing Workflows With AI-Based Visual Inspection appeared first on The Official NVIDIA Blog.

Read More

Top 5 Edge AI Trends to Watch in 2022

2021 saw massive growth in the demand for edge computing, driven by the pandemic, the need for more efficient business processes, and key advances in the Internet of Things, 5G and AI.

In a study published by IBM in May, for example, 94 percent of surveyed executives said their organizations will implement edge computing in the next five years.

From smart hospitals and cities to cashierless shops to self-driving cars, edge AI — the combination of edge computing and AI — is needed more than ever.

Businesses have been slammed by logistical problems, worker shortages, inflation and uncertainty caused by the ongoing pandemic. Edge AI solutions can be used as a bridge between humans and machines, enabling improved forecasting, worker allocation, product design and logistics.

Here are the top five edge AI trends NVIDIA expects to see in 2022:

1. Edge Management Becomes an IT Focus

While edge computing is rapidly becoming a must-have for many businesses, deployments remain in the early stages.

To move to production, edge AI management will become the responsibility of IT departments. In a recent report, Gartner wrote, “Edge solutions have historically been managed by the line of business, but the responsibility is shifting to IT, and organizations are utilizing IT resources to optimize cost.”1

To address the edge computing challenges related to manageability, security and scale, IT departments will turn to cloud-native technology. Kubernetes, a platform for containerized microservices, has emerged as the leading tool for managing edge AI applications on a massive scale.

Customers with IT departments that already use Kubernetes in the cloud can transfer their experience to build their own cloud-native management solutions for the edge. More will look to purchase third-party offerings such as Red Hat OpenShift, VMware Tanzu, Wind River Cloud Platform and NVIDIA Fleet Command.
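For a concrete taste of what cloud-native edge management looks like, the sketch below uses the official Kubernetes Python client to check on nodes labeled as edge systems; the label name is an assumption, since each fleet defines its own labeling scheme.

```python
# Minimal sketch using the official Kubernetes Python client to list edge
# nodes and their readiness. The "node-role.kubernetes.io/edge" label is an
# assumption; substitute whatever label your fleet applies to edge systems.
from kubernetes import client, config

config.load_kube_config()        # or config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()

edge_nodes = v1.list_node(label_selector="node-role.kubernetes.io/edge")
for node in edge_nodes.items:
    conditions = {c.type: c.status for c in node.status.conditions}
    print(f"{node.metadata.name}: Ready={conditions.get('Ready')}")
```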

2. Expansion of AI Use Cases at the Edge

Computer vision has dominated AI deployments at the edge. Image recognition led the way in AI training, resulting in a robust ecosystem of computer vision applications.

NVIDIA Metropolis, an application framework and set of developer tools that helps create computer vision AI applications, has grown its partner network 100-fold since 2017 to now include 1,000+ members.

Many companies are already deploying or purchasing computer vision applications, and those at the forefront of the field will start to look to multimodal solutions.

Multimodal AI brings in different data sources to create more intelligent applications that can respond to what they see, hear and otherwise sense. These complex AI use cases employ skills like natural language understanding, conversational AI, pose estimation, inspection and visualization.

Combined with data storage, processing technologies, and input/output or sensor capabilities, multimodal AI can yield real-time performance at the edge for an expansion of use cases in robotics, healthcare, hyper-personalized advertising, cashierless shopping, concierge experiences and more.

Imagine shopping with a virtual assistant. With traditional AI, an avatar might see what you pick up off a shelf, and a speech assistant might hear what you order.

By combining both data sources, a multimodal AI-based avatar can hear your order, provide a response, see your reaction, and provide further responses based on it. This complementary information allows the AI to deliver a better, more interactive customer experience.
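Here is a deliberately tiny Python sketch of that fusion step, with both perception results reduced to strings and the response logic hand-written; real multimodal systems learn this mapping rather than hard-coding it.

```python
# Toy illustration of multimodal fusion: the reply is conditioned on both what
# the avatar sees (vision model output) and what it hears (speech model
# output). Real systems learn this mapping; here it is hand-coded for clarity.
from dataclasses import dataclass

@dataclass
class Perception:
    seen_item: str     # e.g. from a computer vision model watching the shelf
    heard_text: str    # e.g. from a speech recognition model

def respond(p: Perception) -> str:
    wants_advice = "recommend" in p.heard_text.lower()
    if wants_advice:
        return (f"I can see you picked up the {p.seen_item}. "
                f"It pairs well with today's special.")
    return f"Got it, adding the {p.seen_item} to your order."

print(respond(Perception(seen_item="cold brew",
                         heard_text="What would you recommend with this?")))
```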

To see an example of this in action, check out Project Tokkio:

3. Convergence of AI and Industrial IoT Solutions

The intelligent factory is another space being driven by new edge AI applications. According to the same Gartner report, “By 2027, machine learning in the form of deep learning will be included in over 65 percent of edge use cases, up from less than 10 percent in 2021.”

Factories can add AI applications onto cameras and other sensors for inspection and predictive maintenance. However, detection is just step one. Once an issue is detected, action must be taken.

AI applications are able to detect an anomaly or defect and then alert a human to intervene. But for safety applications and other use cases when instant action is required, real-time responses are made possible by connecting the AI inference application with the IoT platforms that manage the assembly lines, robotic arms or pick-and-place machines.

Integration between such applications relies on custom development work. Hence, expect more partnerships between AI and traditional IoT management platforms that simplify the adoption of edge AI in industrial environments.

4. Growth in Enterprise Adoption of AI-on-5G 

The combined AI-on-5G computing infrastructure provides a high-performance, secure connectivity fabric to integrate sensors, computing platforms and AI applications — whether in the field, on premises or in the cloud.

Key benefits include ultra-low latency in non-wired environments, guaranteed quality-of-service and improved security.

AI-on-5G will unlock new edge AI use cases:

One of the world’s first full stack AI-on-5G platforms, Mavenir Edge AI, was introduced in November. Next year, expect to see additional full-stack solutions that provide the performance, management and scale of enterprise 5G environments.

5. AI Lifecycle Management From Cloud to Edge

For organizations deploying edge AI, MLOps will become key to helping drive the flow of data to and from the edge. Ingesting new, interesting data or insights from the edge, retraining models, testing applications and then redeploying those to the edge improves model accuracy and results.

With traditional software, updates may happen on a quarterly or annual basis, but AI gains significantly from a continuous cycle of updates.

MLOps is still in early development, with many large players and startups building solutions for the constant need for AI technology updates. While mostly focused on solving the problem of the data center for now, such solutions in the future will shift to edge computing.

Riding the Next Wave of AI Computing

Waves of AI Computing

The development of AI has consisted of several waves, as pictured above.

Democratization of AI is underway, with new tools and solutions making it a reality. Edge AI, powered by huge growth in IoT and availability of 5G, is the next wave to break.

In 2022, more enterprises will move their AI inference to the edge, bolstering ecosystem growth as the industry looks at how to extend from cloud to the edge.

Learn more about edge AI by watching the GTC session, The Rise of Intelligent Edge: From Enterprise to Device Edge, on demand.

Check out NVIDIA edge computing solutions.

1 Gartner, “Predicts 2022: The Distributed Enterprise Drives Computing to the Edge”, 20 October 2021. By analysts: Thomas Bittman, Bob Gill, Tim Zimmerman, Ted Friedman, Neil MacDonald, Karen Brown

The post Top 5 Edge AI Trends to Watch in 2022 appeared first on The Official NVIDIA Blog.

Read More

Omniverse Creator Uses AI to Make Scenes With Singing Digital Humans

The thing about inspiration is you never know where it might come from, or where it might lead.

Anderson Rohr, a 3D generalist and freelance video editor based in southern Brazil, has for more than a dozen years created content ranging from wedding videos to cinematic animation.

After seeing another creator animate a sci-fi character’s visage and voice using NVIDIA Omniverse and its AI-powered Audio2Face application, Rohr said he couldn’t help but play around with the technology.

The result is a grimly voiced, lip-synced cover of “Bad Moon Rising,” the 1960s anthem from Creedence Clearwater Revival, which Rohr created using his own voice.

To make the video, Rohr used an NVIDIA Studio system with a GeForce RTX 3090 GPU.

Rohr’s Artistic Workflow

For this personal project, Rohr first recorded himself singing and opened the file in Audio2Face.

The application, built on NVIDIA AI and Omniverse technology, instantly generates expressive facial animations for digital humans with only a voice-over track or any other audio source.

Rohr then manually animated the eyes, brows and neck of his character and tweaked the lighting for the scene — all of which was rendered in Omniverse via Epic Games Unreal Engine, using an Omniverse Connector and the NVIDIA RTX Global Illumination software development kit.

“NVIDIA Omniverse is helping me achieve more natural results for my digital humans and speeding up my workflow, so that I can spend more time on the creative process,” Rohr said.

Before using Omniverse, some of Rohr’s animations took as long as 300 hours to render. He also faced software incompatibilities, which he said further slowed his work.

Now, with Omniverse and its connectors for various software applications, Rohr’s renderings are achieved in real time.

“Omniverse goes beyond my expectations,” he said. “I see myself using it a lot, and I hope my artwork inspires people to seek real-time results for virtual productions, games, cinematic scenes or any other creative project.”

With Omniverse, NVIDIA Studio creators can supercharge their artistic workflows with optimized RTX-accelerated hardware and software drivers, and state-of-the-art AI and simulation features.

Watch Rohr talk more about his work with NVIDIA Omniverse:

The post Omniverse Creator Uses AI to Make Scenes With Singing Digital Humans appeared first on The Official NVIDIA Blog.

Read More

Get the Best of Cloud Gaming With GeForce NOW RTX 3080 Memberships Available Instantly

The future of cloud gaming is available NOW, for everyone, with preorders closing and GeForce NOW RTX 3080 memberships moving to instant access. Gamers can sign up for a six-month GeForce NOW RTX 3080 membership and instantly stream the next generation of cloud gaming, starting today.

Snag the NVIDIA SHIELD TV or SHIELD TV Pro for $20 off and stream PC games to the biggest screen in the home at up to 4K HDR resolution.

Participate in a unique cloud-based DAF Drive, powered by GeForce NOW and Euro Truck Simulator 2.

And check out the four new titles joining the ever-expanding GeForce NOW library this week.

RTX 3080 Memberships Available Instantly

The next generation of cloud gaming is ready and waiting.

Make the leap to the newest generation of cloud gaming instantly. GeForce NOW RTX 3080 memberships are available today for instant access. Preorders poof, be gone!

The new tier of service transforms nearly any device into a gaming rig capable of streaming at up to 1440p resolution and 120 frames per second on PCs, native 1440p or 1600p at 120 FPS on Macs, and 4K HDR at 60 FPS on SHIELD TV, with ultra-low latency that rivals many local gaming experiences. On top of this, the membership comes with the longest gaming session length — clocking in at eight hours — as well as full control to customize in-game graphics settings and RTX ON for cinematic-quality rendering in supported games.

Level up and enjoy the GeForce NOW library of over 1,100 games with a six-month RTX 3080 membership, streaming across your devices, for $99.99. Founders receive 10 percent off the subscription price and can upgrade with no risk to their “Founders for Life” benefits.

For more information, check out our membership FAQ.

The Deal With SHIELD

The GeForce NOW experience goes legendary, playing in 4K HDR exclusively on the NVIDIA SHIELD — which is available with a sweet deal this holiday season.

Save $20 on SHIELD TV this holiday.
Grab a controller and stream PC gaming at up to 4K with GeForce NOW on SHIELD TV.

Just in time for the holidays, give the gift of great entertainment at a discounted price. Starting Dec. 13 in select regions, get $20 ($30 CAD, €25, £20) off SHIELD TV and SHIELD TV Pro. But hurry, this offer ends soon! And in the U.S., get six months of Peacock Premium as an added bonus, to enrich the entertainment experience.

With the new GeForce NOW RTX 3080 membership, PC gamers everywhere can stream with 4K resolution and HDR on the SHIELD TV, bringing PC gaming to the biggest screen in the house. Connect to Steam, Epic Games Store and more to play from your library, find new games or check out the 100+ free-to-play titles included with a GeForce NOW membership.

Customize play even further by connecting SHIELD TV with your preferred gaming controller, including Xbox One and Series X, PlayStation DualSense or DualShock 4, and Scuf controllers, and bring your gaming sessions to life with immersive 7.1 surround sound.

Roll On Into the Ride and Drive

Euro Truck Simulator 2 on GeForce NOW
Push the pedal to the metal driving the 2021 DAF XF, available in Euro Truck Simulator 2.

GeForce NOW is powering up new experiences with SCS Software by supporting a unique DAF Drive. It adds the New Generation DAF XF to the popular game Euro Truck Simulator 2 and gives everyone the opportunity to take a virtual test drive through a short, scenic route, streaming with GeForce NOW. Take the wheel of one of the DAF trucks, instantly, on the DAF virtual experience website.

Coming in tow is a free in-game content update to the full Euro Truck Simulator 2 game, which brings the 2021 DAF XF to players. Ride in style as you travel across Europe in the newest truck, test your skill and speed, deliver cargo and become king of the road, streaming on the cloud.

Moar Gamez Now & Later, Plz

GTFO on GeForce NOW
The only way to survive the Rundown is by working together.

Late last week, a pair of games got big GeForce NOW announcements: GTFO and ARC Raiders.

GTFO is now out of early access. Jump on into this extreme cooperative horror shooter that requires stealth, strategy and teamwork to survive a deadly, underground prison.

ARC Raiders, a free-to-play cooperative third-person shooter from Embark Studios, is coming to GeForce NOW in 2022. In the game, which will be available on Steam and Epic Games Store, you and your squad of Raiders will unite to resist the onslaught of ARC – a ruthless mechanized threat descending from space.

Plus, slide on into the weekend with a pack of four new titles ready to stream from the GeForce NOW library today:

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Grab a Gift for a Gamer

Looking to spoil a gamer or yourself this holiday season?

Digital gift cards for GeForce NOW Priority memberships are available in two-, six- or 12-month options. Make your favorite player happy by powering up their GeForce NOW compatible devices with the kick of a full gaming rig, priority access to gaming servers, extended session lengths and RTX ON for supported games.

Gift cards can be redeemed on an existing GeForce NOW account or added to a new one. Existing Founders and Priority members will have the number of months added to their accounts.

As your weekend gaming session kicks off, we’ve got a question for you:

Shout at us on Twitter or in the comments below.

The post Get the Best of Cloud Gaming With GeForce NOW RTX 3080 Memberships Available Instantly appeared first on The Official NVIDIA Blog.

Read More

‘AI 2041: Ten Visions for Our Future’: AI Pioneer Kai-Fu Lee Discusses His New Work of Fiction

One of AI’s greatest champions has turned to fiction to answer the question: how will technology shape our world in the next 20 years?

Kai-Fu Lee, CEO of Sinovation Ventures and a former president of Google China, spoke with NVIDIA AI Podcast host Noah Kravitz about AI 2041: Ten Visions for Our Future. The book, his fourth available in the U.S. and his first work of fiction, was written in collaboration with Chinese sci-fi writer Chen Qiufan, also known as Stanley Chan.

Lee and Chan blend their expertise in scientific forecasting and speculative fiction in this collection of short stories, which was published in September.

Among Lee’s books is the New York Times bestseller AI Superpowers: China, Silicon Valley, and the New World Order, which he spoke about on a 2018 episode of the AI Podcast.

Key Points From This Episode:

  • Each of AI 2041‘s stories takes place in a different country and tackles various AI-related topics. For example, one story follows a teenage girl in Mumbai who rebels when AI gets in the way of her romantic endeavors. Another is about virtual teachers in Seoul who offer orphaned twins new ways to learn and connect.
  • Lee added written commentaries to go along with each story, covering the real technologies used in each one, how those technologies work, and their potential upsides and downsides.

Tweetables:

As AI still seems to be an intimidating topic for many, “I wanted to see if we could create a new, innovative piece of work that is not only accessible, but also engaging and entertaining for more people.” — Kai-Fu Lee [1:48]

“By the end of the 10th story, [readers] will have taken their first lesson in AI.” — Kai-Fu Lee [2:02]

You Might Also Like:

Investor, AI Pioneer Kai-Fu Lee on the Future of AI in the US, China

Kai-Fu Lee talks about his book, AI Superpowers: China, Silicon Valley, and the New World Order, which ranked No. 6 on the New York Times Business Books bestsellers list.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He talks about working with his wife, Andrea Frank, a professional curator of art images, to authenticate artistic masterpieces with AI’s help.

Author Cade Metz Talks About His New Book “Genius Makers”

Call it Moneyball for AI. In his book, Genius Makers, the New York Times writer Cade Metz tells the funny, inspiring — and ultimately triumphant — tale of how a dogged group of AI researchers bet their careers on the long-dismissed technology of deep learning.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post ‘AI 2041: Ten Visions for Our Future’: AI Pioneer Kai-Fu Lee Discusses His New Work of Fiction appeared first on The Official NVIDIA Blog.

Read More

NVIDIA Awards $50,000 Fellowships to Ph.D. Students for GPU Computing Research

For more than two decades, NVIDIA has supported graduate students doing GPU-based work through the NVIDIA Graduate Fellowship Program. Today we’re announcing the latest awards of up to $50,000 each to 10 Ph.D. students involved in GPU computing research.

Selected from a highly competitive applicant pool, the awardees will participate in a summer internship preceding the fellowship year. The work they’re doing puts them at the forefront of GPU computing, with fellows tackling projects in deep learning, robotics, computer vision, computer graphics, architecture, circuits, high performance computing, life sciences and programming systems.

“Our fellowship recipients are among the most talented graduate students in the world,” said NVIDIA Chief Scientist Bill Dally. “They’re working on some of the most important problems in computer science, and we’re delighted to support their research.”

The NVIDIA Graduate Fellowship Program is open to applicants worldwide.

Our 2022-2023 fellowship recipients are:

  • Davis Rempe, Stanford University — Modeling 3D motion to solve pose estimation, shape reconstruction and motion forecasting, which enables intelligent systems that understand dynamic 3D objects, humans and scenes.
  • Hao Chen, University of Texas at Austin — Developing next-generation VLSI physical synthesis tools capable of generating sign-off quality layouts in advanced manufacturing nodes, particularly in analog/mixed-signal circuits.
  • Mohit Shridhar, University of Washington — Connecting language to perception and action for vision-based robotics, where representations of vision and language are learned through embodied interactions rather than from static datasets.
  • Sai Praveen Bangaru, Massachusetts Institute of Technology — Developing algorithms and compilers for the systematic differentiation of numerical integrators, allowing them to mix seamlessly with machine learning components.
  • Shlomi Steinberg, University of California, Santa Barbara — Developing models and computational tools for physical light transport — the computational discipline that studies the simulation of partially coherent light in complex environments.
  • Sneha Goenka, Stanford University — Exploring genomic analysis pipelines through hardware-software co-design to enable the ultra-rapid diagnosis of genetic diseases and accelerate large-scale comparative genomic analysis.
  • Yufei Ye, Carnegie Mellon University — Building agents that can perceive physical interactions among objects, understand the consequences of interactions with the physical world, and even predict the potential effects of specific interactions.
  • Yuke Wang, University of California, Santa Barbara — Exploring novel algorithm- and system-level designs and optimizations to accelerate diverse deep-learning workloads, including deep neural networks and graph neural networks.
  • Yuntian Deng, Harvard University — Developing scalable, controllable and interpretable natural language generation approaches using deep generative models with potential applications in long-form text generation.
  • Zekun Hao, Cornell University — Developing algorithms that learn from real-world visual data and apply that knowledge to help human creators build photorealistic 3D worlds.

We also acknowledge the 2022-2023 fellowship finalists:

  • Enze Xie, University of Hong Kong
  • Gokul Swamy, Carnegie Mellon University
  • Hong-Xing (Koven) Yu, Stanford University
  • Suyeon Choi, Stanford University
  • Yash Sharma, University of Tübingen

The post NVIDIA Awards $50,000 Fellowships to Ph.D. Students for GPU Computing Research appeared first on The Official NVIDIA Blog.

Read More

What Is a Digital Twin?

Step inside an auto assembly plant. See workers ratcheting down nuts to bolts. Hear the whirring of air tools. Watch pristine car bodies gliding along the line and robots rolling up with parts.

Now, fire up its digital twin in 3D online. See animated digital humans at work in an exact digital version of the plant. Drag and drop in robots to move heavy materials, and run simulations that take in real-time factory floor data to optimize operations. That’s a digital twin.

A digital twin is a virtual representation — a true-to-reality simulation of physics and materials — of a real-world physical asset or system, which is continuously updated.

Digital twins aren’t just for inanimate objects and people. They can be a virtual representation of computer networking architecture used as a sandbox for cyberattack simulations. They can replicate a fulfillment center process to test out human-robot interactions before activating certain robot functions in live environments. The applications are as wide as the imagination.
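Stripped to its essentials, that definition amounts to a simple loop: mirror the asset’s state from telemetry, then run what-if simulations on the mirror before touching the real thing. Below is a minimal Python sketch of that loop; the asset, field names and toy physics are illustrative assumptions.

```python
# Minimal sketch of the core digital twin loop: stay in sync with telemetry,
# then run what-if scenarios on the virtual copy before changing the real
# asset. The asset, field names and toy physics are illustrative only.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DigitalTwin:
    asset_id: str
    state: Dict[str, float] = field(default_factory=dict)

    def ingest(self, telemetry: Dict[str, float]) -> None:
        """Continuously update the twin from IoT sensor readings."""
        self.state.update(telemetry)

    def simulate(self, change: Dict[str, float]) -> Dict[str, float]:
        """Evaluate a proposed change on a copy of the current state."""
        scenario = {**self.state, **change}
        # A real twin would run a physics or process simulation here.
        scenario["motor_temp_c"] = (scenario.get("motor_temp_c", 25.0)
                                    + 4.0 * scenario.get("line_speed_mps", 0.0))
        return scenario

twin = DigitalTwin("conveyor-07")
twin.ingest({"line_speed_mps": 1.2, "motor_temp_c": 41.0})
print(twin.simulate({"line_speed_mps": 1.8}))   # test a faster speed virtually
```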

Digital twins are shaking up operations of businesses. The worldwide market for digital twin platforms is forecast to reach $86 billion by 2028, according to Grand View Research. Its report cites COVID-19 as a catalyst for the adoption of digital twins in specific industries.

What’s Driving Digital Twins? 

The Internet of Things is revving up digital twins.

IoT is helping connected machines and devices share data with their digital twins and vice versa. That’s because digital twins are always-on, continuously updated, computer-simulated versions of the real-world, IoT-connected things or processes they represent.

Digital twins are virtual representations that can capture the physics of structures and changing conditions internally and externally, as measured by myriad connected sensors driven by edge computing. They can also run simulations within the virtualizations to test for problems and seek improvements through service updates.

Robotics development and autonomous vehicles are just two of a growing number of areas where digital twins are used to mimic physical equipment and environments.

“Autonomous vehicles at a very simple level are robots that operate in the open world, striving to avoid contact with anything,” said Rev Lebaredian, vice president of Omniverse and Simulation Technology at NVIDIA. “Eventually we’ll have sophisticated autonomous robots working alongside humans in settings like kitchens — manipulating knives and other dangerous tools. We need digital twins of the worlds they are going to be operating in, so we can teach them safely in the virtual world before transferring their intelligence into the real world.”

Digital Twins in 3D Virtual Environments  

Shared virtual 3D worlds are bringing people together to collaborate on digital twins.

The interactive 3D virtual universe is evident in gaming. Online social games such as Fortnite and the user-generated virtual world of Roblox offer a glimpse of the potential of interactions.

Video conferencing calls in VR, with participants existing as avatars of themselves in a shared virtual conference room, are a step toward realizing the possibilities for the enterprise.

Today, the tools exist to develop these shared virtual worlds on a common virtual collaboration platform.

Omniverse Replicator for Digital Twin Simulations

At GTC, NVIDIA unveiled Omniverse Replicator to help develop digital twins. It’s a synthetic-data-generation engine that produces physically simulated data for training deep neural networks.

Along with this, the company introduced two implementations of the engine for applications that generate synthetic data: NVIDIA DRIVE Sim, a virtual world for hosting the digital twin of autonomous vehicles, and NVIDIA Isaac Sim, a virtual world for the digital twin of manipulation robots.

Autonomous vehicles and robots developed using this data can master skills across an array of virtual environments before applying them in the real world.
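The sketch below shows the shape of that synthetic-data workflow in plain Python: randomize scene parameters, render, and keep exact ground-truth labels with every frame. It is a schematic illustration, not the Omniverse Replicator API.

```python
# Schematic domain-randomization loop (not the Omniverse Replicator API):
# randomize scene parameters per frame and record perfect ground-truth labels
# with every rendered image.
import random

def random_scene() -> dict:
    return {
        "lighting_lux": random.uniform(200, 20_000),
        "camera_height_m": random.uniform(1.2, 2.0),
        "object_yaw_deg": random.uniform(0, 360),
        "texture_id": random.randrange(100),
    }

def render(scene: dict) -> str:
    # Stand-in for a physically based renderer; returns an image handle.
    return f"frame_lux{scene['lighting_lux']:.0f}_yaw{scene['object_yaw_deg']:.0f}.png"

def generate_dataset(num_frames: int = 5):
    dataset = []
    for _ in range(num_frames):
        scene = random_scene()
        image = render(scene)
        label = {"object_yaw_deg": scene["object_yaw_deg"]}   # exact ground truth
        dataset.append((image, label))
    return dataset

for image, label in generate_dataset():
    print(image, label)
```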

Based on Pixar’s Universal Scene Description and NVIDIA RTX technology, NVIDIA Omniverse is the world’s first scalable, multi-GPU physically accurate world simulation platform.

Omniverse offers users the ability to connect to multiple software ecosystems — including Epic Games Unreal Engine, Reallusion, OnShape, Blender and Adobe — that can assist millions of users.

The reference development platform is modular and can be extended easily. Teams across NVIDIA have enlisted the platform to build core simulation apps such as the previously mentioned NVIDIA Isaac Sim for robotics and synthetic data generation, and NVIDIA DRIVE Sim.

DRIVE Sim recreates real-world driving scenarios in a virtual environment, enabling testing and development of rare and dangerous use cases. In addition, because the simulator has a perfect understanding of the ground truth in any scene, its data can be used to train the deep neural networks behind autonomous vehicle perception.

As shown in BMW Group’s factory of the future, Omniverse’s modularity and openness allow it to work with several other NVIDIA platforms, such as the NVIDIA Isaac platform for robotics, NVIDIA Metropolis for intelligent video analytics, and the NVIDIA Aerial software development kit, which brings GPU-accelerated, software-defined 5G wireless radio access networks to these environments. It also works with third-party software, so users and companies can continue to use their own tools.

How Are Digital Twins Coming Online?

Building a digital twin and deploying its features requires corralling AI resources.

NVIDIA Base Command Platform enables enterprises to deploy large-scale AI infrastructure. It optimizes resources for users and teams, and it can monitor the workflow from early development to production.

Base Command was developed to support NVIDIA’s in-house research team with AI resources. It helps users manage available GPU resources and select databases, workspaces and container images.

It manages the lifecycle of AI development, including workload management and resource sharing, providing both a graphical user interface and a command line interface, and integrated monitoring and reporting dashboards. It delivers the latest NVIDIA updates directly into your AI workflows.

Think of it as the compute engine of AI.

How Are Digital Twins Managed?

NVIDIA Fleet Command provides remote AI management.

Implementing AI from digital twins in the real world requires a deployment platform to handle updates to the thousands, or even millions, of machines and devices at the edge.

NVIDIA Fleet Command is a cloud-based service accessible from the NVIDIA NGC hub of GPU-accelerated software to securely deploy, manage and scale AI applications across edge-connected systems and devices.

Fleet Command enables fulfillment centers, manufacturing facilities, retailers and many others to remotely implement AI updates.

How Are Digital Twins Advancing?

Digital twins enable the autonomy of things. They can be used to control a physical counterpart autonomously.

An electric vehicle maker, for example, might use a digital twin of a sedan to run simulations on software updates. And when the simulations show improvements to the car’s performance or solve a problem, those software updates can be dispatched over the air to the physical vehicle.

Siemens Energy is creating digital twins to support predictive maintenance of power plants. A digital twin of this scale promises to reduce downtime and help save utility providers an estimated $1.7 billion a year, according to the company.

Passive Logic, a startup based in Salt Lake City, offers an AI platform to engineer and autonomously operate the IoT components of buildings. Its AI engine understands how building components work together, down to the physics, and can run simulations of building systems.

The platform can take in multiple data points and make control decisions to optimize operations autonomously. It compares this optimal control path with actual sensor data, applies machine learning and learns how to operate the building better over time.
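A heavily simplified, hypothetical sketch of that compare-and-learn loop is below; the one-parameter “building model” and the numbers are invented for illustration and are not Passive Logic’s platform.

```python
# Hypothetical compare-and-learn loop: pick a setpoint from the building
# model, compare the model's prediction with actual sensor data, and blend
# the model toward reality over time. Entirely illustrative.
def control_step(target_c: float, measured_c: float, gain: float,
                 lr: float = 0.2):
    setpoint = target_c / gain                  # model-based control decision
    implied_gain = measured_c / setpoint        # what the sensors imply the gain really is
    gain = (1 - lr) * gain + lr * implied_gain  # learn from the mismatch
    return setpoint, gain

gain = 0.90                                     # initial model of the building
for measured in (20.3, 20.8, 21.4, 21.9):       # simulated sensor readings
    setpoint, gain = control_step(target_c=22.0, measured_c=measured, gain=gain)
    print(f"setpoint={setpoint:.2f} C  learned_gain={gain:.3f}")
```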

Trains are on a fast track to autonomy as well, and digital twins are being developed to help get there. They’re being used in simulations for features such as automated braking and collision detection systems, enabled by AI run on NVIDIA GPUs.

What Is the History of Digital Twins?

By many accounts, NASA was the first to introduce the notion of the digital twin. While clearly not connected in the Internet of Things way, NASA’s early twin concept and its usage share many similarities with today’s digital twins.

NASA began with the digital twin idea as early as the 1960s. The space agency illustrated its enormous potential in the Apollo 13 moon mission. NASA had set up simulators of systems on the Apollo 13 spacecraft, which could get updates from the real ship in outer space via telecommunications. This allowed NASA engineers to run situation simulations between astronauts and engineers ahead of departure, and it came in handy when things went awry on that mission in 1970.

Engineers on the ground were able to troubleshoot with the astronauts in space, referring to the models on Earth and saving the mission from disaster.

What Types of Digital Twins Are There?

Smart Cities Sims

Smart cities are popping up everywhere. Using video cameras, edge computing and AI, cities are able to understand everything from parking to traffic flow to crime patterns. Urban planners can study the data to help draw up and improve city designs.

Digital twins of smart cities can enable better planning of construction as well as constant improvements in municipalities. Smart cities are building 3D replicas of themselves to run simulations. These digital twins help optimize traffic flow, parking, street lighting and many other aspects to make life better in cities, and these improvements can be implemented in the real world.

Dassault Systèmes has helped build digital twins around the world. In Hong Kong, the company presented examples for a walkability study, using a 3D simulation of the city for visualization.

NVIDIA Metropolis is an application framework, a set of developer tools and a large ecosystem of specialist partners that help developers and service providers better instrument physical space and build smarter infrastructure and spaces through AI-enabled vision. The platform spans AI training to inference, facilitating edge-to-cloud deployment, and it includes enterprise management tools like Fleet Command to better manage fleets of edge nodes.

Earth Simulation Twins 

Digital twins are even being applied to climate modeling.

NVIDIA CEO Jensen Huang disclosed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change.

Named Earth-2, or E-2, the system would create a digital twin of Earth in Omniverse.

Separately, the European Union has launched Destination Earth, an effort to build a digital simulation of the planet. The plan is to help scientists accurately map climate development as well as extreme weather.

Supporting an EU mandate for achieving climate neutrality by 2050, the digital twin effort would be rendered at one-kilometer scale and based on continuously updated observational data from climate, atmospheric and meteorological sensors. It also plans to take into account measurements of the environmental impacts of human activities.

The Destination Earth digital twin project is predicted to require a system with 20,000 GPUs to operate at full scale, according to a paper published in Nature Computational Science. Simulation insights can enable scientists to develop and test scenarios. This can help inform policy decisions and sustainable development planning.

Such work can help assess drought risk, monitor rising sea levels and track changes in the polar regions. It can also be used for planning on food and water issues, and renewable energy such as wind farms and solar plants. The goal is for the main digital modeling platform to be operating by 2023, with the digital twin live by 2027.

Data Center Networking Simulation

Networking is an area where digital twins are reducing downtime for data centers.

Over time, networks have become more complicated. The scale of networks, the number of nodes and the interoperability between components add to their complexity, affecting preproduction and staging operations.

Network digital twins speed up initial deployments by pretesting routing, security, automation and monitoring in simulation. They also enhance ongoing operations, including validating network change requests in simulation, which reduces maintenance times.

Networking operations have also evolved to more advanced capabilities with the use of APIs and automation. And streaming telemetry — think IoT-connected sensors for devices and machines — allows for constant collection of data and analytics on the network for visibility into problems and issues.

The NVIDIA Air infrastructure simulation platform enables network engineers to host digital twins of data center networks.

Ericsson, a maker of telecommunications equipment, is combining decades of radio network simulation expertise with NVIDIA Omniverse Enterprise.

The Stockholm-based company is building city-scale digital twins in NVIDIA Omniverse to help accurately simulate the interplay between 5G cells and the environment to maximize performance and coverage.

Automotive Manufacturing Twins

BMW Group, which has 31 factories around the world, is collaborating with NVIDIA on digital twins. The German automaker is relying on NVIDIA Omniverse Enterprise to run factory simulations to optimize its operations.

Its factories provide more than 100 options for each car, and more than 40 BMW models, offering 2,100 possible configurations of a new vehicle. Some 99 percent of the vehicles produced in BMW factories are custom configurations, which creates challenges for keeping materials stocked on the assembly line.

To help maintain the flow of materials for its factories, BMW Group is also harnessing the NVIDIA Isaac robotics platform to deploy a fleet of robots for logistics to improve the distribution of materials in its production environment. These human-assisting robots, which are put into simulation scenarios with digital humans in pre-production, enable the company to safely test out robot applications on the factory floor of the digital twin before launching into production.

Virtual simulations also enable the company to optimize the assembly line as well as worker ergonomics and safety. Planning experts from different regions can connect virtually with NVIDIA Omniverse, which lets global 3D design teams work together simultaneously across multiple software suites in a shared virtual space.

NVIDIA Omniverse Enterprise is enabling digital twins for many different industrial applications.

Architecture, Engineering and Construction

Building design teams face a growing demand for efficient collaboration, faster iteration on renderings, and expectations for accurate simulation and photorealism.

These demands can become even more challenging when teams are dispersed worldwide.

Creating digital twins in Omniverse for architects, engineers and construction teams to assess designs together can quicken the pace of development, helping contracts run on time.

Teams on Omniverse can be brought together virtually in a single, interactive platform — even when simultaneously working in different software applications — to rapidly develop architectural models as if they are in the same room and simulate with full physical accuracy and fidelity.

Retail and Fulfillment

Logistics for order fulfillment is a massive industry of moving parts. Fulfillment centers now are aided by robots to help workers avoid injury and boost their efficiency. It’s an environment filled with cameras driven by AI and edge computing to help rapidly pick, pull and pack products. It’s how one-day deliveries arrive at our doors.

The use of digital twins means that much of this can be created in a virtual environment, and simulations can be run to eliminate bottlenecks and other problems.

Kinetic Vision is reinventing intelligent fulfillment and distribution centers with digital twins through digitization and AI. Successfully implementing a network of intelligent stores and fulfillment centers requires robust information, data and operational technologies to enable innovative edge computing and AI solutions like real-time product recognition. This drives faster, more agile product inspections and order fulfillment.

Energy Industry Twins 

Siemens Energy is relying on the NVIDIA Omniverse platform to create digital twins to support predictive maintenance of power plants.

Using NVIDIA Modulus software frameworks, running on NVIDIA A100 Tensor Core GPUs, Siemens Energy can simulate the corrosive effects of heat, water and other conditions on metal over time to fine-tune maintenance needs.

Hydrocarbon Exploration 

Oil companies face huge risks in seeking to tap new reservoirs or reassess production-stage fields with the least financial and environmental downside. Drilling can cost hundreds of millions of dollars. After locating hydrocarbons, these energy companies need to quickly figure out the most profitable strategies for new or ongoing production.

Digital twins for reservoir simulations can save many millions of dollars and avoid environmental problems. Using technical software applications, these companies can model how water and hydrocarbons flow under the ground amid wells. This allows them to evaluate potentially problematic situations and virtual production strategies on supercomputers.

Having assessed the risks beforehand, in the digital twins, these exploration companies can minimize losses when committing to new projects. Real-world versions in production can also be optimized for better output based on analytics from their digital doppelganger.

Airport Efficiencies

Digital twins can enable airports to improve customer experiences. For instance, video cameras could monitor Transportation Security Administration, or TSA, checkpoints and apply AI to analyze bottlenecks at peak hours. Those bottlenecks could be addressed in digital models, and the fixes then moved into production to reduce missed flights. Baggage handling video can be assessed in the digital environment to find ways to ensure luggage arrives on time.

Airplane turnarounds can benefit, too. Many vendors service arriving planes to get them turned around and back on the runway for departures. Video can help airlines track these vendors to ensure timely turnarounds. Digital twins can also analyze how those services are coordinated, optimizing workflows before changes are rolled out in the real world.

Airlines can then hold their vendors accountable for quickly carrying out services. Caterers, cleaners, refueling, trash and waste removal and other service providers all have what’s known as service-level agreements with airlines to help keep the planes running on time. All of these activities can be run as simulations in the digital world and then applied to scheduling in production for real-world results that help reduce departure delays.

NVIDIA Metropolis helps to process massive amounts of video from the edge so that airports and other industries can analyze operations in real time and derive insights from analytics.

What’s the Future for Digital Twins?

Digital twin simulations have been simmering for half a century. But the past decade’s advances in GPUs, AI and software platforms are heating up their adoption amid this higher-fidelity era of more immersive experiences.    

Increasing penetration of virtual reality and augmented reality will accelerate this work.

Worldwide sales of VR headsets are expected to increase from roughly 7 million in 2021 to more than 28 million in 2025, according to analyst firm IDC.

That’s a lot more headset-connected, content-consuming eyeballs for virtual environments.

And all those in it will be able to access the NVIDIA Omniverse platform for AI, human and robot interactions, and infinite simulations, driving a wild ride of advances from digital twins.

“There has been talk of virtual worlds and digital twins for years. We’re right at the beginning of this transition into reality, much as AI became viable and created an explosion of possibilities,” said NVIDIA’s Lebaredian.

Buckle up for the adventure.

The post What Is a Digital Twin? appeared first on The Official NVIDIA Blog.

Read More

Startup Surge: Utility Feels Power of Computer Vision to Track its Lines 

It was the kind of message Connor McCluskey loves to find in his inbox.

As a member of the product innovation team at FirstEnergy Corp. — an electric utility serving 6 million customers from central Ohio to the New Jersey coast — his job is to find technologies that open new revenue streams or cut costs.

In the email, Chris Ricciuti, the founder of Noteworthy AI, explained his ideas for using edge computing to radically improve how utilities track their assets. For FirstEnergy, those assets include tens of millions of devices mounted on millions of poles across more than 269,000 miles of distribution lines.

Bucket Trucks Become Smart Cameras

Ricciuti said his startup aimed to turn every truck in a utility’s fleet into a smart camera that takes pictures of every pole it passes. What’s more, Noteworthy AI’s software would provide the location of the pole, identify the gear on it and help analyze its condition.

“I saw right away that this could be a game changer, so I called him,” said McCluskey.

In the U.S. alone, utilities own 185 million poles. They spend tens, if not hundreds, of millions of dollars a year trying to track the transformers, fuses and other devices on them, as well as the vegetation growing around them.

Utilities typically send out workers each year to manually inspect a fraction of their distribution lines. It’s an inventory that can take a decade, yet the condition of each device is critical to delivering power safely.

5x More Images in 30 Days

In a pilot test last summer, Noteworthy AI showed how edge computing gets better results.

In 30 days, two FirstEnergy trucks, outfitted with the startup’s smart cameras, collected more than 5,000 high-res images of its poles. That expanded the utility’s database more than fivefold.

“People were astounded at what we could do in such a short time frame,” said McCluskey.

What’s more, the pictures were of higher quality than those in the utility’s database. That would help eliminate wasted trips when actual line conditions vary from what engineers expect to find.

Noteworthy AI computer vision system mounted on a FirstEnergy truck
The startup’s camera system can be mounted on a utility truck in less than an hour.

Use Cases Multiply

News of the pilot program spread to other business units.

A team that inspects FirstEnergy’s 880,000 streetlights and another responsible for tracking vegetation growth around its lines wanted to try the technology. Both saw the value of having more and better data.

So, an expanded pilot is in the works with more trucks over a larger area.

It’s too early to estimate the numbers, but McCluskey said he “felt right away we could find some significant cost savings with this technology — in a couple years I can imagine its use expanded to all our states.”

An Inside Look at Edge Computing

In a unit the size of a small cake box that attaches to a truck with magnets or suction cups, Noteworthy AI packs two cameras and communications gear. It links to a smaller unit inside the cab that runs the AI and processes the images on an NVIDIA Jetson Xavier NX.

“We developed a pretty sophisticated workflow that runs at the edge on Jetson,” Ricciuti said.

It uses seven AI models. One model looks for poles in images taken at 30 frames per second. When it finds one, it triggers a higher-res camera to take bursts of 60-megabyte pictures.

Other models identify gear on the poles and determine which images to send to a database in the cloud.
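The sketch below captures that trigger-and-filter flow in hypothetical Python; the detector, burst capture and equipment classifier are stand-in stubs, not Noteworthy AI’s code.

```python
# Hypothetical trigger-and-filter flow (not Noteworthy AI's code): a detector
# watches the 30 fps stream, a pole sighting triggers a high-res burst, and
# only frames with identifiable equipment are queued for upload.
import random

def detect_pole(frame) -> bool:
    return random.random() < 0.05              # stand-in for the pole detector

def capture_high_res_burst(n: int = 3):
    return [f"hires_{random.randrange(10_000)}.jpg" for _ in range(n)]

def identify_equipment(image) -> list:
    return random.choice([[], ["transformer"], ["fuse", "crossarm"]])

def process_stream(frames):
    upload_queue = []
    for frame in frames:
        if not detect_pole(frame):
            continue                            # keep scanning the live stream
        for image in capture_high_res_burst():
            gear = identify_equipment(image)
            if gear:                            # only worthwhile frames leave the truck
                upload_queue.append({"image": image, "equipment": gear})
    return upload_queue

print(process_stream(frames=range(300)))        # roughly 10 seconds at 30 fps
```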

Noteworthy AI camera processes images with NVIDIA Jetson
Designing a fast, resilient camera was even more challenging than implementing AI, said Ricciuti.

“We’re doing all this AI compute at the edge on Jetson, so we don’t have to send all the images to the cloud — it’s a huge cost savings,” Ricciuti said.

“With customer use cases growing, we’ll graduate to products like Jetson AGX Orin in the future — NVIDIA has been awesome in computing at the edge,” he added.

Software, Support Speeds Startup

The startup uses NVIDIA TensorRT, software that keeps its AI models trim so they run fast. It also employs the NVIDIA JetPack SDK, with drivers and libraries for computer vision and deep learning, as well as ROS, the Robot Operating System, now accelerated on Jetson.
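For a sense of what the TensorRT step looks like, here is a sketch of building an FP16 engine from an ONNX model with the TensorRT Python API (TensorRT 8.x style; exact calls vary by version, and the model filename is a placeholder).

```python
# Sketch of optimizing an ONNX model with the TensorRT Python API (TensorRT
# 8.x style; exact calls vary by version). "pole_detector.onnx" is a
# placeholder filename, not the startup's actual model.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("pole_detector.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)           # half precision suits Jetson-class GPUs

engine = builder.build_serialized_network(network, config)
with open("pole_detector.plan", "wb") as f:
    f.write(engine)                              # deployable TensorRT engine
```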

In addition, Ricciuti ticks off three benefits from being part of NVIDIA Inception, a program designed to nurture cutting-edge startups.

“When we have engineering questions, we get introduced to technical people who unblock us; we meet potential customers when we’re ready to go to market; and we get computer credits for GPUs in the cloud to train our models,” he said.

AI Spells Digital Transformation

The GPUs, software and support help Ricciuti do the work he loves: finding ways AI can transform legacy practices at large, regulated companies.

“We’re just seeing the tip of the iceberg of what we can do as people are being forced to innovate in the face of problems like climate change, and we’re getting a lot of interest from utilities with large distribution networks,” he said.

Learn more about how NVIDIA is accelerating innovation in the energy industry.

The post Startup Surge: Utility Feels Power of Computer Vision to Track its Lines  appeared first on The Official NVIDIA Blog.

Read More