Reinventing Retail: Lowe’s Teams With NVIDIA and Magic Leap to Create Interactive Store Digital Twins

With tens of millions of weekly transactions across its more than 2,000 stores, Lowe’s helps customers achieve their home-improvement goals. Now, the Fortune 50 retailer is experimenting with high-tech methods to elevate both the associate and customer experience.

Using NVIDIA Omniverse Enterprise to visualize and interact with a store’s digital data, Lowe’s is testing digital twins in Mill Creek, Wash., and Charlotte, N.C. Its ultimate goal is to empower its retail associates to better serve customers, collaborate with one another in new ways and optimize store operations.

“At Lowe’s, we are always looking for ways to reimagine store operations and remove friction for our customers,” said Seemantini Godbole, executive vice president and chief digital and information officer at Lowe’s. “With NVIDIA Omniverse, we’re pulling data together in ways that have never been possible, giving our associates superpowers.”

Augmented Reality Restocking and ‘X-Ray Vision’

With its interactive digital twin, Lowe’s is exploring a variety of novel augmented reality use cases, including reconfiguring layouts, restocking support, real-time collaboration and what it calls “X-ray vision.”

Wearing a Magic Leap 2 AR headset, store associates can interact with the digital twin. This AR experience helps an associate compare what a store shelf should look like with what it actually looks like, and ensure it’s stocked with the right products in the right configurations.

And this isn’t just a single-player activity. Store associates on the ground can communicate and collaborate with centralized store planners via AR. For example, if a store associate notices an improvement that could be made to a proposed planogram for their store, they can flag it on the digital twin with an AR “sticky note.”

Lastly, a benefit of the digital twin and Magic Leap 2 headset is the ability to explore “X-ray vision.” Traditionally, a store associate might need to climb a ladder to scan or read small labels on cardboard boxes held in a store’s top stock. With an AR headset and the digital twin, the associate could look up at a partially obscured cardboard box from ground level, and, thanks to computer vision and Lowe’s inventory application programming interfaces, “see” what’s inside via an AR overlay.
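A rough sketch of how such an overlay might be wired up follows; the endpoint, fields and function here are hypothetical placeholders for illustration, not Lowe’s actual inventory APIs. The headset’s vision system resolves a box to an ID, and a lookup returns what the box holds:

```python
import requests

# Hypothetical endpoint and schema; Lowe's real inventory APIs are not public.
INVENTORY_API = "https://inventory.example.com/v1/boxes"

def xray_overlay(box_id: str, store_id: str) -> dict:
    """Resolve a partially obscured box to its contents for an AR overlay."""
    resp = requests.get(
        f"{INVENTORY_API}/{box_id}",
        params={"store": store_id},
        timeout=2.0,  # overlays must resolve quickly to feel live in AR
    )
    resp.raise_for_status()
    item = resp.json()
    # The AR layer would anchor this text next to the box in the associate's view.
    return {"sku": item["sku"], "name": item["name"], "qty": item["quantity"]}
```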

Store Data Visualization and Simulation

Home-improvement retail is a tactile business. And when making decisions about how to create a new store display, a common way for retailers to see what works is to build a physical prototype, put it out into a brick-and-mortar store and examine how customers react.

With NVIDIA Omniverse and AI, Lowe’s is exploring more efficient ways to approach this process.

Just as e-commerce sites gather analytics to optimize the customer shopping experience online, the digital twin enables new ways of viewing sales performance and customer traffic data to optimize the in-store experience. 3D heatmaps and visual indicators that show the physical distance between items frequently bought together can help associates place those items near each other. Within a 100,000-square-foot location, for example, minimizing the number of steps needed to pick up an item is critical.
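A minimal sketch of that co-purchase analysis, using toy order and shelf-coordinate data (the item names, coordinates and thresholds are invented for illustration):

```python
from collections import Counter
from itertools import combinations
import math

# Toy stand-ins for order history and planogram coordinates (feet).
orders = [{"paint", "brush"}, {"paint", "brush", "tape"}, {"drill", "bits"}]
shelf_xy = {"paint": (10, 4), "brush": (85, 30), "tape": (12, 5),
            "drill": (40, 8), "bits": (41, 8)}

pair_counts = Counter(
    pair for order in orders for pair in combinations(sorted(order), 2)
)

# Popular pairs stocked far apart are candidates for co-location.
for (a, b), n in pair_counts.most_common():
    dist = math.dist(shelf_xy[a], shelf_xy[b])
    if n >= 2 and dist > 50:
        print(f"{a} + {b}: bought together {n}x, {dist:.0f} ft apart")
```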

Using historical order and product location data, Lowe’s can also use NVIDIA Omniverse to simulate what might happen when a store is set up differently. Using AI avatars created in Lowe’s Innovation Labs, the retailer can simulate how far customers and associates might need to walk to pick up items that are often bought together.

NVIDIA Omniverse allows for hundreds of simulations to be run in a fraction of the time that it takes to build a physical store display, Godbole said.

Expanding Into the Metaverse

Lowe’s also announced today at NVIDIA GTC that it will soon make the over 600 photorealistic 3D product assets from its home-improvement library free for other Omniverse creators to use in their virtual worlds. All of these products will be available in the Universal Scene Description format on which Omniverse is built, and can be used in any metaverse created by developers using NVIDIA Omniverse Enterprise.
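Because the assets ship as USD, pulling one into a scene can be as simple as adding a reference with the open-source pxr API; the asset path below is a hypothetical placeholder:

```python
from pxr import Usd, UsdGeom

# Hypothetical asset location; the actual library URLs aren't listed here.
ASSET = "omniverse://localhost/Library/lowes_cabinet.usd"

stage = Usd.Stage.CreateNew("kitchen_scene.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# Reference the product asset rather than copying it, so upstream updates
# to the asset flow into every world that uses it.
cabinet = stage.DefinePrim("/World/Cabinet")
cabinet.GetReferences().AddReference(ASSET)
stage.GetRootLayer().Save()
```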

For Lowe’s, the future of home improvement is one in which AI, digital twins and mixed reality play a part in the daily lives of its associates, Godbole said. With NVIDIA Omniverse, the retailer is taking steps to build this future – and there’s a lot more to come as it tests new strategies.

Join a GTC panel discussion on Wednesday, Sept. 21, with Lowe’s Innovation Labs VP Cheryl Friedman and Senior Director of Creative Technology Mason Sheffield, who will discuss how Lowe’s is using AI and NVIDIA Omniverse to make the home-improvement retail experience even better.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register free for the conference — running through Thursday, Sept. 22 — to explore how digital twins are transforming industries.

Experience the Future of Vehicle Infotainment: NVIDIA DRIVE Concierge Brings Customized AI to Every Seat

With NVIDIA DRIVE, in-vehicle infotainment, or IVI, is so much more than just giving directions and playing music.

NVIDIA founder and CEO Jensen Huang demonstrated the capabilities of a truly intelligent IVI experience during today’s GTC keynote. Using centralized, high-performance compute, the NVIDIA DRIVE Concierge platform spans traditional cockpit and cluster capabilities, as well as personalized, AI-powered safety, convenience and entertainment features for every occupant.

Drivers in the U.S. spend an average of nearly 450 hours in their car every year. With just a traditional cockpit and infotainment display, those hours can seem even longer.

DRIVE Concierge makes time in vehicles more enjoyable, convenient and safe, extending intelligent features to every passenger using the DRIVE AGX compute platform, DRIVE IX software stack and Omniverse Avatar Cloud Engine (ACE).

These capabilities include crystal-clear graphics and visualizations in the cockpit and cluster, intelligent digital assistants, driver and occupant monitoring, and streaming content such as games and movies.

With DRIVE Concierge, every passenger can enjoy their own intelligent experience.

Cockpit Capabilities

By running on the cross-domain DRIVE platform, DRIVE Concierge can virtualize and host multiple virtual machines on a single chip, rather than across distributed computers, streamlining development.

With this centralized architecture, DRIVE Concierge seamlessly orchestrates driver information, cockpit and infotainment functions. It supports the Android Automotive operating system, so automakers can easily customize and scale their IVI offerings.

And digital cockpit and cluster features are just the beginning. DRIVE Concierge extends this premium functionality to the entire vehicle, with world-class confidence view, video-conferencing capabilities, digital assistants, gaming and more.

Visualizing Intelligence

Speed, fuel range and distance traveled are key data for human drivers to be aware of. When AI is at the wheel, however, a detailed view of the vehicle’s perception and planning layers is also crucial.

DRIVE Concierge is tightly integrated with the DRIVE Chauffeur platform to provide high-quality, 360-degree, 4D visualization with low latency. Drivers and passengers can always see what’s in the mind of the vehicle’s AI, with beautiful 3D graphics.

This visualization is critical to building trust between the autonomous vehicle and its passengers, so occupants can be confident in the AV system’s perception and planned path.

How May AI Help You?

In addition to revolutionizing driving, AI is creating a more intelligent vehicle interior with personalized digital assistants.

Omniverse ACE is a collection of cloud-based AI models and services for developers to easily build, customize and deploy interactive avatars.

With ACE, AV developers can create in-vehicle assistants that are easily customizable with speech AI, computer vision, natural language understanding, recommendation engines and simulation technologies.

These avatars can make recommendations, book reservations, access vehicle controls and provide alerts for situations such as a valuable item left behind.

Game On

With software-defined capabilities, cars are becoming living spaces, complete with the same entertainment available at home.

NVIDIA DRIVE Concierge lets passengers watch videos and experience high-performance gaming wherever they go. Users can choose from their favorite apps and stream videos and games on any vehicle screen.

By using the NVIDIA GeForce NOW cloud gaming service, passengers can access more than 1,400 titles without the need for downloads, benefitting from automatic updates and unlimited cloud storage.

Safety and Security

Intelligent interiors provide an added layer of safety to vehicles, in addition to convenience and entertainment.

DRIVE Concierge uses interior sensors and dedicated deep neural networks for driver monitoring, ensuring the driver’s attention stays on the road when the human is in control.

It can also perform passenger monitoring to make sure that occupants are safe and no precious cargo is left behind.
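A toy sketch of the shape such a monitoring loop might take; the model, camera interface and class labels are hypothetical placeholders, not the DRIVE IX API:

```python
import numpy as np

GAZE_ON_ROAD = 0  # hypothetical class label for an attention classifier

def monitor(camera, model, off_road_frame_limit=30):
    """Yield alerts when the driver's gaze stays off the road too long."""
    consecutive_off = 0
    while True:
        frame = camera.read_frame()   # interior-camera image (placeholder API)
        probs = model.infer(frame)    # per-class probabilities from the DNN
        state = int(np.argmax(probs))
        consecutive_off = consecutive_off + 1 if state != GAZE_ON_ROAD else 0
        if consecutive_off >= off_road_frame_limit:
            consecutive_off = 0
            yield "ALERT: driver attention off road"  # escalate to cockpit HMI
```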

Using NVIDIA DRIVE Sim on Omniverse, developers can collaborate to design passenger interactions with such cutting-edge features in the vehicle.

By tapping into NVIDIA’s heritage of infotainment technology, DRIVE Concierge is revolutionizing the future of in-vehicle experiences.

NVIDIA DRIVE Thor Strikes AI Performance Balance, Uniting AV and Cockpit on a Single Computer

The next generation of autonomous vehicle computing is improving performance and efficiency at the speed of light.

During today’s GTC keynote, NVIDIA founder and CEO Jensen Huang unveiled DRIVE Thor, a superchip of epic proportions. The automotive-grade system-on-a-chip (SoC) is built on the latest CPU and GPU advances to deliver 2,000 teraflops of performance while reducing overall system costs.

DRIVE Thor succeeds NVIDIA DRIVE Orin in the company’s product lineup, incorporating the newest compute technology to accelerate industry deployment of intelligent-vehicle technology, targeting automakers’ 2025 models.

DRIVE Thor is the next generation in the NVIDIA AI compute roadmap.

Geely-owned premium EV maker ZEEKR will be the first customer for the next-generation platform, with production starting in 2025.

DRIVE Thor unifies traditionally distributed functions in vehicles — including digital cluster, infotainment, parking and assisted driving — for greater efficiency in development and faster software iteration.

Manufacturers can configure the DRIVE Thor superchip in multiple ways. They can dedicate all of the platform’s 2,000 teraflops to the autonomous driving pipeline, or use a portion for in-cabin AI and infotainment and another portion for driver assistance.
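A sketch of what that partitioning choice amounts to, expressed as a hypothetical configuration; this schema is illustrative only, not a DRIVE SDK interface:

```python
THOR_TOTAL_TFLOPS = 2000

# Option 1: dedicate everything to the autonomous-driving pipeline.
av_only = {"autonomous_driving": 2000}

# Option 2: split between driving, driver assistance and in-cabin AI.
split = {
    "autonomous_driving": 1200,    # perception, planning
    "driver_assistance": 400,      # active-safety functions
    "cabin_ai_infotainment": 400,  # avatars, monitoring, media
}

assert sum(split.values()) == THOR_TOTAL_TFLOPS
```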

Like the current-generation NVIDIA DRIVE Orin, DRIVE Thor uses the productivity of the NVIDIA DRIVE software development kit, is designed to be ASIL-D functionally safe, and is built on a scalable architecture, so developers can seamlessly port their past software development to the latest platform.

Lightning Fast

In addition to raw performance, DRIVE Thor delivers an incredible leap in deep neural network accuracy.

DRIVE Thor marks the first inclusion of a transformer engine in the AV platform family. The transformer engine is a new component of the NVIDIA GPU Tensor Core. Transformer networks process video data as a single perception frame, enabling the compute platform to process more data over time.

With 8-bit floating point (FP8) precision, the SoC introduces a new data type for automotive. Traditionally, AV developers see a loss in accuracy when moving from 32-bit floating point to 8-bit integer data formats. FP8 eases this transition, making it possible for developers to move to the smaller data type without sacrificing accuracy.
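The intuition is that floating-point formats keep relative error roughly constant across magnitudes, while a single-scale integer format loses small values. A quick numerical sketch (this simplified FP8-style rounding keeps 3 explicit mantissa bits as in E4M3, but ignores exponent clamping and denormals):

```python
import numpy as np

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0            # one scale for the whole tensor
    return np.round(x / scale).clip(-127, 127) * scale

def quantize_fp8_like(x):
    # Round the mantissa to 3 explicit bits; exponent range left unclamped.
    m, e = np.frexp(x)                         # x = m * 2**e, 0.5 <= |m| < 1
    return np.ldexp(np.round(m * 16) / 16, e)

x = np.float32(10.0) ** np.random.uniform(-3, 1, 10_000)  # wide dynamic range
for name, quantize in [("int8", quantize_int8), ("fp8", quantize_fp8_like)]:
    rel_err = np.abs(quantize(x) - x) / x
    print(f"{name}: median relative error {np.median(rel_err):.4f}")
```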

Additionally, DRIVE Thor uses updated Arm Poseidon AE cores, making it one of the highest-performance processors in the industry.

Multi-Domain Computing

DRIVE Thor is as efficient as it is powerful.

The SoC is capable of multi-domain computing, meaning it can partition tasks for autonomous driving and in-vehicle infotainment. This multi-compute domain isolation lets concurrent time-critical processes run without interruption. On one computer, the vehicle can simultaneously run Linux, QNX and Android.

Typically, these types of functions are controlled by tens of electronic control units distributed throughout a vehicle. Rather than relying on these distributed ECUs, manufacturers can now consolidate vehicle functions using DRIVE Thor’s ability to isolate specific tasks.

With DRIVE Thor, automakers can consolidate intelligent vehicle functions on a single SoC.

All vehicle displays, sensors and more can connect to this single SoC, simplifying what has been an incredibly complex supply chain for automakers.

Two Is Always Better Than One

If one DRIVE Thor seems incredible, try two.

Customers can use one DRIVE Thor SoC, or they can connect two via the latest NVLink-C2C chip interconnect technology to serve as a monolithic platform that runs a single operating system.

This capability provides automakers with the compute headroom and flexibility to build software-defined vehicles that are continuously upgradeable through secure, over-the-air updates.

Designed with the best of NVIDIA GPU technology, DRIVE Thor is truly an AV SoC of heroic proportions.

HEAVY.AI Delivers Digital Twin for Telco Network Planning and Operations Based on NVIDIA Omniverse

Telecoms began touting the benefits of 5G networks six years ago. Yet the race to deliver ultrafast wireless internet today resembles a contest between the tortoise and the hare, as some mobile network operators struggle with costly and complex network requirements.

Advanced data analytics company HEAVY.AI today unveiled solutions to put carriers on more even footing. Its initial product, HeavyRF, delivers a next-generation network planning and operations tool based on the NVIDIA Omniverse platform for creating digital twins.

“Building out 5G networks globally will cost trillions of dollars over the next decade, and our telco network customers are rightly worried about how much of that is money not well spent,” said Jon Kondo, CEO of HEAVY.AI. “Using HEAVY advanced analytics and NVIDIA Omniverse-based real-time simulations, they’ll see big savings in time and money.”

HEAVY.AI also announced that Charter Communications is collaborating on incorporating the tool into modeling and planning for its Spectrum telco network, which serves 32 million customers across 41 U.S. states. The collaboration extends HEAVY’s existing analytics relationship with Charter into 5G network planning.

“HEAVY.AI’s new digital twin capabilities give us a way to explore and fine-tune our expanding 5G networks in ways that weren’t possible before,” said Jared Ritter, senior director of analytics and automation at Charter Communications.

Without the digital twin approach, telco operators must either physically place microcell towers in densely populated areas to understand the interaction between radio transmitters, the environment, and humans and devices on the move, or use tools that offer less detail about key factors such as tree density or high-rise interference.

Early deployments of 5G needed 300% more base stations for the same level of coverage offered by the previous generation, called Long Term Evolution (LTE), because of higher spectrum bands. And a 5G site consumes 300% more power and costs 4x more than an LTE site if deployed in the same way, according to researcher Analysys Mason.

Those sobering figures are prompting the industry to look for efficiencies. Harnessing GPU-accelerated analytics and real-time geophysical mapping, HEAVY.AI’s digital twin solution allows telcos to test radio frequency (RF) propagation scenarios in seconds, powered by the HeavyRF module. This results in significant time and cost savings, because the base stations and microcells can be more accurately placed and tuned at first installation.
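For a sense of the underlying math, the baseline every RF planner starts from is free-space path loss. The sketch below computes a toy coverage grid for one candidate site, with an invented flat clutter margin standing in for the tree-density and building effects that HeavyRF models in detail:

```python
import numpy as np

def fspl_db(d_km, f_mhz):
    """Free-space path loss: 20*log10(d) + 20*log10(f) + 32.44 (d in km, f in MHz)."""
    return 20 * np.log10(d_km) + 20 * np.log10(f_mhz) + 32.44

# Received power across a 10 km x 10 km grid around one candidate site.
xs, ys = np.meshgrid(np.linspace(-5, 5, 512), np.linspace(-5, 5, 512))
d = np.maximum(np.hypot(xs, ys), 1e-3)   # distance in km; avoid log(0) at the tower
tx_dbm, f_mhz = 40.0, 3500.0             # 3.5 GHz mid-band 5G
clutter_db = 25.0                        # crude stand-in for trees and buildings
rx_dbm = tx_dbm - fspl_db(d, f_mhz) - clutter_db
print(f"grid covered at >= -100 dBm: {(rx_dbm >= -100).mean():.0%}")
```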

The HeavyRF module supports telcos’ goals to plan, build and operate new networks more efficiently by tightly integrating key business information such as mobility and parcels data, as well as customer experience data, within RF planning workflows.

Using an RF-synchronized digital twin would enable planners at Charter Communications to optimize capacity and coverage, plus interactively see how changes in deployment patterns translate into customer acquisition and retention at the household level.

The goal is to use machine learning and big data pipelines to continuously mirror existing real-world conditions.

The digital twin will use the parallel computing capabilities of modern GPUs for visual simulation, as well as to generate physical simulations of RF signals using real-time RTX ray tracing, powered by NVIDIA Omniverse’s RTX Renderer.

For telcos, it’s not just about investing in traditional networks. With the rise of AI applications and services, these companies seek to lay the foundation for 5G-enabled devices, autonomous vehicles, appliances, robots and city infrastructure.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register for the conference — running through Thursday, Sept. 22 — to explore how digital twins are transforming industries.

Reconstructing the Real World in DRIVE Sim With AI

Autonomous vehicle simulation poses two challenges: generating a world with enough detail and realism that the AI driver perceives the simulation as real, and creating simulations at a large enough scale to cover all the cases on which the AI driver needs to be fully trained and tested.

To address these challenges, NVIDIA researchers have created new AI-based tools to build simulations directly from real-world data. NVIDIA founder and CEO Jensen Huang previewed the breakthrough during the GTC keynote.

This research includes award-winning work first published at SIGGRAPH, a computer graphics conference held last month.

Neural Reconstruction Engine

The Neural Reconstruction Engine is a new AI toolset for the NVIDIA DRIVE Sim simulation platform that uses multiple AI networks to turn recorded video data into simulation.

The new pipeline uses AI to automatically extract the key components needed for simulation, including the environment, 3D assets and scenarios. These pieces are then reconstructed into simulation scenes that have the realism of data recordings, but are fully reactive and can be manipulated as needed. Achieving this level of detail and diversity by hand is costly, time consuming and not scalable.

Environments and Assets

A simulation needs an environment in which to operate. The AI pipeline converts 2D video data from a real-world drive to a dynamic, 3D digital twin environment that can be loaded into DRIVE Sim.

A 3D simulation environment generated from recorded driving data using AI.

The DRIVE Sim AI pipeline follows a similar process to reconstruct other 3D assets. Engineers can use the assets to reconstruct the current scene or place them in a larger library of assets to be used in any simulation.

Using the asset-harvesting pipeline is key to growing the DRIVE Sim library and ensuring it matches the diversity and distribution of the real world.

Assets can be harvested from real-world data, turned into 3D objects and reused in other scenes. Here, the tow truck is reconstructed from the scene on the left and used in a different simulation shown on the right.

Scenarios

Scenarios are the events that unfold during a simulation, combining an environment with its assets.

The Neural Reconstruction Engine assigns AI-based behaviors to the actors in the scene, so that when presented with the original events, they behave precisely as they did in the real drive. However, since they have an AI behavior model, the actors in the simulation can respond and react to changes made by the AV or other scene elements.

Because these scenarios are all occurring in simulation, they can also be manipulated to add new situations. Timing and location of events can be altered. Developers can even incorporate entirely new elements, synthetic or real, to make a scenario more challenging, such as the addition of a child chasing a ball to the scene below.

Synthetic objects can be mixed with real-world scenarios.

Integration Into DRIVE Sim

Once the environment, assets and scenario have been extracted, they’re reassembled in DRIVE Sim to create a 3D simulation of the recorded scene or mixed with other assets to create a completely new scene.

DRIVE Sim provides the tools for developers to adjust dynamic and static objects, the vehicle’s path, and the location, orientation and parameters of the vehicle sensors.

The same scenes in DRIVE Sim are also used to generate pre-labeled synthetic data to train perception systems. Randomizations are applied on top of recreated scenes to add diversity to the training data. Building scenes out of real-world data greatly reduces the sim-to-real gap.

Reconstructed scenes can be augmented with synthetic assets and used to produce new data with ground truth for training AV perception systems.
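A minimal sketch of that randomization step, jittering scene parameters around a reconstructed scene to diversify training data; the parameter names and scene ID are illustrative placeholders, not DRIVE Sim’s API:

```python
import random

def randomize(base_scene: dict, rng: random.Random) -> dict:
    """Produce one randomized variant of a reconstructed scene."""
    variant = dict(base_scene)
    variant["sun_elevation_deg"] = rng.uniform(5, 85)   # time of day
    variant["fog_density"] = rng.uniform(0.0, 0.3)      # weather
    variant["extra_vehicles"] = rng.randint(0, 12)      # traffic density
    variant["texture_seed"] = rng.getrandbits(32)       # asset appearance
    return variant

base = {"scene_id": "reconstructed_intersection_042"}   # hypothetical ID
variants = [randomize(base, random.Random(seed)) for seed in range(100)]
```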

The ability to mix and match simulation formats is a significant advantage in comprehensively testing and validating AVs at scale. Engineers can manipulate events in a world that is responsive and matches their needs precisely.

The Neural Reconstruction Engine is the result of work by the research team at NVIDIA, and will be integrated into future releases of DRIVE Sim. This breakthrough will enable developers to take advantage of both physics-based and neural-driven simulation on the same cloud-based platform.

Meet the Omnivore: Christopher Scott Constructs Architectural Designs, Virtual Environments With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Growing up in a military family, Christopher Scott moved more than 30 times, which instilled in him “the ability to be comfortable with, and even motivated by, new environments,” he said.

Today, the environments he explores — and creates — are virtual ones.

As chief technical director for 3D design and visualization services at Infinite-Compute, Scott creates physically accurate virtual environments using familiar architectural products in conjunction with NVIDIA Omniverse Enterprise, a platform for connecting and building custom 3D pipelines.

With a background in leading cutting-edge engineering projects for the U.S. Department of Defense, Scott now creates virtual environments focused on building renovation and visualization for the architecture, engineering, construction and operations (AECO) industry.

These true-to-reality virtual environments — whether of electrical rooms, manufacturing factories, or modern home designs — enable quick, efficient design of products, processes and facilities before bringing them to life in the real world.

They also help companies across AECO and other industries save money, speed project completion and make designs interactive for customers — as will be highlighted at NVIDIA GTC, a global conference on AI and the metaverse, running online Sept. 19-22.

“Physically accurate virtual environments help us deliver client projects faster, while maintaining a high level of quality and performance consistency,” said Scott, who’s now based in Austin, Texas. “The key value we offer clients is the ability to make better decisions with confidence.”

To construct his visualizations, Scott uses Omniverse Create and Omniverse Connectors for several third-party applications: Trimble SketchUp for 3D drawing and design; Autodesk Revit for 3D design and 2D annotation of buildings; and Unreal Engine for creating walkthrough simulations and 3D virtual spaces.

In addition, he uses software like Blender for visual effects, motion graphics and animation, and PlantFactory for modeling 3D vegetation, which gives his virtual spaces a lively and natural aesthetic.

Project Speedups With Omniverse

Within just four years, Scott went from handling 50 projects a year to more than 3,500, he said.

Around 80 of his projects each month include lidar-to-point-cloud work, a complex process that involves transforming spatial data into a collection of coordinates for 3D models for manufacturing and design.

Using Omniverse doubles productivity for this demanding workload, he said, as it offers physically accurate photorealism and rendering in real time, as well as live-sync collaboration across users.

“Previously, members of our team functioned as individual islands of productivity,” Scott said. “Omniverse gave us the integrated collaboration we desired to enhance our effectiveness and efficiency.”

At Omniverse’s core is Universal Scene Description — an open-source, extensible 3D framework and common language for creating virtual worlds.

“Omniverse’s USD standard to integrate outputs from multiple software programs allowed our team to collaborate on a source-of-truth project — letting us work across time zones much faster,” said Scott, who further accelerates his workflow by running it on NVIDIA RTX GPUs, including the RTX A6000 on Infinite-Compute’s on-demand cloud infrastructure.

“It became clear very soon after appreciating the depth and breadth of Omniverse that investing in this pipeline was not just enabling me to improve current operations,” he added. “It provides a platform for future growth — for my team members and my organization as a whole.”
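The source-of-truth pattern Scott describes maps naturally onto USD’s layering: shared content lives in the root layer, while an individual’s scratch edits can target a session layer that never touches it. A small sketch using the open-source pxr API, with hypothetical project and prim paths:

```python
from pxr import Usd

# Hypothetical project path on a shared Nucleus server.
stage = Usd.Stage.Open("omniverse://localhost/Projects/retrofit.usd")

# Route this user's scratch edits to the session layer so the shared
# root layer remains the untouched source of truth.
stage.SetEditTarget(stage.GetSessionLayer())
panel = stage.GetPrimAtPath("/Building/Floor1/Panel_A")
panel.SetActive(False)  # temporarily hide a panel during design review
```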

While Scott says his work leans more technical than creative, he sees using Omniverse as a way to bridge these two sides of his brain.

“I’d like to think that adopting technologies like Omniverse to deliver cutting-edge solutions that have a meaningful and measurable impact on my clients’ businesses is, in its own way, a creative exercise, and perhaps even a work of art,” he said.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Hear about NVIDIA’s latest AI breakthroughs powering graphics and virtual worlds at GTC, running online Sept. 19-22. Register free now and attend the top sessions for 3D creators and developers to learn more about how Omniverse can accelerate workflows.

Join the NVIDIA Omniverse User Group to connect with the growing community and see Scott’s work in Omniverse celebrated.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

GFN Thursday Delivers Seven New Games This Week

TGIGFNT: thank goodness it’s GFN Thursday. Start your weekend early with seven new games joining the GeForce NOW library of over 1,400 titles.

Whether it’s streaming on an older-than-the-dinosaurs PC, a Mac that normally couldn’t dream of playing PC titles, or mobile devices – it’s all possible to play your way thanks to GeForce NOW.

Get Right Into the Gaming

Test your tactical skills in Isonzo, a new, authentic WWI first-person shooter.

Isonzo: The Great War on the Italian Front is brought to life and streaming from the cloud.

Battle among the scenic peaks, rugged valleys and idyllic towns of northern Italy. Choose from six classes based on historical combat roles and build a loadout from a selection of weapons, equipment and perks linked to that class. Shape a dynamic battlefield by laying sandbags and wire, placing ammo crates, deploying trench periscopes or sniper shields, and more.

Lead the charge to victory in this game and six more joining GeForce NOW this week.

Members can also discover impressive new prehistoric species with the Jurassic World Evolution 2: Late Cretaceous Pack DLC, available on GeForce NOW this week.

Inspired by the fascinating Late Cretaceous period, this pack includes four captivating species that roamed the land, sea and air over 65 million years ago, from soaring, stealthy hunters of the skies to one of the largest dinosaurs ever discovered.

Finally, kick off the weekend by telling us about a game that you love on Twitter or in the comments below.

Reinventing the Wheel: Gatik’s Apeksha Kumavat Accelerates Autonomous Delivery for Walmart and More

As consumers expect faster, cheaper deliveries, companies are turning to AI to rethink how they move goods.

Foremost among these new systems are “hub-and-spoke,” or middle-mile, operations, where companies place distribution centers closer to retail operations for quicker access to inventory. However, faster delivery is just part of the equation. These systems must also be low-cost for consumers.

Autonomous delivery company Gatik seeks to provide lasting solutions for faster and cheaper shipping. By automating the routes between the hub — the distribution center — and the spokes — retail stores — these operations can run around the clock efficiently and with minimal investment.

Gatik co-founder and Chief Engineer Apeksha Kumavat joined NVIDIA’s Katie Burke Washabaugh on the latest episode of the AI Podcast to walk through how the company is developing autonomous trucks for middle-mile delivery.

Kumavat also discussed the progress of commercial pilots with companies such as Walmart and Georgia-Pacific.

She’ll elaborate on Gatik’s autonomous vehicle development in a virtual session at NVIDIA GTC on Tuesday, Sept. 20. Register free to learn more.

You Might Also Like

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game, Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.


Get up to Speed: Five Reasons Not to Miss NVIDIA CEO Jensen Huang’s GTC Keynote Sept. 20

Think fast. Enterprise AI, new gaming technology, the metaverse and the 3D internet, and advanced AI technologies tailored to just about every industry are all coming your way.

NVIDIA founder and CEO Jensen Huang’s keynote at NVIDIA GTC on Tuesday, Sept. 20, is the best way to get ahead of all these trends.

NVIDIA’s virtual technology conference, which takes place Sept. 19-22, sits at the intersection of business and technology, science and the arts in a way no other event can.

This GTC will focus on neural graphics — which bring together AI and visual computing to create stunning new possibilities — the metaverse, an update on large language models, and the changes coming to every industry with the latest generation of recommender systems.

The free online gathering features speakers from every corner of industry, academia and research.

Speakers include Johnson & Johnson CTO Rowena Yao; Boeing Vice President Linda Hapgood; Polestar COO Dennis Nobelius; Deutsche Bank CTO Bernd Leukert; UN Assistant Secretary-General Ahunna Eziakonwa; UC San Diego distinguished professor Henrik Christensen; and hundreds more.

For those who want to get hands-on, GTC features developer sessions for newcomers and veterans alike.

Two-hour training labs are included for those who sign up for a free conference pass. Those who want to dig deeper can sign up for one of 21 full-day virtual hands-on workshops at a special price of $149, or $99 per seat for group purchases of more than five seats.

Finally, GTC offers networking opportunities that bring together people working on the most challenging problems of our time from all over the planet.

Register free and start loading up your calendar with content today.

AI on the Stars: Hyperrealistic Avatars Propel Startup to ‘America’s Got Talent’ Finals

More than 6 million pairs of eyes will be on real-time AI avatar technology in this week’s finale of America’s Got Talent — currently the second-most popular primetime TV show in the U.S.

Metaphysic, a member of the NVIDIA Inception global network of technology startups, is one of 11 acts competing for $1 million and a headline slot in AGT’s Las Vegas show in tonight’s final on NBC. It’s the first AI act to reach the AGT finals.

Called “the best act of the series so far” and “one of the most unique things we’ve ever seen on this show” by notoriously tough judge Simon Cowell, the team’s performances involve a demonstration of photorealistic AI avatars, animated in real time by singers on stage.

In Metaphysic’s semifinals act, three singers — Daniel Emmet, Patrick Dailey and John Riesen — lent their voices to AI avatars of Cowell, fellow judge Howie Mandel and host Terry Crews, performing the opera piece “Nessun Dorma.” For the finale, the team plans to “bring back one of the greatest rock and roll icons of all time,” but it’s keeping the audience guessing.

The AGT winner will be announced on Wednesday, Sept. 14.

“Metaphysic’s history-making run on America’s Got Talent has allowed us to showcase the application of AI on one of the most-watched stages in the world,” said the startup’s co-founder and CEO Tom Graham, who appears on the show alongside co-founder Chris Umé.

(L to R): Daniel Emmet, Tom Graham and Chris Umé presented Metaphysic’s audition for “America’s Got Talent,” featured in episode 1702. (Photo by Trae Patton/NBC, courtesy of Metaphysic.)

“While overall awareness of synthetic media has grown in recent years, Metaphysic’s AGT performances provide a front-row seat into how this technology could impact the future of everything, from the internet to entertainment to education,” he said.

Capturing Imaginations While Raising AI Awareness

Founded in 2021, London-based Metaphysic is developing AI technologies to help creators build virtual identities and synthetic content that is hyperrealistic, moving beyond the so-called uncanny valley.

The team initially went viral last year for DeepTomCruise, a TikTok channel featuring videos where actor Miles Fisher animated an AI avatar of Tom Cruise. The posts garnered around 100 million views and “provided many people with their first introduction to the incredible capabilities of synthetic media,” Graham said.

By bringing its AI avatars to the AGT stage, the company has been able to reach millions more viewers — with sophisticated camera rigs and performers on stage demonstrating how the technology works live and in real time.

AI, GPU Acceleration Behind the Curtain

Metaphysic’s AI avatar software pipeline includes variants of the popular StyleGAN model developed by NVIDIA Research. The team, which uses the TensorFlow deep learning framework, relies on NVIDIA CUDA software to accelerate its work on NVIDIA GPUs.

“Without NVIDIA hardware and software libraries, we wouldn’t be able to pull off these hyperreal results to the level we have,” said Jo Plaete, director of product innovation at Metaphysic. “The computation provided by our NVIDIA hardware platforms allows us to train larger and more complex models at a speed that allows us to iterate on them quickly, which results in those most perfectly tuned results.”

For both AI model development and inference during live performances, Metaphysic uses NVIDIA DGX systems as well as other workstations and data center configurations with NVIDIA GPUs — including NVIDIA A100 Tensor Core GPUs.

“Excellent hardware support has helped us troubleshoot things really fast when in need,” said Plaete. “And having access to the research and engineering teams helps us get a deeper understanding of the tools and how we can leverage them in our pipelines.”

Following AGT, Metaphysic plans to pursue several collaborations in the entertainment industry. The company has also launched a consumer-facing platform, called Every Anyone, that enables users to create their own hyperrealistic AI avatars.

Discover the latest in AI and metaverse technology by registering free for NVIDIA GTC, running online Sept. 19-22. Metaphysic will be part of the panel “AI for VCs: NVIDIA Inception Global Startup Showcase.”

Header photo by Chris Haston/NBC, courtesy of Metaphysic
