Creating Faces of the Future: Build AI Avatars With NVIDIA Omniverse ACE

Developers and teams building avatars and virtual assistants can now register to join the early-access program for NVIDIA Omniverse Avatar Cloud Engine (ACE), a suite of cloud-native AI microservices that make it easier to build and deploy intelligent virtual assistants and digital humans at scale.

Omniverse ACE eases avatar development, delivering the AI building blocks necessary to add intelligence and animation to any avatar, built on virtually any engine and deployed on any cloud. These AI assistants can be designed for organizations across industries, enabling them to enhance existing workflows and unlock new business opportunities.

ACE is one of several generative AI applications that will help creators accelerate the development of 3D worlds and the metaverse. Members who join the program will receive access to the prerelease versions of NVIDIA’s AI microservices, as well as the tooling and documentation needed to develop cloud-native AI workflows for interactive avatar applications.

Bring Interactive AI Avatars to Life With Omniverse ACE

Developing avatars has typically required deep expertise, specialized equipment and manually intensive workflows. To ease avatar creation, Omniverse ACE enables seamless integration of NVIDIA’s AI technologies — including pre-built models, toolsets and domain-specific reference applications — into avatar applications built on most engines and deployed on public or private clouds.

Since it was unveiled in September, Omniverse ACE has been shared with select partners to capture early feedback. Now, NVIDIA is looking for partners who will provide feedback on the microservices, collaborate to improve the product, and push the limits of what’s possible with lifelike, interactive digital humans.

The early-access program includes access to the prerelease versions of ACE animation AI and conversational AI microservices, including:

  • 3D animation AI microservice for third-party avatars, which uses Omniverse Audio2Face generative AI to bring characters in Unreal Engine and other rendering tools to life by creating realistic facial animation from just an audio file.
  • 2D animation AI microservice, called Live Portrait, which enables easy animation of 2D portraits or stylized human faces using live video feeds.
  • Text-to-speech microservice, which uses NVIDIA Riva TTS to synthesize natural-sounding speech from raw transcripts without any additional information, such as patterns or rhythms of speech. (A minimal sketch of calling Riva TTS directly appears after this list.)
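
The ACE microservice endpoints themselves are available only through the early-access program, but the underlying text-to-speech capability can be exercised today against a running Riva server. Below is a minimal sketch using the nvidia-riva-client Python package; the server address, voice name and output file are illustrative assumptions, not details published for ACE.

```python
# Minimal sketch: synthesizing speech with NVIDIA Riva TTS via the
# nvidia-riva-client package. The server URI and voice name are assumptions;
# substitute the values for your own Riva deployment.
import wave

import riva.client

auth = riva.client.Auth(uri="localhost:50051")  # assumed local Riva server
tts = riva.client.SpeechSynthesisService(auth)

response = tts.synthesize(
    text="Hello from an avatar built with Omniverse ACE.",
    voice_name="English-US.Female-1",  # assumed voice name
    language_code="en-US",
    sample_rate_hz=44100,
)

# response.audio holds raw 16-bit PCM samples; wrap them in a WAV container.
with wave.open("tts_output.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)  # 16-bit samples
    out.setframerate(44100)
    out.writeframes(response.audio)
```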

Program members will also get access to tooling, sample reference applications and supporting resources to help get started.

Avatars Make Their Mark Across Industries

Omniverse ACE can help teams build interactive digital humans that elevate experiences across industries, providing:

  • Easy animation of characters, so users can bring them to life with minimal expertise.
  • The ability to deploy on cloud, which means avatars will be usable virtually anywhere, such as a quick-service restaurant kiosk, a tablet or a virtual-reality headset.
  • A plug-and-play suite, built on NVIDIA Unified Compute Framework (UCF), which enables interoperability between NVIDIA AI and other solutions, ensuring state-of-the-art AI that fits each use case.

Partners such as Ready Player Me and Epic Games have experienced how Omniverse ACE can enhance workflows for AI avatars.

The Omniverse ACE animation AI microservice supports 3D characters from Ready Player Me, a platform for building cross-game avatars.

“Digital avatars are becoming a significant part of our daily lives. People are using avatars in games, virtual events and social apps, and even as a way to enter the metaverse,” said Timmu Tõke, CEO and co-founder of Ready Player Me. “We spent seven years building the perfect avatar system, making it easy for developers to integrate in their apps and games and for users to create one avatar to explore various worlds — with NVIDIA Omniverse ACE, teams can now more easily bring these characters to life.”

Epic Games’ advanced MetaHuman technology transformed the creation of realistic, high-fidelity digital humans. Omniverse ACE, combined with the MetaHuman framework, will make it even easier for users to design and deploy engaging 3D avatars.

Digital humans don’t just have to be conversational. They can be singers, as well — just like the AI avatar Toy Jensen. NVIDIA’s creative team quickly created a holiday performance by TJ, using Omniverse ACE to extract the voice of a singer and turn it into TJ’s voice. This enabled the avatar to sing at the same pitch and with the same rhythm as the original artist.

Many creators are venturing into VTubing, a new form of livestreaming in which users embody a 2D avatar and interact with viewers. With Omniverse ACE, creators can move their avatars from 2D animation, including photos and stylized faces, into 3D. Users can render the avatars from the cloud and animate the characters from anywhere.

Additionally, the NVIDIA Tokkio reference application is expanding, with early partners building cloud-native customer service avatars for industries such as telco and banking.

Join the Early-Access Program

Early access to Omniverse ACE is available to developers and teams building avatars and virtual assistants.

Watch the NVIDIA special address at CES on demand. Learn more about NVIDIA Omniverse ACE and register to join the early-access program.

New Year, New Career: 5 Leaders Share Tips for Building a Career in AI

Those looking to join the ranks of AI trailblazers or chart a new course in their careers need look no further.

At NVIDIA’s latest GTC conference, industry leaders in a panel called “5 Paths to a Career in AI” shared tips and insights on how to make a mark in this rapidly evolving field.

Representing diverse sectors such as healthcare, automotive, augmented and virtual reality, climate and energy, and manufacturing, these experts offered valuable advice for all seeking to build a career in AI.

Here are five key takeaways from the discussion:

  1. Be curious and constantly learn: “I think in order to break into this field, you’ve got to be curious. It’s so important to always be learning [and] always be asking questions,” emphasized Chelsea Sumner, healthcare AI startups lead for North and Latin America at NVIDIA. “If we’re not asking questions, and we’re not learning, we’re not growing.”
  2. Tell your story effectively to different audiences: “Your ability to tell your story to a variety of different audiences is essential,” noted Justin Taylor, vice president of AI at Lockheed Martin. “So for them to understand what you’re doing [with AI], how you’re doing it, why you’re doing it is essential.”
  3. Embrace challenges and be resilient: “When you have all of these different experiences, you understand that it’s not always going to be perfect,” advised Laura Leal-Taixé, professor at the Technical University of Munich and principal scientist at Argo AI. “And when things aren’t always perfect, you’re able to have competence because [you know that you] did that really hard thing and was able to get through it.”
  4. Understand the purpose behind your work: “Understand the baseline, how do you collect the data baseline — understand the physical, the bottom line. What’s the purpose, what do you want to do?” advised Jay Lee, Ohio Eminent Scholar at the University of Cincinnati and board member of Foxconn.
  5. Collaborate and seek support from others: “It’s so important for resiliency to find people across different domains and really tap into that,” said Carrie Gotch, who works on creator and content strategy for 3D/AR at Adobe. “No one does it alone, right? You’re always part of a system, part of a team of people.”

The panelists stressed the importance of staying up to date and curious, gaining practical experience, collaborating with others and taking risks when building a career in AI.

Start your journey to an AI career by signing up for NVIDIA GTC, running in March, where you can network, get trained on the latest tools and hear from thought leaders about the impact of AI in various industries.

It could be the first step toward a rewarding AI career that takes you into 2023 and beyond.

Meet the Omnivore: Music Producer Remixes the Holidays With Newfound Passion for 3D Content Creation

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Stephen Tong

Stephen Tong, aka Funky Boy, has always loved music and photography. He’s now transferring the skills developed over the years as a music producer — shooting time lapses, creating audio tracks and more — to a new passion of his: 3D content creation.

Tong began creating 3D renders and animations earlier this year, using the NVIDIA Omniverse platform for building and connecting custom 3D pipelines.

Within just a couple of months of learning to use Omniverse, Tong created a music video with the platform. The video received an honorable mention in the inaugural #MadeInMachinima contest last March, which invited participants to remix popular characters from games like Squad, Mount & Blade II: Bannerlord and MechWarrior 5: Mercenaries using the Omniverse Machinima app.

In September, Tong participated in the first-ever Omniverse developer contest, which he considered the perfect way to learn about extending the platform and coding with the popular Python programming language. He submitted three Omniverse extensions — core building blocks that let anyone create and extend functions of Omniverse apps — aimed at easing creative workflows like his own.

Ringing in the Season the Omniverse Way

The artist also took part in the #WinterArtChallenge this month from NVIDIA Studio, a creative community and platform of NVIDIA RTX and AI-accelerated creator apps. Creatives from around the world shared winter-themed art on social media using the hashtag.

Tong said his scene was inspired by cozy settings he often associates with the holidays.

First, the artist used AI to generate a mood board. Once satisfied with the warm, cozy mood, he modeled a winter chalet — complete with a snowman, wreaths and sleigh — using the Marbles RTX assets, free to use in the Omniverse Launcher, as well as some models from Sketchfab.

Tong collected the assets in Unreal Engine before rendering the 3D scene using the Omniverse Create and Blender apps. The Universal Scene Description (USD) framework allowed him to bring the work from these various applications together.

“USD enables large scenes to be loaded fast and with ease,” he said. “The system of layers makes Omniverse a powerful tool for collaboration and iterations.”
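
That layered USD workflow can be sketched in a few lines of OpenUSD Python. The layer file names below are hypothetical stand-ins for the Blender, Unreal Engine and Omniverse Create exports described above; the idea is simply that each application’s output becomes a sublayer of one shared stage.

```python
# Minimal sketch: composing exports from several apps as USD sublayers.
# All file names are hypothetical; any .usd/.usda layers exported from
# Blender, Unreal Engine or Omniverse Create compose the same way.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("winter_chalet.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

root_layer = stage.GetRootLayer()
# Earlier (stronger) sublayers override opinions from later (weaker) ones,
# so a lighting or set-dressing pass can be iterated without touching assets.
root_layer.subLayerPaths.append("lighting_overrides.usda")   # Omniverse Create pass
root_layer.subLayerPaths.append("chalet_set_dressing.usda")  # Blender export
root_layer.subLayerPaths.append("marbles_rtx_assets.usda")   # base asset library

root_layer.Save()
```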

With his festive creativity on a roll, Tong also orchestrated an animated quartet lip-syncing to “Carol of the Bells” using Omniverse Audio2Face, an AI app that quickly and easily generates expressive facial animations from just an audio source, as well as the DaVinci Resolve application for video editing.

Watch to keep up the holiday spirit:

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

To hear the latest made possible by accelerated computing, AI and Omniverse, watch NVIDIA’s special address at CES on Tuesday, Jan. 3, at 8 a.m. PT.

Check out more artwork from Tong and other “Omnivores” in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

NVIDIA to Reveal Consumer, Creative, Auto, Robotics Innovations at CES

NVIDIA executives will share some of the company’s latest innovations Tuesday, Jan. 3, at 8 a.m. Pacific time ahead of this year’s CES trade show in Las Vegas.

Jeff Fisher, senior vice president for gaming products, will be joined by Deepu Talla, vice president of embedded and edge computing, Stephanie Johnson, vice president of consumer marketing, and Ali Kani, vice president of automotive, for a special address that you won’t want to miss.

During the event, which will be streamed on nvidia.com, the NVIDIA YouTube and Twitch channels, as well as on the GeForce YouTube channel, the executives will reveal exciting gaming, creative, automotive and robotics announcements.

The broadcast is a unique opportunity to get a sneak peek at the future of technology and see what NVIDIA has in store for the coming year.

Don’t miss out on this special address from some of the top executives in the industry.

Tune in on Jan. 3 to get a first look at what’s in store for the future of technology.

Now Hear This: Top Five AI Podcasts of 2022

One of tech’s top talk shows, the NVIDIA AI Podcast has attracted more than 3.6 million listens to date from folks who want to hear the latest in machine learning.

Its 180+ installments so far have included interviews with luminaries like Kai-Fu Lee and explored how AI is advancing everything from monitoring endangered rhinos to analyzing images from the James Webb Space Telescope.

Here’s a sampler of the most-played episodes in 2022:

Waabi CEO Raquel Urtasun on Using AI, Simulation to Teach Autonomous Vehicles to Drive

A renowned expert in machine learning, Urtasun discusses her current work at Waabi using simulation technology to teach trucks how to drive. Urtasun is a professor of computer science at the University of Toronto and the former chief scientist and head of R&D for Uber’s advanced technology group.

What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains

Automated chatbots ain’t what they used to be — they’re getting a whole lot better, thanks to advances in conversational AI. Entrepreneur, educator and author Jason Mars breaks down the latest techniques giving AI a voice.

Exaggeration Detector Could Lead to More Accurate Health Science Journalism

Dustin Wright, a researcher at the University of Copenhagen, used NVIDIA GPUs to create an “exaggeration detection system.” He pointed it at hyperbole in health science news and explained to the AI Podcast how it works.

Fusing Art and Tech: MORF Gallery CEO Scott Birnbaum on Digital Paintings, NFTs and More

Silicon Valley startup MORF Gallery showcases artists who create with AI, robots and visual effects. Its CEO provides a virtual tour of what’s happening in digital art — including a plug-in device that can turn any TV into an art gallery.

‘AI Dungeon’ Creator Nick Walton Uses AI to Generate Infinite Gaming Storylines

What started as Nick Walton’s college hackathon project grew into “AI Dungeon,” a game with more than 1.5 million users. Now he’s co-founder and CEO of Latitude, a startup using AI to spawn storylines for games.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

These 6 NVIDIA Jetson Users Win Big at CES in Las Vegas

Six companies with innovative products built using the NVIDIA Jetson edge AI platform will leave CES, one of the world’s largest consumer technology trade shows, as big winners next week.

The CES Innovation Awards each year honor outstanding design and engineering in more than two dozen categories of consumer technology products. The companies to be awarded for their Jetson-enabled products at the conference, which runs Jan. 5-8 in Las Vegas, include:

  • John Deere: Best of Innovation awardee in the robotics category and honoree in the vehicle tech and advanced mobility category for its fully autonomous tractor, which uses GPS guidance, cameras, sensors and AI to perform essential tasks on the farm without an operator inside the cab.
  • AGRIST: Honoree for its robot that automatically harvests bell peppers. The smart agriculture company will be at CES booth 62201.
  • Skydio: Honoree for its Scout drone, which an operator can fly at a set distance and height using the Skydio Enterprise Controller or the Skydio Beacon while on the move, and without having to manually operate the drone. Skydio, at booth 18541 in Central Hall, is a member of NVIDIA Inception, a free, global program for cutting-edge startups.
  • GlüxKind: Honoree for GlüxKind Ella, an AI-powered intelligent baby stroller that offers advanced safety and convenience for busy parents. The NVIDIA Inception member will be at CES booth 61710.
  • Neubility: Honoree for its self-driving delivery robot, Neubie, a cost-effective and sustainable alternative for delivery needs that can help alleviate traffic congestion in urban areas. The NVIDIA Inception member will be at Samsung Electronics C-LAB’s booth 61032 in Venetian Hall.
  • Seoul Robotics: Honoree for its Level 5 Control Tower, which can turn standard vehicles into self-driving cars through a mesh network of sensors and computers installed on infrastructure. The NVIDIA Inception member will be at CES booth 5408.

Also, NVIDIA Inception members and Jetson ecosystem partners, including DriveU, Ecotron, Infineon, Leopard Imaging, Orbbec, Quest Global, Slamcore, Telit, VVDN, Zvision and others, will be at CES, with many announcing systems and demonstrating applications based on the Jetson Orin platform.

Deepu Talla, vice president of embedded and edge computing at NVIDIA, will join a panel discussion, “The Journey to Autonomous Operations,” on Friday, Jan. 6, at 12:30 p.m. PT, at the Accenture Innovation Hub in ballroom F of the Venetian Expo.

And tune in to NVIDIA’s virtual special address at CES on Tuesday, Jan. 3, at 8 a.m. PT, to hear the latest in accelerated computing. NVIDIA executives will unveil products, partnerships and offerings in autonomous machines, robotics, design, simulation and more.

3D Artist Zhelong Xu Revives Chinese Relics This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also diving deep into new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Artist Zhelong Xu, aka Uncle Light, brought to life Blood Moon — a 3D masterpiece combining imagination, craftsmanship and art styles from the Chinese Bronze Age — along with Kirin, a symbol of hope and good fortune, using NVIDIA technologies.

Also this week In the NVIDIA Studio, the #WinterArtChallenge is coming to a close. Enter by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on NVIDIA Studio’s social media channels. Be sure to tag #WinterArtChallenge to join.

 

Ring in the season and check out the NVIDIA RTX Winter World in Minecraft — now available in the NVIDIA Omniverse Launcher. Download today to use it in your #WinterArtChallenge scenes.

Tune in to NVIDIA’s special address at CES on Tuesday, Jan. 3, at 8 a.m. PT, when we’ll share the latest innovations made possible by accelerated computing and AI.

Dare to Dragon

Xu is a veteran digital artist who has worked at top game studio Tencent, made key contributions to the third season of Netflix’s Love, Death & Robots, and won the ZBrush 2018 Sculpt of the Year award. He carries massive influence in China’s 3D community, and the country’s traditional culture is an inexhaustible source of inspiration for the artist.

“Ancient Chinese artisans have created countless unique, aesthetic systems over time that are completely different from Western art,” said Xu. “My dream is to use modern means to reinterpret Chinese culture and aesthetics as I understand them.”

Blood Moon is a tribute to the lost Shu civilization, which existed from 2,800 B.C. to 1,100 B.C. The work demonstrates the creative power of ancient China. During a trip to the Sanxingdui Museum in the Sichuan province, where many relics from this era are housed, Xu became inspired by the mysterious, ancient Shu civilization.

The artist spent around 10 minutes sketching in the Procreate app, looking to capture the general direction and soul of the piece. This conceptual stage is important so that the heart of the artwork doesn’t get lost once 3D is applied, Xu said.

Sketching in Procreate.

He then began sculpting in Maxon’s ZBrush, which is his preferred tool as he says it contains the most convenient sculpting features.

Advanced sculpting in ZBrush.

Next, Xu used Adobe Substance 3D Painter to apply colors and textures directly to 3D models. NVIDIA RTX-accelerated light- and ambient-occlusion features baked and optimized scene assets in mere seconds, giving Xu the option to experiment with visual aesthetics quickly and easily.

Layers baked in Adobe Substance 3D Painter.

NVIDIA Iray technology in the viewport enabled Xu to edit interactively and use ray-traced baking for faster rendering speeds — all accelerated by his GeForce RTX 4090 GPU.

“The RTX 4090 GPU always gives me reliable performance and smooth interaction; plus, the Iray renderer delivers unbiased rendering,” Xu said.

Textures and materials applied in Adobe Substance 3D Painter.

Xu used the Universal Scene Description file framework to export the scene from Blender into the Omniverse Create app, where he used the advanced RTX Renderer, with path tracing, global illumination, reflections and refractions, to create incredibly realistic visuals.

Xu used the Blender USD branch to export the scene into Omniverse Create.
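
For reference, Blender’s built-in USD exporter can produce such a file from a short script. The sketch below is illustrative only: the output path is hypothetical, and the exact operator options differ between stock Blender releases and the USD branch Xu used.

```python
# Minimal sketch: exporting the current Blender scene to USD so it can be
# opened in Omniverse Create. Path and options are illustrative assumptions.
import bpy

bpy.ops.wm.usd_export(
    filepath="/tmp/blood_moon.usd",
    export_materials=True,   # carry material assignments across
    export_animation=False,  # static sculpt, no animation needed
)
```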

NVIDIA Omniverse — a platform for creating and operating metaverse applications — was incredibly useful for scene modifications, Xu said, as it enabled him to test lighting scenarios with his scene rendering in real time. This gave him an accurate preview of the final render, allowing for more meaningful edits in the moment, he said.

 

Further edits included adding fog and volume effects, easily applied in Omniverse Create.

Fog and volume effects applied in Omniverse Create.

Omniverse gives 3D artists their choice of renderer within the viewport, with support for Pixar HD Storm, Chaos V-Ray, Maxon’s Redshift, OTOY Octane, Blender Cycles and more. Xu deployed the unbiased NVIDIA Iray renderer to complete the project.

Xu selected the RTX Iray renderer for final renders.

“Omniverse is already an indispensable part of my work,” Xu added.

The artist demonstrated this in another history-inspired piece, Kirin, built in Omniverse Create.

‘Kirin’ by Zhelong Xu.

“Kirin, or Qilin, is always a symbol of hope and good fortune in China, but there are few realistic works in the traditional culture,” said Xu.

He wanted to create a Kirin, a legendary hooved creature in Chinese mythology, with a body structure in line with Western fine art and anatomy, as well as with a sense of peace and the wisdom of silence based on Chinese culture.

“It is not scary,” said Xu. “Instead, it is a creature of great power and majesty.”

Kirin is decorated with jade-like cloud patterns, symbolizing the intersection of tradition and modernity, something the artist wanted to express and explore. Clouds and fog are difficult to depict in solid sculpture, though they are often carved in classical Chinese sculpture. These were easily brought to life in Xu’s 3D artwork.

‘Kirin’ resembles a cross between a dragon and a horse, with the body of a deer and the tail of an ox.

Check out Zhelong Xu’s website for more inspirational artwork.

3D artist Zhelong Xu.

For the latest creative app updates, download the monthly NVIDIA Studio Driver.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

11 Essential Explainers to Keep You in the Know in 2023

The NVIDIA corporate blog has long been a go-to source for information on the latest developments in AI and accelerated computing.

The blog’s series of “explainers” are among our most-read posts, offering a quick way to catch up on the newest technologies.

In this post, we’ve rounded up 11 of the most popular explainers from the blog, providing a beginner’s guide to understanding the concepts and applications of these cutting-edge technologies.

From AI models to quantum computing, these explainers are a must-read for anyone looking to stay informed on the latest tech developments in 2023.

  1. “What Is a Pretrained AI Model?” – This post covers the basics of pretrained AI models, including how they work and why they’re useful.
  2. “What Is Denoising?” – This piece explains denoising and its use in image and signal processing.
  3. “What Are Graph Neural Networks?” – This article introduces graph neural networks, including how they work and are used in various applications.
  4. “What Is Green Computing?” – This post explains the basics of green computing, including why it’s important and how it can be achieved.
  5. “What Is Direct and Indirect Lighting?” – This piece covers the differences between direct and indirect lighting in computer graphics, and how they’re used in different applications.
  6. “What Is a QPU?” – This blog introduces the quantum processing unit, including what it is and how it’s used in quantum computing.
  7. “What Is an Exaflop?” – This article explains what an exaflop is and why it’s an important measure of computational power.
  8. “What Is Zero Trust?” – This post covers the basics of zero trust, including what it is and how it can improve network security.
  9. “What Is Extended Reality?” – This piece provides an overview of extended reality — the umbrella term for virtual, augmented and mixed reality — including what it is and how it’s used in different applications.
  10. “What Is a Transformer Model?” – This blog explains what transformer models are and how they’re used in AI.
  11. “What Is Path Tracing?” – This article covers the basics of path tracing, including how it works and why it’s important for creating realistic computer graphics. It provides examples of its applications in different fields.

Let us know in the comments section below which AI and accelerated computing concepts you’d like explained next on our blog. We’re always looking for suggestions and feedback. 

 

Top Food Stories From 2022: Meet 4 Startups Putting AI on the Plate

This holiday season, feast on the bounty of food-themed stories NVIDIA Blog readers gobbled up in 2022.

Startups in the retail industry — and particularly in quick-service restaurants — are using NVIDIA AI and robotics technology to make it easier to order food in drive-thrus, find beverages on store shelves and have meals delivered. They’re accelerated by NVIDIA Inception, a program that offers go-to-market support, expertise and technology for cutting-edge startups.

For those who prefer eye candy, artists also recreated a ramen restaurant using the NVIDIA Omniverse platform for creating and operating metaverse applications.

AI’ll Take Your Order: Conversational AI at the Drive-Thru

Image credit: Jonathan Borba via Unsplash.

Toronto startup HuEx is developing a conversational AI assistant to handle order requests at the drive-thru speaker box. The real-time voice service, which runs on the NVIDIA Jetson edge AI platform, transcribes voice orders to text for staff members to fulfill.

The technology, integrated with the existing drive-thru headset system, allows for team members to hear the orders and jump in to assist if needed. It’s in pilot tests to help support service at popular Canadian fast-service chains.

Hungry for AI: Automated Ordering Addresses Restaurant Labor Crunch

San Diego-based startup Vistry is tackling a growing labor shortage among quick-service restaurants with an AI-enabled, automated order-taking solution. The system, built with the NVIDIA Riva software development kit, uses natural language processing for menu understanding and speech — plus recommendation systems to enable faster, more accurate order-taking and more relevant, personalized offers.

Vistry is also using the NVIDIA Metropolis application framework to create computer vision applications that can help automate curbside check-ins, speed up drive-thrus and predict the time it takes to prepare a customer’s order. Its tools are powered by NVIDIA Jetson and NVIDIA A2 Tensor Core GPUs.

Dinner Delivered: Cartken Deploys Sidewalk Robots

Oakland-based startup Cartken is deploying NVIDIA Jetson-enabled sidewalk robots for last-mile deliveries of coffee and meals. Its autonomous mobile robot technology is used to deliver Grubhub orders to students at the University of Arizona and Ohio State — and Starbucks goods in malls in Japan.

The Inception member relies on the NVIDIA Jetson AGX Orin module to run six cameras that aid in simultaneous localization and mapping, navigation, and wheel odometry.

AI’s Never Been More Convenient: Restocking Robot Rolls Out in Japan

TX SCARA robot restocks drinks.

Telexistence, an Inception startup based in Tokyo, is deploying hundreds of NVIDIA AI-powered robots to restock shelves at FamilyMart, a leading Japanese convenience store chain. The robots handle repetitive tasks like refilling beverage displays, which frees up retail staff to interact with customers.

For AI model training, the team relied on NVIDIA DGX systems. The robot uses the NVIDIA Jetson AGX Xavier for AI processing at the edge, and the NVIDIA Jetson TX2 module to transmit video-streaming data.

Bonus: Savor a Virtual Bowlful of NVIDIA Omniverse

NVIDIA technology isn’t just accelerating food-related applications for the restaurant industry — it’s also powering tantalizing virtual scenes complete with mouth-watering, calorie-free dishes.

Two dozen NVIDIA artists and freelancers around the globe showcased the capabilities of NVIDIA Omniverse by recreating a Tokyo ramen shop in delicious detail — including simmering pots of noodles, steaming dumplings and bottled drinks.

The scene, created to highlight NVIDIA RTX-powered real-time rendering and physics simulation capabilities, consists of more than 22 million triangles, 350 unique textured models and 3,000 4K-resolution texture maps.

Toy Jensen Rings in Holidays With AI-Powered ‘Jingle Bells’

In a moment of pure serendipity, Lah Yileh Lee and Xinting Lee, a pair of talented singers who often stream their performances online, found themselves performing in a public square in Taipei when NVIDIA founder and CEO Jensen Huang happened upon them.

Huang couldn’t resist joining in, cheering on their serenade as they recorded Lady Gaga’s “Always Remember Us This Way.”

The resulting video quickly went viral, as did a follow-up video from the pair, who sang Lady Gaga’s “Hold My Hand,” the song Huang originally requested.

Toy Jensen Created Using NVIDIA Omniverse Avatar Cloud Engine

Now, with the help of his AI-driven avatar, Toy Jensen, Huang has come up with a playful holiday-themed response.

NVIDIA’s creative team quickly developed a holiday performance by TJ, a tech demo showcasing core technologies that are part of the NVIDIA Omniverse Avatar Cloud Engine, or ACE, platform.

Omniverse ACE is a collection of cloud-native AI microservices and workflows for developers to easily build, customize and deploy engaging and interactive avatars.

Current avatar development requires expertise, specialized equipment and manually intensive workflows. Omniverse ACE eases this by building on the Omniverse platform and NVIDIA’s Unified Compute Framework, or UCF, which makes it possible to quickly create and configure AI pipelines with minimal coding.

“It’s a really amazing technology, and the fact that we can do this is phenomenal,” said Cyrus Hogg, an NVIDIA technical program manager.

To make it happen, NVIDIA’s team used a recently developed voice conversion model to extract the voice of a professional singer from a provided sample and convert it into TJ’s voice, which was originally created by training on hours of real-world recordings. They then took the musical notes from that sample and applied them to TJ’s digital voice, making the avatar sing the same notes with the same rhythm as the original singer.

NVIDIA Omniverse Generative AI – Audio2Face, Audio2Gesture Enable Realistic Facial Expressions, Body Movements

Then the team used NVIDIA Omniverse ACE along with Omniverse Audio2Face and Audio2Gesture technologies to generate realistic facial expressions and body movements for the animated performance based on TJ’s audio alone.

While the team behind Omniverse ACE technologies spent years developing and fine-tuning the technology showcased in the performance, turning the music track they created into a polished video took just hours.

Toy Jensen Delights Fans With ‘Jingle Bells’ Performance

That gave them plenty of time to ensure an amazing performance.

They even collaborated with Jochem van der Saag, a composer and producer who has worked with Michael Bublé and David Foster, to create the perfect backing track for TJ to sing along to.

“We have van der Saag composing the song, and he’s gonna also orchestrate it for us,” said Hogg. “So that’s a really great addition to the team. And we’re really excited to have him on board.”

ACE Could Revolutionize Virtual Experiences

The result is the perfect showcase for NVIDIA Omniverse ACE and the applications it could have in various industries — for virtual events, online education and customer service, as well as in creating personalized avatars for video games, social media and virtual reality experiences. NVIDIA Omniverse ACE will be available soon to early-access partners.
