NVIDIA Advances Simulation for Intelligent Robots With Major Updates to Isaac Sim

Demand for intelligent robots is growing as more industries embrace automation to address supply chain challenges and labor force shortages.

The installed base of industrial and commercial robots will grow more than 6.4x — from 3.1 million in 2020 to 20 million in 2030, according to ABI Research. Developing, validating and deploying these new AI-based robots requires simulation technology that places them in realistic scenarios.

At CES, NVIDIA announced major updates to Isaac Sim, its robotics simulation tool for building and testing virtual robots in realistic environments across varied operating conditions. Now accessible from the cloud, Isaac Sim is built on NVIDIA Omniverse, a platform for creating and operating metaverse applications.

Powerful AI-Driven Capabilities for Roboticists 

With humans increasingly working side by side with collaborative robots (cobots) or autonomous mobile robots (AMRs), it’s critical that people and their common behaviors are added to simulations.

Isaac Sim’s new people simulation capability allows human characters to be added to a warehouse or manufacturing facility and tasked with executing familiar behaviors — like stacking packages or pushing carts. Many of the most common behaviors are already supported, so simulating them is as simple as issuing a command.
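
To give a feel for issuing those commands, here is a small sketch in the spirit of the omni.anim.people extension, which drives characters from a plain-text command script. The character names, coordinates and exact command vocabulary below are illustrative assumptions that vary by Isaac Sim release:

```python
# A minimal sketch (assumed workflow): writing a command script for Isaac Sim's
# people simulation. Commands like GoTo/Idle/LookAround follow the
# omni.anim.people convention; verify the exact syntax against your release.
commands = [
    "Character_01 GoTo 10 4 0",    # walk to an (x, y, z) point in the warehouse
    "Character_01 Idle 5",         # pause in place for five seconds
    "Character_02 LookAround 10",  # scan the surroundings for ten seconds
]
with open("people_commands.txt", "w") as f:
    f.write("\n".join(commands))
```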

To minimize the difference between results observed in a simulated world versus those seen in the real world, it’s imperative to have physically accurate sensor models.

Using NVIDIA RTX technology, Isaac Sim can now render physically accurate data from sensors in real time. In the case of an RTX-simulated lidar, ray tracing provides more accurate sensor data under various lighting conditions or in response to reflective materials.
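
As an illustration, an RTX lidar can be spawned from Isaac Sim’s Python scripting layer. This is a minimal sketch assuming the IsaacSensorCreateRtxLidar command and the bundled Example_Rotary configuration found in recent releases; verify both names against your installed version:

```python
# Minimal sketch: create an RTX-simulated lidar inside Isaac Sim's Python
# environment. The command and config names are assumptions based on the
# 2022.2-era API; check them against your installed version.
import omni.kit.commands
from pxr import Gf

_, sensor = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",      # assumed Isaac Sim command name
    path="/World/rtx_lidar",
    parent=None,
    config="Example_Rotary",          # assumed built-in rotary lidar profile
    translation=(0.0, 0.0, 1.0),      # mount the sensor 1 m above the origin
    orientation=Gf.Quatd(1.0, 0.0, 0.0, 0.0),
)
```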

Isaac Sim also provides numerous new simulation-ready 3D assets, which are critical to building physically accurate simulated environments. Everything from warehouse parts to popular robots comes ready to go, so developers and users can quickly start building.

Significant new capabilities for robotics researchers include advances in Isaac Gym for reinforcement learning and Isaac Cortex for collaborative robot programming. Additionally, a new tool, Isaac ORBIT, provides simulation operating environments and benchmarks for robot learning and motion planning.

For the large community of Robot Operating System (ROS) developers, Isaac Sim upgrades support for ROS 2 Humble and Windows. All of the Isaac ROS software can now be used in simulation.
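
For example, a simulation script typically enables the bridge extension before exchanging ROS 2 topics with a simulated robot. A minimal sketch, assuming the omni.isaac.ros2_bridge extension identifier used by recent releases:

```python
# Minimal sketch: enable the ROS 2 bridge from a running Isaac Sim instance.
# The extension identifier is an assumption based on recent releases; source
# your ROS 2 Humble environment before launching so the bridge libraries load.
from omni.isaac.core.utils.extensions import enable_extension

enable_extension("omni.isaac.ros2_bridge")  # assumed extension name
```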

Expanding Isaac Platform Capabilities and Ecosystem Drives Adoption 

The large and complex robotics ecosystem spans multiple industries, from logistics and manufacturing to retail, energy, sustainable farming and more.

The end-to-end Isaac robotics platform provides advanced AI and simulation software as well as accelerated compute capabilities to the robotics ecosystem. Over a million developers and more than a thousand companies rely on one or more parts of it. This includes many companies that have deployed physical robots developed and tested in the virtual world using Isaac Sim.

Telexistence has deployed beverage restocking robots across 300 convenience stores in Japan. To improve safety, Deutsche Bahn is training AI models to handle important corner cases that occur only rarely in the real world — like luggage falling onto a train track. Sarcos Robotics is developing robots to pick and place solar panels in renewable energy installations.

Festo uses Isaac Cortex to simplify programming for cobots and transfer simulated skills to the physical robots. Fraunhofer is developing advanced AMRs using the physically accurate and full-fidelity visualization features of Isaac Sim. Flexiv is using Isaac Replicator for synthetic data generation to train AI models.

While training robots is important, simulation also plays a critical role in training the human operators who work with and program robots. Ready Robotics is teaching industrial robot programming with Isaac Sim. Universal Robots is using Isaac Sim for workforce development, training end operators from the cloud.

Cloud Access Puts Isaac Platform Within Reach Everywhere

With Isaac Sim available in the cloud, global, multidisciplinary teams working on robotics projects can collaborate with increased accessibility, agility and scalability for testing and training virtual robots.

A lack of adequate training data often hinders deployment when building new facilities with robotics systems or scaling existing autonomous systems. Isaac Sim taps into Isaac Replicator to enable developers to create massive ground-truth datasets that mimic the physics of real-world environments.
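
To make that concrete, here is a minimal sketch of a Replicator script that randomizes a semantically labeled prop and writes ground-truth annotations to disk. The calls reflect the omni.replicator.core API of recent releases and should be treated as assumptions to verify:

```python
# Minimal sketch: generate a labeled synthetic dataset with Omniverse Replicator.
# API names (rep.create, rep.trigger.on_frame, BasicWriter) are based on the
# omni.replicator.core API; verify against your installed version.
import omni.replicator.core as rep

with rep.new_layer():
    # A semantically labeled prop and a camera to observe it.
    crate = rep.create.cube(semantics=[("class", "crate")], position=(0, 0, 0.5))
    camera = rep.create.camera(position=(3, 3, 2), look_at=crate)
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Randomize the prop's pose every frame to diversify the dataset.
    with rep.trigger.on_frame(num_frames=100):
        with crate:
            rep.modify.pose(
                position=rep.distribution.uniform((-2, -2, 0.5), (2, 2, 0.5))
            )

    # Write RGB images and 2D bounding-box labels to disk.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_synthetic_out", rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])
```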

Once deployed, dynamic route planning is required to operate an efficient fleet of hundreds of robots as automation requirements scale. NVIDIA cuOpt, a real-time fleet task-assignment and route-planning engine, improves operational efficiencies with automation.
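
As a rough illustration of the workflow, cuOpt’s Python API takes a travel-cost matrix and a fleet size and returns optimized routes. The sketch below assumes the early cuopt.routing interface; names and signatures may differ across releases:

```python
# Minimal sketch: solve a small vehicle-routing problem with NVIDIA cuOpt.
# Based on the early cuopt.routing Python API; treat names and signatures as
# assumptions and check the cuOpt docs for your release.
import cudf
from cuopt import routing

# Travel costs between 4 locations (row i -> column j); location 0 is the depot.
cost_matrix = cudf.DataFrame(
    [
        [0.0, 5.0, 9.0, 4.0],
        [5.0, 0.0, 3.0, 7.0],
        [9.0, 3.0, 0.0, 6.0],
        [4.0, 7.0, 6.0, 0.0],
    ]
)

n_locations, n_vehicles = 4, 2
data_model = routing.DataModel(n_locations, n_vehicles)
data_model.add_cost_matrix(cost_matrix)

solver_settings = routing.SolverSettings()
solver_settings.set_time_limit(5)  # seconds to search for a solution

solution = routing.Solve(data_model, solver_settings)
print(solution.get_route())  # per-vehicle visit order
```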

Get Started on Isaac Sim 

Download Isaac Sim today.

Watch NVIDIA’s special address at CES, where its executives unveiled products, partnerships and offerings in autonomous machines, robotics, design, simulation and more.

NVIDIA Opens Omniverse Portals With Generative AIs for 3D and RTX Remix

Whether creating realistic digital humans that can express raw emotion or building immersive virtual worlds, those in the design, engineering, creative and other industries across the globe are reaching new heights through 3D workflows.

Animators, creators and developers can use new AI-powered tools to reimagine 3D environments, simulations and the metaverse — the 3D evolution of the internet.

Based on the Universal Scene Description (USD) framework, the NVIDIA Omniverse platform — which enables the development of metaverse applications — is expanding with Blender enhancements and a new suite of experimental generative AI tools for 3D artists.

In a special address at CES, NVIDIA announced these features, as well as Omniverse preinstallation on NVIDIA Studio laptops and thousands of new, free USD assets to help accelerate adoption of 3D workflows.

NVIDIA Studio 3D creators Jeremy Lightcap, Edward McEvenue, Rafi Nizam, Jae Solina, Pekka Varis, Shangyu Wang and Ashley Goldstein collaborate across multiple 3D design tools, time zones and RTX systems with Omniverse.

Plus, a new release for Blender, now available in the Omniverse Launcher, is bringing 3D generative AI capabilities to Blender users everywhere. A new panel lets Blender users easily transfer shape keys and rigged characters. The challenge of reattaching a rigged character’s head can now be solved with a one-button operation from Omniverse Audio2Face — an AI-enabled tool that automatically generates realistic facial expressions from an audio file.

Another new panel for scene optimization lets users create USD scenes within their multi-app 3D workflows more easily and in real time.

In addition, Audio2Face, Audio2Gesture and Audio2Emotion — generative AI tools that enable instant 3D character animation — are getting performance updates that make them easier for developers and creators to integrate into their current 3D pipelines.

Creators can generate facial expressions from an audio file using Audio2Face; realistic emotions ranging from happy and excited to sad and regretful with Audio2Emotion; and realistic upper-body movement using Audio2Gesture. These audio-to-animation tools are game-changers for 3D artists, eliminating the need to perform tedious, manual tasks.

AI-assisted creator tools are expanding to even more communities of creative and technical professionals. When NVIDIA Canvas was introduced, it empowered artists to seamlessly generate landscapes and iterate on them with simple brushstrokes and AI. Coming soon, all RTX users will be able to download an update to Canvas that introduces 360-degree surround images for creating and conceptualizing panoramic environments and beautiful images. The AI ToyBox, which features extensions derived from NVIDIA Research, enables creators to generate 3D meshes from 2D inputs.

Omniverse’s powerful AI tools simplify complex tasks. Creators of all levels can tap into these resources to produce high-quality outputs that meet the growing demands for content and virtual worlds in the metaverse.

“The demand for 3D skills is skyrocketing, but learning 3D can be pretty scary to some, and definitely time consuming,” said Jae Solina, aka JSFilmz. “But these new platform developments not only let creatives and technical professionals continue to work in their favorite 3D tools, but also supercharge their craft and even use AI to assist them in their workflows.”

Omniverse Launcher, the portal to download Omniverse content and reference applications, has also been made available to system builders so they can preinstall it, enabling optimized, out-of-the-box experiences for 3D creators on NVIDIA Studio-validated laptops. GIGABYTE and AORUS laptops launching in 2023 will be the first to ship with Omniverse Launcher preinstalled, expanding platform access to a growing number of 3D content creators.

NVIDIA RTX Remix is a free modding platform, built on Omniverse, that enables modders to quickly create and share #RTXON mods for classic games, each with full ray tracing, enhanced materials, NVIDIA DLSS 3 and NVIDIA Reflex. Its release in early access is coming soon. The jaw-dropping Portal with RTX was built with RTX Remix, and to demonstrate how easy it is for modders to turn RTX ON in their mods, we shared RTX Remix with the original creator of Portal: Prelude, an unofficial Portal prequel released in 2008.

Omniverse users can also access thousands of new, free USD assets, including a USD-based NVIDIA RTX Winter World Minecraft experience, and learn to create their own NVIDIA SimReady assets for complex simulation building. Using Omniverse, creators can supercharge their existing workflows using familiar tools such as Autodesk Maya, Autodesk 3ds Max, Blender, Adobe Substance 3D Painter, and more with AI, simulation tools and real-time RTX-accelerated rendering.

All types of 3D creators can take advantage of these new tools to push the boundaries of 3D simulation and virtual world-building. Users can reimagine digital worlds and animate lifelike characters with new depths of creativity through the bridging of audio-to-animation tools, generative AI and the metaverse.

Latest Omniverse Platform Updates

The latest updates within Omniverse include:

  • Early access for the Unity Omniverse Connector is now available.
  • Blender alpha release, now available in the Omniverse Launcher, enables users to repair geometry, generate automatic UVs and decimate high-resolution CAD data to more usable polycounts.
  • Audio2Face, Audio2Emotion and Audio2Gesture updates better enable instant, realistic animation of characters, now available in Omniverse Audio2Face and Omniverse Machinima.
  • NVIDIA Canvas is coming soon to the Omniverse Launcher with new capabilities that enable the creation of 360-degree landscapes with simple brushstrokes. Users can import the environments into 3D apps to test different settings and lighting.
  • The AI ToyBox, a set of experimental tools built by NVIDIA Research, is now available in the Omniverse Extension Manager. It includes GET3D, an Omniverse extension that generates trainable 3D models from 2D images, letting developers use their own datasets to rapidly create models for 3D virtual worlds.
  • Thousands of new, free 3D assets are now available worldwide for users to build and create within Omniverse.

Watch the NVIDIA special address at CES on demand.

Creators can download NVIDIA Omniverse for free, submit their work to the NVIDIA Omniverse gallery, and find resources through forums, Medium, Twitter, YouTube, Twitch, Instagram and Discord.

Follow NVIDIA Studio on Instagram, Twitter and Facebook and access tutorials — including on Omniverse — on the Studio YouTube channel. Get the latest Studio updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

Creating Faces of the Future: Build AI Avatars With NVIDIA Omniverse ACE

Developers and teams building avatars and virtual assistants can now register to join the early-access program for NVIDIA Omniverse Avatar Cloud Engine (ACE), a suite of cloud-native AI microservices that make it easier to build and deploy intelligent virtual assistants and digital humans at scale.

Omniverse ACE eases avatar development, delivering the AI building blocks necessary to add intelligence and animation to any avatar, built on virtually any engine and deployed on any cloud. These AI assistants can be designed for organizations across industries, enabling them to enhance existing workflows and unlock new business opportunities.

ACE is one of several generative AI applications that will help creators accelerate the development of 3D worlds and the metaverse. Members who join the program will receive access to the prerelease versions of NVIDIA’s AI microservices, as well as the tooling and documentation needed to develop cloud-native AI workflows for interactive avatar applications.

Bring Interactive AI Avatars to Life With Omniverse ACE

Methods for developing avatars often require expertise, specialized equipment and manually intensive workflows. To ease avatar creation, Omniverse ACE enables seamless integration of NVIDIA’s AI technologies — including pre-built models, toolsets and domain-specific reference applications — into avatar applications built on most engines and deployed on public or private clouds.

Since it was unveiled in September, Omniverse ACE has been shared with select partners to capture early feedback. Now, NVIDIA is looking for partners who will provide feedback on the microservices, collaborate to improve the product, and push the limits of what’s possible with lifelike, interactive digital humans.

The early-access program includes access to the prerelease versions of ACE animation AI and conversational AI microservices, including:

  • A 3D animation AI microservice for third-party avatars, which uses Omniverse Audio2Face generative AI to bring characters to life in Unreal Engine and other rendering tools by creating realistic facial animation from just an audio file.
  • A 2D animation AI microservice, called Live Portrait, which enables easy animation of 2D portraits or stylized human faces using live video feeds.
  • A text-to-speech microservice, which uses NVIDIA Riva TTS to synthesize natural-sounding speech from raw transcripts without any additional information, such as patterns or rhythms of speech. A minimal sketch of calling such a service appears after this list.
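
To give a flavor of that last piece, the sketch below synthesizes speech with the riva.client Python bindings. The server address and voice name are placeholder assumptions, and the ACE microservice may expose this capability differently:

```python
# Minimal sketch: synthesize speech with NVIDIA Riva TTS via its Python client
# (nvidia-riva-client). The endpoint and voice name below are placeholder
# assumptions; ACE's managed microservice may wrap this differently.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")  # assumed Riva gRPC endpoint
tts = riva.client.SpeechSynthesisService(auth)

response = tts.synthesize(
    text="Welcome! How can I help you today?",
    voice_name="English-US.Female-1",           # assumed installed voice
    language_code="en-US",
    sample_rate_hz=44100,
)

# response.audio holds raw PCM samples; wrap them in a WAV header for playback.
with open("reply.pcm", "wb") as f:
    f.write(response.audio)
```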

Program members will also get access to tooling, sample reference applications and supporting resources to help get started.

Avatars Make Their Mark Across Industries

Omniverse ACE can help teams build interactive digital humans that elevate experiences across industries, providing:

  • Easy animation of characters, so users can bring them to life with minimal expertise.
  • The ability to deploy on cloud, which means avatars will be usable virtually anywhere, such as a quick-service restaurant kiosk, a tablet or a virtual-reality headset.
  • A plug-and-play suite, built on NVIDIA Unified Compute Framework (UCF), which enables interoperability between NVIDIA AI and other solutions, ensuring state-of-the-art AI that fits each use case.

Partners such as Ready Player Me and Epic Games have experienced how Omniverse ACE can enhance workflows for AI avatars.

The Omniverse ACE animation AI microservice supports 3D characters from Ready Player Me, a platform for building cross-game avatars.

“Digital avatars are becoming a significant part of our daily lives. People are using avatars in games, virtual events and social apps, and even as a way to enter the metaverse,” said Timmu Tõke, CEO and co-founder of Ready Player Me. “We spent seven years building the perfect avatar system, making it easy for developers to integrate in their apps and games and for users to create one avatar to explore various worlds — with NVIDIA Omniverse ACE, teams can now more easily bring these characters to life.”

Epic Games’ advanced MetaHuman technology transformed the creation of realistic, high-fidelity digital humans. Omniverse ACE, combined with the MetaHuman framework, will make it even easier for users to design and deploy engaging 3D avatars.

Digital humans don’t just have to be conversational. They can be singers, as well — just like the AI avatar Toy Jensen. NVIDIA’s creative team quickly created a holiday performance by TJ, using Omniverse ACE to extract the voice of a singer and turn it into TJ’s voice. This enabled the avatar to sing at the same pitch and with the same rhythm as the original artist.

Many creators are venturing into VTubing, a new way of livestreaming in which users embody a 2D avatar and interact with viewers. With Omniverse ACE, creators can move their avatars from 2D animation, including photos and stylized faces, into 3D. Users can render the avatars from the cloud and animate the characters from anywhere.

Additionally, the NVIDIA Tokkio reference application is expanding, with early partners building cloud-native customer service avatars for industries such as telco, banking and more.

Join the Early-Access Program

Early access to Omniverse ACE is available to developers and teams building avatars and virtual assistants.

Watch the NVIDIA special address at CES on demand. Learn more about NVIDIA Omniverse ACE and register to join the early-access program.

New Year, New Career: 5 Leaders Share Tips for Building a Career in AI

Those looking to join the ranks of AI trailblazers or chart a new course in their careers need look no further.

At NVIDIA’s latest GTC conference, industry leaders in a panel called “5 Paths to a Career in AI” shared tips and insights on how to make a mark in this rapidly evolving field.

Representing diverse sectors such as healthcare, automotive, augmented and virtual reality, climate and energy, and manufacturing, these experts offered valuable advice for all seeking to build a career in AI.

Here are five key takeaways from the discussion:

  1. Be curious and constantly learn: “I think in order to break into this field, you’ve got to be curious. It’s so important to always be learning [and] always be asking questions,” emphasized Chelsea Sumner, healthcare AI startups lead for North and Latin America at NVIDIA. “If we’re not asking questions, and we’re not learning, we’re not growing.”
  2. Tell your story effectively to different audiences: “Your ability to tell your story to a variety of different audiences is essential,” noted Justin Taylor, vice president of AI at Lockheed Martin. “So for them to understand what you’re doing [with AI], how you’re doing it, why you’re doing it is essential.”
  3. Embrace challenges and be resilient: “When you have all of these different experiences, you understand that it’s not always going to be perfect,” advised Laura Leal-Taixé, professor at the Technical University of Munich and principal scientist at Argo AI. “And when things aren’t always perfect, you’re able to have competence because [you know that you] did that really hard thing and [were] able to get through it.”
  4. Understand the purpose behind your work: “Understand the baseline, how do you collect the data baseline — understand the physical, the bottom line. What’s the purpose, what do you want to do?” advised Jay Lee, Ohio eminent scholar of the University of Cincinnati and board member of Foxconn.
  5. Collaborate and seek support from others: “It’s so important for resiliency to find people across different domains and really tap into that,” said Carrie Gotch, creator and content strategy for 3D/AR at Adobe. “No one does it alone, right? You’re always part of a system, part of a team of people.”

The panelists stressed the importance of staying up to date and curious, gaining practical experience, collaborating with others and taking risks when building a career in AI.

Start your journey to an AI career by signing up for NVIDIA GTC, running in March, where you can network, get trained on the latest tools and hear from thought leaders about the impact of AI in various industries.

It could be the first step toward a rewarding AI career that takes you into 2023 and beyond.

Meet the Omnivore: Music Producer Remixes the Holidays With Newfound Passion for 3D Content Creation

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Stephen Tong

Stephen Tong, aka Funky Boy, has always loved music and photography. He’s now transferring the skills developed over the years as a music producer — shooting time lapses, creating audio tracks and more — to a new passion of his: 3D content creation.

Tong began creating 3D renders and animations earlier this year, using the NVIDIA Omniverse platform for building and connecting custom 3D pipelines.

Within just a couple months of learning to use Omniverse, Tong created a music video with the platform. The video received an honorable mention in the inaugural #MadeInMachinima contest last March, which invited participants to remix popular characters from games like Squad, Mount & Blade II: Bannerlord and MechWarrior 5: Mercenaries using the Omniverse Machinima app.

In September, Tong participated in the first-ever Omniverse developer contest, which he considered the perfect way to learn about extending the platform and coding with the popular Python programming language. He submitted three Omniverse extensions — core building blocks that let anyone create and extend functions of Omniverse apps — aimed at easing creative workflows like his own.
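
For context, an Omniverse extension boils down to a small Python package with startup and shutdown hooks that any Kit-based app can load. Here is a minimal sketch of the standard omni.ext entry point, with a hypothetical class name:

```python
# Minimal sketch of an Omniverse Kit extension entry point. Every extension
# implements omni.ext.IExt; Kit calls these hooks when the extension is
# enabled or disabled in the Extension Manager.
import omni.ext


class FunkyWorkflowExtension(omni.ext.IExt):  # hypothetical class name
    def on_startup(self, ext_id: str):
        # Create UI, subscribe to stage events and register commands here.
        print(f"[{ext_id}] extension starting up")

    def on_shutdown(self):
        # Release subscriptions and UI so the app stays clean after unload.
        print("extension shutting down")
```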

Ringing in the Season the Omniverse Way

The artist also took part in this month’s #WinterArtChallenge from NVIDIA Studio, a creative community and platform of NVIDIA RTX and AI-accelerated creator apps. Creatives from around the world shared winter-themed art on social media using the hashtag.

Tong said his scene was inspired by cozy settings he often associates with the holidays.

First, the artist used AI to generate a mood board. Once satisfied with the warm, cozy mood, he modeled a winter chalet — complete with a snowman, wreaths and sleigh — using the Marbles RTX assets, free to use in the Omniverse Launcher, as well as some models from Sketchfab.

Tong collected the assets in Unreal Engine before rendering the 3D scene using the Omniverse Create and Blender apps. The Universal Scene Description (USD) framework allowed him to bring the work from these various applications together.

“USD enables large scenes to be loaded fast and with ease,” he said. “The system of layers makes Omniverse a powerful tool for collaboration and iterations.”
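
Tong’s point about layers can be made concrete with USD’s Python API: each artist or app authors opinions in its own sublayer, and stronger layers override weaker ones without destroying their contents. A small sketch using the pxr module, with illustrative file names:

```python
# Minimal sketch of USD's layering model with Pixar's pxr API. Each sublayer
# can come from a different artist or app; layers listed earlier in
# subLayerPaths are stronger and override later ones. File names are
# illustrative.
from pxr import Usd, UsdGeom, Sdf

set_dressing = Sdf.Layer.CreateNew("set_dressing.usda")
lighting = Sdf.Layer.CreateNew("lighting.usda")

root = Sdf.Layer.CreateNew("chalet.usda")
root.subLayerPaths.append("lighting.usda")      # stronger layer
root.subLayerPaths.append("set_dressing.usda")  # weaker layer

stage = Usd.Stage.Open(root)

# Author the snowman in the set-dressing layer; the lighting layer can
# override its attributes later without touching this layer.
stage.SetEditTarget(Usd.EditTarget(set_dressing))
UsdGeom.Xform.Define(stage, "/World/Snowman")

for layer in (set_dressing, lighting, root):
    layer.Save()
```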

With his festive creativity on a roll, Tong also orchestrated an animated quartet lip-syncing to “Carol of the Bells” using Omniverse Audio2Face, an AI app that quickly and easily generates expressive facial animations from just an audio source, as well as the DaVinci Resolve application for video editing.

Watch the video to keep up the holiday spirit.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

To hear the latest made possible by accelerated computing, AI and Omniverse, watch NVIDIA’s special address at CES on Tuesday, Jan. 3, at 8 a.m. PT.

Check out more artwork from Tong and other “Omnivores” in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

NVIDIA to Reveal Consumer, Creative, Auto, Robotics Innovations at CES

NVIDIA executives will share some of the company’s latest innovations Tuesday, Jan. 3, at 8 a.m. Pacific time ahead of this year’s CES trade show in Las Vegas.

Jeff Fisher, senior vice president for gaming products, will be joined by Deepu Talla, vice president of embedded and edge computing, Stephanie Johnson, vice president of consumer marketing, and Ali Kani, vice president of automotive, for a special address that you won’t want to miss.

During the event, which will be streamed on nvidia.com, the NVIDIA YouTube and Twitch channels, as well as on the GeForce YouTube channel, the executives will reveal exciting gaming, creative, automotive and robotics announcements.

The broadcast is a unique opportunity to get a sneak peek at the future of technology and see what NVIDIA has in store for the coming year.

Don’t miss out on this special address from some of the top executives in the industry.

Tune in on Jan. 3 to get a first look at what’s in store for the future of technology.

Now Hear This: Top Five AI Podcasts of 2022

One of tech’s top talk shows, the NVIDIA AI Podcast has attracted more than 3.6 million listens to date from folks who want to hear the latest in machine learning.

Its 180+ installments so far have included interviews with luminaries like Kai-Fu Lee and explored how AI is advancing everything from monitoring endangered rhinos to analyzing images from the James Webb Space Telescope.

Here’s a sampler of the most-played episodes in 2022:

Waabi CEO Raquel Urtasun on Using AI, Simulation to Teach Autonomous Vehicles to Drive

A renowned expert in machine learning, Urtasun discusses her current work at Waabi using simulation technology to teach trucks how to drive. Urtasun is a professor of computer science at the University of Toronto and the former chief scientist and head of R&D for Uber’s advanced technology group.

What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains

Automated chatbots ain’t what they used to be — they’re getting a whole lot better, thanks to advances in conversational AI. Entrepreneur, educator and author Jason Mars breaks down the latest techniques giving AI a voice.

Exaggeration Detector Could Lead to More Accurate Health Science Journalism

Dustin Wright, a researcher at the University of Copenhagen, used NVIDIA GPUs to create an “exaggeration detection system.” He pointed it at hyperbole in health science news and explained to the AI Podcast how it works.

Fusing Art and Tech: MORF Gallery CEO Scott Birnbaum on Digital Paintings, NFTs and More

Silicon Valley startup MORF Gallery showcases artists who create with AI, robots and visual effects. Its CEO provides a virtual tour of what’s happening in digital art — including a plug-in device that can turn any TV into an art gallery.

‘AI Dungeon’ Creator Nick Walton Uses AI to Generate Infinite Gaming Storylines

What started as Nick Walton’s college hackathon project grew into “AI Dungeon,” a game with more than 1.5 million users. Now he’s co-founder and CEO of Latitude, a startup using AI to spawn storylines for games.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

These 6 NVIDIA Jetson Users Win Big at CES in Las Vegas

Six companies with innovative products built using the NVIDIA Jetson edge AI platform will leave CES, one of the world’s largest consumer technology trade shows, as big winners next week.

The CES Innovation Awards each year honor outstanding design and engineering in more than two dozen categories of consumer technology products. The companies to be awarded for their Jetson-enabled products at the conference, which runs Jan. 5-8 in Las Vegas, include:

  • John Deere: Best of Innovation awardee in the robotics category and honoree in the vehicle tech and advanced mobility category for its fully autonomous tractor. The tractor uses GPS guidance, cameras, sensors and AI to perform essential tasks on the farm without an operator inside the cab.
  • AGRIST: Honoree for its robot that automatically harvests bell peppers. The smart agriculture company will be at CES booth 62201.
  • Skydio: Honoree for its Scout drone, which an operator can fly at a set distance and height using the Skydio Enterprise Controller or the Skydio Beacon while on the move, and without having to manually operate the drone. Skydio, at booth 18541 in Central Hall, is a member of NVIDIA Inception, a free, global program for cutting-edge startups.
  • GlüxKind: Honoree for GlüxKind Ella, an AI-powered intelligent baby stroller that offers advanced safety and convenience for busy parents. The NVIDIA Inception member will be at CES booth 61710.
  • Neubility: Honoree for its self-driving delivery robot, Neubie, a cost-effective and sustainable alternative for delivery needs that can help alleviate traffic congestion in urban areas. The NVIDIA Inception member will be at Samsung Electronics C-LAB’s booth 61032 in Venetian Hall.
  • Seoul Robotics: Honoree for its Level 5 Control Tower, which can turn standard vehicles into self-driving cars through a mesh network of sensors and computers installed on infrastructure. The NVIDIA Inception member will be at CES booth 5408.

Also, NVIDIA Inception members and Jetson ecosystem partners, including DriveU, Ecotron, Infineon, Leopard Imaging, Orbbec, Quest Global, Slamcore, Telit, VVDN, Zvision and others, will be at CES, with many announcing systems and demonstrating applications based on the Jetson Orin platform.

Deepu Talla, vice president of embedded and edge computing at NVIDIA, will join a panel discussion, “The Journey to Autonomous Operations,” on Friday, Jan. 6, at 12:30 p.m. PT, at the Accenture Innovation Hub in ballroom F of the Venetian Expo.

And tune in to NVIDIA’s virtual special address at CES on Tuesday, Jan. 3, at 8 a.m. PT, to hear the latest in accelerated computing. NVIDIA executives will unveil products, partnerships and offerings in autonomous machines, robotics, design, simulation and more.

3D Artist Zhelong Xu Revives Chinese Relics This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Artist Zhelong Xu, aka Uncle Light, brought to life Blood Moon — a 3D masterpiece combining imagination, craftsmanship and art styles from the Chinese Bronze Age — along with Kirin, a symbol of hope and good fortune, using NVIDIA technologies.

Also this week In the NVIDIA Studio, the #WinterArtChallenge is coming to a close. Enter by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on NVIDIA Studio’s social media channels. Be sure to tag #WinterArtChallenge to join.

Ring in the season and check out the NVIDIA RTX Winter World in Minecraft — now available in the NVIDIA Omniverse Launcher. Download today to use it in your #WinterArtChallenge scenes.

Tune in to NVIDIA’s special address at CES on Tuesday, Jan. 3, at 8 a.m. PT, when we’ll share the latest innovations made possible by accelerated computing and AI.

Dare to Dragon

Xu is a veteran digital artist who has worked at top game studio Tencent, made key contributions to the third season of Netflix’s Love, Death & Robots, and won the ZBrush 2018 Sculpt of the Year award. He carries massive influence in the 3D community in China, and the country’s traditional culture is an inexhaustible source of inspiration for the artist.

“Ancient Chinese artisans have created countless unique, aesthetic systems over time that are completely different from Western art,” said Xu. “My dream is to use modern means to reinterpret Chinese culture and aesthetics as I understand them.”

Blood Moon is a tribute to the lost Shu civilization, which existed from 2800 B.C. to 1100 B.C. The work demonstrates the creative power of ancient China. During a trip to the Sanxingdui Museum in Sichuan province, where many relics from this era are housed, Xu became inspired by the mysterious, ancient Shu civilization.

The artist spent around 10 minutes sketching in the Procreate app, looking to capture the general direction and soul of the piece. This conceptual stage is important so that the heart of the artwork doesn’t get lost once 3D is applied, Xu said.

Sketching in Procreate.

He then began sculpting in Maxon’s ZBrush, which is his preferred tool as he says it contains the most convenient sculpting features.

Advanced sculpting in ZBrush.

Next, Xu used Adobe Substance 3D Painter to apply colors and textures directly to 3D models. NVIDIA RTX-accelerated light- and ambient-occlusion features baked and optimized scene assets in mere seconds, giving Xu the option to experiment with visual aesthetics quickly and easily.

Layers baked in Adobe Substance 3D Painter.

NVIDIA Iray technology in the viewport enabled Xu to edit interactively and use ray-traced baking for faster rendering speeds — all accelerated by his GeForce RTX 4090 GPU.

“The RTX 4090 GPU always gives me reliable performance and smooth interaction; plus, the Iray renderer delivers unbiased rendering,” Xu said.

Textures and materials applied in Adobe Substance 3D Painter.

Xu used the Universal Scene Description (USD) framework to export the scene from Blender into the Omniverse Create app, where he used the advanced RTX Renderer, with path tracing, global illumination, reflections and refractions, to create incredibly realistic visuals.

Xu used the Blender USD branch to export the scene into Omniverse Create.

NVIDIA Omniverse — a platform for creating and operating metaverse applications — was incredibly useful for scene modifications, Xu said, as it enabled him to test lighting scenarios with his scene rendering in real time. This provided Xu with the most accurate iteration of final renders, allowing for more meaningful edits in the moment, he said.

Further edits included adding fog and volume effects, easily applied in Omniverse Create.

Fog and volume effects applied in Omniverse Create.

Omniverse gives 3D artists their choice of renderer within the viewport, with support for Pixar HD Storm, Chaos V-Ray, Maxon’s Redshift, OTOY Octane, Blender Cycles and more. Xu deployed the unbiased NVIDIA Iray renderer to complete the project.

Xu selected the RTX Iray renderer for final renders.

“Omniverse is already an indispensable part of my work,” Xu added.

The artist demonstrated this in another history-inspired piece, Kirin, built in Omniverse Create.

‘Kirin’ by Zhelong Xu.

“Kirin, or Qilin, is always a symbol of hope and good fortune in China, but there are few realistic works in the traditional culture,” said Xu.

He wanted to create a Kirin, a legendary hooved creature in Chinese mythology, with a body structure in line with Western fine art and anatomy, as well as with a sense of peace and the wisdom of silence based on Chinese culture.

“It is not scary,” said Xu. “Instead, it is a creature of great power and majesty.”

Kirin is decorated with jade-like cloud patterns, symbolizing the intersection of tradition and modernity, something the artist wanted to express and explore. Clouds and fog are difficult to depict in solid sculpture, though they are often carved in classical Chinese sculpture. These were easily brought to life in Xu’s 3D artwork.

‘Kirin’ resembles a cross between a dragon and a horse, with the body of a deer and the tail of an ox.

Check out Zhelong Xu’s website for more inspirational artwork.

3D artist Zhelong Xu.

For the latest creative app updates, download the monthly NVIDIA Studio Driver.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

11 Essential Explainers to Keep You in the Know in 2023

The NVIDIA corporate blog has long been a go-to source for information on the latest developments in AI and accelerated computing.

The blog’s “explainers” are among our most-read posts, offering a quick way to catch up on the newest technologies.

In this post, we’ve rounded up 11 of the most popular explainers from the blog, providing a beginner’s guide to understanding the concepts and applications of these cutting-edge technologies.

From AI models to quantum computing, these explainers are a must-read for anyone looking to stay informed on the latest tech developments in 2023.

  1. “What Is a Pretrained AI Model?” – This post covers the basics of pretrained AI models, including how they work and why they’re useful.
  2. “What Is Denoising?” – This piece explains denoising and its use in image and signal processing.
  3. “What Are Graph Neural Networks?” – This article introduces graph neural networks, including how they work and are used in various applications.
  4. “What Is Green Computing?” – This post explains the basics of green computing, including why it’s important and how it can be achieved.
  5. “What Is Direct and Indirect Lighting?” – This piece covers the differences between direct and indirect lighting in computer graphics, and how they’re used in different applications.
  6. “What Is a QPU?” – This blog introduces the quantum processing unit, including what it is and how it’s used in quantum computing.
  7. “What Is an Exaflop?” – This article explains what an exaflop is and why it’s an important measure of computational power.
  8. “What Is Zero Trust?” – This post covers the basics of zero trust, including what it is and how it can improve network security.
  9. “What Is Extended Reality?” – This piece provides an overview of extended reality — the umbrella term for virtual, augmented and mixed reality — including what it is and how it’s used in different applications.
  10. “What Is a Transformer Model?” – This blog explains what transformer models are and how they’re used in AI.
  11. “What Is Path Tracing?” – This article covers the basics of path tracing, including how it works and why it’s important for creating realistic computer graphics. It provides examples of its applications in different fields.

Let us know in the comments section below which AI and accelerated computing concepts you’d like explained next on our blog. We’re always looking for suggestions and feedback.