Meet NANA, Moonshine Studio’s AI-Powered Receptionist Avatar

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The creative team at Moonshine Studio — an artist-focused visual effects (VFX) studio specializing in animation and motion design — was tasked with solving a problem.

At the studio’s Taiwan office, receptionists were constantly engaged in meeting and greeting guests, preventing them from completing other important administrative work. To make matters worse, the automated kiosk greeting system wasn’t working as expected.

Senior Moonshine Studio 3D artist and this week’s In the NVIDIA Studio creator Eric Chiang stepped up to the challenge. He created a realistic, interactive 3D model that would serve as the foundation of a new AI-powered virtual assistant — NANA. The avatar can welcome guests and provide basic company info, easing the strain on the receptionist team.

Chiang built NANA using GPU-accelerated features in his favorite creative apps — powered by his NVIDIA Studio-badged MSI MEG Trident X2 PC, which is equipped with a GeForce RTX 4090 graphics card.

His creative workflow was enhanced by the Tensor Cores in his GPU, which supercharged AI-specific tasks — saving him time and elevating the quality of his work. RTX and AI also improve performance in gaming, boost productivity and more.

These advanced features are supported by NVIDIA Studio Drivers — free for RTX GPU owners — which add performance and reliability. The December Studio Driver provides support for the Reallusion iClone AccuFACE plugin, GPU audio enhancements, AV1 in HandBrake and more — and is now ready for download.

Contests and Challenges Calling All Creators

Creative community The Rookies is hosting Meet Mat 3 — the 3D digital painting contest. Open to students and professionals with no more than a year of industry experience, it challenges contestants to use Adobe Substance 3D Painter to texture a blank character, MAT, in their own unique style. Prizes include GeForce RTX GPUs, Wacom Cintiq displays and more. Register today — entries close Jan. 5, 2024.

MAT, textured by artist Cino Lai in Adobe Substance 3D Painter.

And though temperatures continue to drop, the #WinterArtChallenge is heating up with un-brrrrrr-lievable entries like this extraordinary #InstantNeRF by @RadianceFields.

Be sure to include the #WinterArtChallenge hashtag for a chance to be featured on the @NVIDIAStudio, @NVIDIAOmniverse or @NVIDIAAIDev social channels.

An AI on the Future

Chiang began in Blender, sculpting intricate 3D models that served as the building blocks for NANA. Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport unlocked interactive, photorealistic modeling.

He then used Marvelous Designer, software for making, editing and reusing 3D garments, to create realistic clothing for NANA. This streamlined the design and simulation process, ensuring the avatar is not only structurally sound but also impeccably dressed.

NANA’s casual day outfit.

Chiang deployed Quixel Mixer and Adobe Substance 3D Painter for shading, adding depth, texture and realism to the 3D models.

He then used Reallusion’s AccuRIG to efficiently create precise, adaptable character rigs.

Chiang put everything together in Unreal Engine, where he seamlessly integrated 3D objects into the scene, leveraging real-time rendering to create visually stunning results.

NVIDIA DLSS further increased viewport interactivity by using AI to upscale frames rendered at lower resolution while still retaining high-fidelity detail. All of this was powered by his GeForce RTX 4090 GPU.

The NVIDIA Studio-badged MSI MEG Trident X2 PC, equipped with a GeForce RTX 4090 graphics card.

Chiang is excited about what AI can do for creators and society at large.

“What was once science fiction is now becoming reality, opening the door to a whole new stage of scientific and technological development,” he said. “We are fortunate to participate in and witness this new stage.”

Moonshine Studio digital 3D artist Eric Chiang.

Visit Moonshine Studio and say hello to NANA.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Read More

How NVIDIA Fuels the AI Revolution With Investments in Game Changers and Market Makers

Great companies thrive on stories. Sid Siddeek, who runs NVIDIA’s venture capital arm, knows this well.

Siddeek still remembers one of his first jobs, schlepping presentation materials from one investor meeting to another, helping the startup’s CEO and management team get the story out while working from a trailer that “shook when the door opened,” he said.

That CEO was Jensen Huang. The startup was NVIDIA.

Siddeek, who has worked as an investor and an entrepreneur, knows how important it is to find the right people to share your company’s story with early on, whether they’re customers or partners, employees or investors.

It’s this very principle that underpins NVIDIA’s multifaceted approach to investing in the next wave of innovation, a strategy also championed by Vishal Bhagwati, who leads NVIDIA’s corporate development efforts.

It’s an effort that’s resulted in more than two dozen investments so far this year, accelerating as the pace of innovation in AI and accelerated computing quickens.

NVIDIA’s Three-Pronged Strategy to Support the AI Ecosystem

There are three ways that NVIDIA invests in the ecosystem, driving the transformation unleashed by accelerated computing. First, through NVIDIA’s corporate investments, overseen by Bhagwati. Second, through NVentures, our venture capital arm, led by Siddeek. And finally, through NVIDIA Inception, our vehicle for supporting startups and connecting them to venture capital.

There couldn’t be a better time to support companies harnessing NVIDIA technologies. AI alone could contribute more than $15 trillion to the global economy by 2030, according to PwC.

And if you’re working in AI and accelerated computing right now, NVIDIA stands ready to help. Developers across every industry in every country are building accelerated computing applications. And they’re just getting going.

The result is a collection of companies that are advancing the story of AI every day. They include Cohere, CoreWeave, Hugging Face, Inflection, Inceptive and many more. And we’re right alongside them.

“Partnering with NVIDIA is a game-changer,” said Ed Mehr, CEO of Machina Labs. “Their unmatched expertise will supercharge our AI and simulation capabilities.”

Corporate Investments: Growing Our Ecosystem

NVIDIA’s corporate investments arm focuses on strategic collaborations. These partnerships stimulate joint innovation, enhance the NVIDIA platform and expand the ecosystem. Since the beginning of 2023, NVIDIA has announced 14 such investments.

These target companies include Ayar Labs, specializing in chip-to-chip optical connectivity, and Hugging Face, a hub for advanced AI models.

The portfolio also includes next-generation enterprise solutions. Databricks offers an industry-leading data platform for machine learning, while Cohere provides enterprise automation through AI. Other notable companies are Recursion, Kore.ai and Utilidata, each contributing unique solutions in drug discovery, conversational AI and smart electricity grids, respectively.

Consumer services are another investment focus. Inflection is crafting a personal AI for creative expression, while Runway serves as a platform for art and creativity through generative AI.

The investment strategy extends to autonomous machines. Ready Robotics is developing an operating system for industrial robotics, and Skydio builds autonomous drones.

NVIDIA’s most recent investments are in cloud service providers like CoreWeave. These platforms cater to a diverse clientele, from startups to Fortune 500 companies seeking to build next-generation AI services.

NVentures: Investing Alongside Entrepreneurs

Through NVentures, we support innovators who are deeply relevant to NVIDIA. We aim to generate strong financial returns and expand the ecosystem by funding companies that use our platforms across a wide range of industries.

To date, NVentures has made 19 investments in companies in healthcare, manufacturing and other key verticals. Some examples of our portfolio companies include:

  • Genesis Therapeutics, Inceptive, Terray, Charm, Evozyne, Generate, Superluminal: revolutionizing drug discovery
  • Machina Labs, Seurat Technologies: disrupting industrial processes to improve manufacturing
  • PassiveLogic: automating building systems with AI
  • MindsDB: connecting enterprise data to AI for developers
  • Moon Surgical: improving laparoscopic surgery with AI
  • Twelve Labs: developing multimodal foundation models for video understanding
  • Flywheel: accelerating medical imaging data development
  • Luma AI: developing visual and multimodal models
  • Outrider: automating logistics hub operations
  • Synthesia: creating AI video for the enterprise
  • Replicate: providing a developer platform for open-source and custom models

All these companies are building on work being done inside and outside NVIDIA.

“NVentures has a network, not just within NVIDIA, but throughout the industry, to make sure we have access to the best technology and the best people to build all the different modules that have to come together to define the distribution and supply chain of the future,” said Andrew Smith, CEO of Outrider.

NVIDIA Inception: Supporting Startups and Connecting Them to Investors

In addition, we’re continuing to support startups with NVIDIA Inception. Launched in 2016, this free global program offers technology and marketing support to over 17,000 startups across multiple industries and over 125 countries.

And, as part of Inception, we’re partnering with venture capitalists through our VC Alliance, a program that offers benefits to our valued network of venture capital firms, including connecting startups with potential investors.

Partnering With Innovators in Every Industry

Whatever our relationship, whether as a partner or investor, we can offer companies unique forms of support.

NVIDIA has the technology. NVIDIA has the richest set of libraries and the deepest understanding of the frameworks needed to optimize training and inference pipelines.

We have the go-to-market skills. NVIDIA has tremendous field sales, solution architect and developer relations organizations with a long track record of working with the most innovative startups and the largest companies in the world.

We know how to grow. We have people throughout our organization who are recognized leaders in their respective fields and can offer expert advice to companies of all sizes and industries.

“Partnering with NVIDIA was an easy choice,” said Victor Riparbelli, cofounder and CEO of Synthesia. “We use their hardware, benefit from their AI expertise and get valuable insights, allowing us to build better products faster.”

Accelerating the Greatest Breakthroughs of Our Time

In turn, these investments augment our R&D in the software, systems and semiconductors undergirding this ecosystem.

With NVIDIA’s technologies poised to accelerate the work of researchers and scientists, entrepreneurs, startups and Fortune 500 companies, finding ways to support companies that rely on our technologies — with engineering resources, marketing support and capital — is more vital than ever.

Read More

500 Games and Apps Now Powered by RTX: A DLSS and Ray-Tracing Milestone

We’re celebrating a milestone this week with 500 RTX games and applications utilizing NVIDIA DLSS, ray tracing or AI technologies. It’s an achievement anchored by NVIDIA’s revolutionary RTX technology, which has transformed gaming graphics and performance.

The journey began in 2018 at an electrifying event in Cologne. In a steel and concrete music venue amidst the city’s gritty industrial north side, over 1,200 gamers, breathless and giddy, erupted as NVIDIA founder and CEO Jensen Huang introduced NVIDIA RTX and declared, “This is a historic moment … Computer graphics has been reinvented.”

This groundbreaking launch, set against the backdrop of the world’s largest gaming expo, Gamescom, marked the introduction of the GeForce RTX 2080 Ti, 2080 and 2070 GPUs.

Launched in 2018, NVIDIA RTX has redefined visual fidelity and performance in modern gaming and creative applications.

The most technically advanced games now rely on the techniques that RTX technologies have unlocked.

Ray tracing, enabled by dedicated RT Cores, delivers immersive, realistic lighting and reflections in games.

The technique has evolved from games with only a single graphics element rendered with ray tracing to titles such as Alan Wake 2, Cyberpunk 2077, Minecraft with RTX and Portal with RTX that use ray tracing for all of the light in the game.

And NVIDIA DLSS, powered by Tensor Cores, accelerates AI graphics, now boosting performance with DLSS Frame Generation and improving RT effects with DLSS Ray Reconstruction in titles like Cyberpunk 2077: Phantom Liberty.

Beyond gaming, these technologies revolutionize creative workflows, enabling real-time, ray-traced previews in applications that once required extensive processing time.

Ray tracing, a technique first described in 1969 by Arthur Appel, mirrors how light interacts with objects to create lifelike images.

Ray tracing was once limited to high-end movie production. NVIDIA’s RTX graphics cards have made this cinematic quality accessible in real-time gaming, enhancing experiences with dynamic lighting, reflections and shadows.
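At its core, the technique Appel described comes down to a geometric query: does a ray fired from the camera hit an object, and if so, where? A minimal ray-sphere intersection test (purely illustrative, not production renderer code) can be sketched in a few lines:

```python
import math

# Toy illustration of the core ray-tracing query: intersect a ray with a
# sphere. Real renderers trace millions of such rays per frame, which is
# exactly the workload dedicated RT Cores accelerate.

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses.

    `direction` is assumed to be a unit vector, so the quadratic's
    leading coefficient is 1.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None

# A ray fired down the z-axis at a unit sphere centered 5 units away
# hits the front surface at distance 4.
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

From hits like this, a renderer derives lighting, reflections and shadows by spawning further rays toward lights and across surfaces.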

High engagement rates in titles like Cyberpunk 2077, NARAKA: BLADEPOINT, Minecraft with RTX, Alan Wake 2 and Diablo IV — where 96% or more of RTX 40 Series gamers play with RTX ON — underscore this success.

To commemorate this milestone, twenty $500 Green Man Gaming gift cards and exclusive #RTXON keyboard keycaps are up for grabs. Participants must follow GeForce’s social channels and comply with the sweepstakes rules.

Stay tuned for more RTX 500 giveaways.

NVIDIA’s advancement from the first RTX graphics card to powering 500 RTX games and applications with advanced technologies heralds a new gaming and creative tech era. And NVIDIA continues to lead, offering unparalleled experiences in gaming and creativity.

Stay tuned to GeForce News for more updates on RTX games and enhancements.

Read More

Meet the Omnivore: SiBORG Lab Elevates Approach to Accessibility Using OpenUSD and NVIDIA Omniverse

Accessibility is a key element that all designers must consider before constructing a space or product — but the evaluation process has traditionally been tedious and time-consuming.

Mathew Schwartz, an assistant professor in architecture and design at the New Jersey Institute of Technology, is using the NVIDIA Omniverse platform and the Universal Scene Description framework, aka OpenUSD, to help architects, interior designers and industrial designers address this challenge.

Schwartz’s research and design lab SiBORG — which stands for simulation, biomechanics, robotics and graphics — focuses on understanding and improving design workflows, especially in relation to accessibility, human factors and automation. Schwartz and his team develop algorithms for research projects and turn them into usable products.

Using Omniverse — a development platform that enables multi-app workflows and real-time collaboration — the team developed open-source, OpenUSD-based code that automatically generates a complex accessibility graph for building design. This code is based on Schwartz’s research paper, “Human centric accessibility graph for environment analysis.”

The graph provides feedback related to human movement, such as the estimated energy expenditure required for taking a certain path, the number of steps it takes to complete the path, or the angles of any inclines along it.

With Omniverse, teams can use Schwartz’s code to visualize the graph and the paths that it creates. This can help designers better evaluate building code and safety for occupants while providing important accessibility insights.
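The paper’s actual graph construction is far more sophisticated, but the idea of scoring paths by movement cost can be sketched with a toy graph: waypoints with 3D coordinates, edge costs that penalize uphill grade, and a shortest-path search. All names and the cost model below are illustrative assumptions, not SiBORG’s code:

```python
import heapq
import math

# Hypothetical accessibility-graph sketch: nodes are (x, y, z) waypoints in
# metres, and each edge's cost is its 3D length scaled by a penalty that grows
# with uphill grade, so steep stairs cost more than a longer, gentler ramp.

def edge_cost(a, b, grade_penalty=10.0):
    """3D distance scaled by an uphill-grade penalty (descents cost nothing extra)."""
    dx, dy, dz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    horizontal = math.hypot(dx, dy)  # assume no purely vertical edges
    grade = max(dz, 0.0) / horizontal
    return math.sqrt(horizontal**2 + dz**2) * (1.0 + grade_penalty * grade)

def least_effort_path(nodes, edges, start, goal):
    """Dijkstra over the graph; returns (total cost, node path)."""
    adj = {n: [] for n in nodes}
    for u, v in edges:  # undirected, but cost differs by travel direction
        adj[u].append((v, edge_cost(nodes[u], nodes[v])))
        adj[v].append((u, edge_cost(nodes[v], nodes[u])))
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return dist[goal], path[::-1]

nodes = {"lobby": (0, 0, 0), "stairs": (3, 0, 3),
         "ramp": (10, 8, 3), "office": (12, 0, 3)}
edges = [("lobby", "stairs"), ("stairs", "office"),
         ("lobby", "ramp"), ("ramp", "office")]
cost, path = least_effort_path(nodes, edges, "lobby", "office")
# The longer but gentler ramp route beats the short, steep stairs.
```

Swapping in a biomechanical energy model and step counts per edge, as the research describes, turns this skeleton into the kind of feedback the graph provides.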


The Power of OpenUSD

Traditionally, feedback on accessibility and environmental conditions during the building design process has been limited to building code analysis. Schwartz’s work enables designers to overcome this obstacle by seamlessly integrating Omniverse and OpenUSD.

Previously, he had to switch between multiple applications to achieve different aspects of his simulation and modeling projects. His workflows were often split between tools such as Unity, which supports simulations with people, and McNeel Rhino3D, which offers 3D modeling features.

With OpenUSD, he can now combine his research, Python code, 3D environments and renders, and favorite tools into Omniverse.

“What got me hooked on Omniverse was how it allows me to combine the Python application programming interface with powerful physics, rendering and animation software,” he said. “My team took full advantage of the flexible Python APIs in Omniverse to develop almost the entire user interface.”

Schwartz’s team uses Omniverse to visualize and interact with existing open-source Python code in ways that don’t require external work, like seamlessly linking to a third-party app. The lab’s versatile data analysis tool can interact with any program that’s compatible with OpenUSD.

“With OpenUSD and Omniverse, we’ve been able to expand the scope of our research, as we can easily combine data analysis and visualization with the design process,” said Schwartz.

Running Realistic Renderings and Simulations

Schwartz also uses Omniverse to simulate crowd movement and interactions.

He accelerates large crowd simulations and animations using two NVIDIA RTX A4500 GPUs, which enable real-time visualization. These accelerated simulations can help designers gain valuable insights into how people with reduced mobility can navigate and interact in spaces.

“We can also show what locations will offer the best areas to place signage so that it’s most visible,” said Schwartz. “Our simulation work can be used to visualize paths taken in an early-stage design — this provides feedback on accessibility to prevent problems with building code, while allowing users to create designs that go beyond the minimum requirements.”

Schwartz also taps the feedback and assistance of many developers and researchers who actively engage on the Omniverse Discord channel. This collaborative environment has been instrumental in Schwartz’s journey, he said, as well as to the platform’s continuous improvement.

Schwartz’s open-source code is available for designers to use and enhance their design workflows. Learn more about his work and how NVIDIA Omniverse can revolutionize building design.

Join In on the Creation

Anyone can build their own Omniverse extension or Connector to enhance 3D workflows and tools.

Check out artwork from other “Omnivores” and submit projects in the Omniverse gallery. See how creators are using OpenUSD to accelerate a variety of 3D workflows in the latest OpenUSD All Stars.

Get started with NVIDIA Omniverse by downloading the free standard license, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and Twitter. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

Read More

Good Fortunes: ‘The Day Before’ Leads 17 Games on GeForce NOW

It’s a fortuitous GFN Thursday with 17 new games joining the GeForce NOW library, including The Day Before, Avatar: Frontiers of Pandora and — the 100th PC Game Pass title to join the cloud — Ori and the Will of the Wisps.

This week also marks a milestone: over 500 games and applications now support RTX ON. GeForce NOW Ultimate and Priority members can experience cinematic ray tracing on nearly any device thanks to NVIDIA RTX-powered gaming rigs in the cloud. Check out the RTX ON game row in the GeForce NOW app to play even more titles featuring this stunning graphics technology.

Stayin’ Alive

The Day Before on GeForce NOW
Take a trip to the big city and say hi to the locals.

The Day Before, Fntastic’s new open-world horror massively multiplayer online game, is a uniquely reimagined journey of survival set on the east coast of the present-day U.S. after the world has been overrun by zombies. Priority and Ultimate members can stream the game on nearly any device with support for RTX ON.

Explore the beautifully detailed New Fortune City, filled with skyscrapers, massive malls and grand stadiums, with a variety of vehicles. Fight against other players and those infected by a deadly virus. Survive by collecting loot, completing quests and building houses — all on a day-and-night cycle.

Help rebuild society from the comfort of the couch and across devices, streaming from the cloud. Priority members can build and survive at up to 1080p and 60 frames per second. Ultimate members can take advantage of longer session lengths, gain support for ultrawide resolutions and stream at up to 4K 120 fps. Both memberships offer support for real-time ray tracing, bringing cinematic lighting to every zombie encounter.

A New Adventure in the Clouds

You can fly, you can fly.

Fight for the future of the Na’vi in Ubisoft’s Avatar: Frontiers of Pandora. Expanding on the stories told in the hit Avatar films, the open-world action-adventure game explores a never-before-seen region of Pandora called the Western Frontier, with all-new environments, creatures and enemies.

Discover what it means to be Na’vi and join other clans to protect Pandora from the RDA, a corporation looking to exploit Pandora’s resources. Harness incredible strength and agility with character customization, craft new gear, and upgrade skills and weapons. Members can enjoy soaring across the skies with their Banshees, dragon-like creatures useful for exploring the vast Western Frontier and engaging the RDA in aerial combat.

Experience the epic adventure on nearly any device with a GeForce NOW Ultimate membership, streaming from GeForce RTX 4080-powered servers in the cloud. Ultimate members can save the Na’vi at up to 4K resolution or take in Pandora’s beautiful vistas at ultrawide resolution for the most cinematic, immersive gameplay.

A Journey of Courage

Ori and the Will of the Wisps on GeForce NOW
Ori-nge you glad you streamed it from the cloud?

Ori and the Blind Forest: Definitive Edition and Ori and the Will of the Wisps are the newest Xbox PC games to join GeForce NOW, which now includes 100 PC Game Pass titles. Developed by Moon Studios and published by Xbox Game Studios, the award-winning adventure series follows a spirit guardian named Ori as he explores beautiful, dangerous worlds.

In Ori and the Blind Forest, members must help Ori restore balance to the forest. Separated from his home during a storm and adopted by a creature called Naru, Ori must team up with a spirit named Sein to find his true destiny when calamity strikes the world of Nibel.

The sequel, Ori and the Will of the Wisps, brings Ori’s journey to a new world, Niwen, a hidden land of wonders and dangers. Ori must help a young, broken-winged owl named Ku and heal the land from dark corruption — all while encountering new friends, foes and mysteries that will test the spirit guardian’s courage and skills.

Members can take the adventure with them, streaming Ori’s adventures across nearly all of their devices thanks to the cloud. GeForce NOW Ultimate members can also light their journeys with high dynamic range on supported devices for an unparalleled visual experience.

Explore the Ori series and other games on GeForce NOW with PC Game Pass. Give the gift of cloud gaming with the latest membership bundle and get three months of PC Game Pass for free with the purchase of a six-month GeForce NOW Ultimate membership.

Building Blocks

LEGO Fortnite on GeForce NOW
Blast off to the cloud.

The magic of LEGO and Fortnite collides in Epic Games’ LEGO Fortnite, launching in the cloud today. Get creative in building and customizing the ultimate home base using collected LEGO elements. Recruit villagers to gather materials and survive the night. Gear up and drop into deep caves to search for rare resources in hidden areas.

Don’t miss the 17 newly supported games joining the GeForce NOW library this week:

  • World War Z: Aftermath (New release on Xbox, available on PC Game Pass, Dec. 5)
  • Avatar: Frontiers of Pandora (New release on Ubisoft, Dec. 7)
  • Warhammer 40,000: Rogue Trader (New release on Steam, Dec. 7)
  • The Day Before (New release on Steam, Dec. 7)
  • Goat Simulator 3 (New release on Xbox, available on PC Game Pass, Dec. 7)
  • LEGO Fortnite (New release on Epic Games Store, Dec. 7)
  • Against the Storm (New release on Xbox, available on PC Game Pass, Dec. 8)
  • Rocket Racing (New release on Epic Games Store, Dec. 8)
  • Fortnite Festival (New release on Epic Games Store, Dec. 9)
  • Agatha Christie – Murder on the Orient Express (Steam)
  • BEAST (Steam)
  • Dungeons 4  (Xbox, available on PC Game Pass)
  • Farming Simulator 22 (Xbox, available on PC Game Pass)
  • Hollow Knight (Xbox, available on PC Game Pass)
  • Ori and the Will of the Wisps (Steam, Xbox and available on PC Game Pass)
  • Ori and the Blind Forest: Definitive Edition (Steam)
  • Spirittea (Xbox, available on PC Game Pass)

Halo Infinite was planned to join the cloud in September but encountered some technical issues. The GeForce NOW team is working with Microsoft and game developer 343 Industries to bring the game to the service in the coming months. Stay tuned to GFN Thursday for further updates.

What are you planning to play this weekend? Let us know on Twitter or in the comments below. Bonus points if it includes #RTXON.

Read More

Visual AI Takes Flight at Canada’s Largest, Busiest Airport

Toronto Pearson International Airport, in Ontario, Canada, is the country’s largest and busiest airport, serving some 50 million passengers each year.

To enhance traveler experiences, the airport in June deployed the Zensors AI platform, which uses anonymized footage from existing security cameras to generate spatial data that helps optimize operations in real time.

A member of the NVIDIA Metropolis vision AI partner ecosystem, Zensors helped the Toronto Pearson operations team significantly reduce wait times in customs lines, decreasing the average time it took passengers to go through the arrivals process from an estimated 30 minutes during peak periods in 2022 to just under six minutes last summer.

“Zensors is making visual AI easy for all to use,” said Anuraag Jain, the company’s cofounder and head of product and technology.

Scaling multimodal, transformer-based AI isn’t easy for most organizations, Jain added. As a result, airports have often defaulted to traditional, less effective solutions based on hardware sensors, lidar or 3D stereo cameras — or have looked to improve operations by renovating or building new terminals, which can be multibillion-dollar projects.

“We provide a platform that allows airports to instead think more like software companies, deploying quicker, cheaper and more accurate solutions using their existing cameras and the latest AI technologies,” Jain said.

Speeding Airport Operations

To meet growing travel demand, Toronto Pearson needed a way to improve its operations in a matter of weeks, rather than the months or years it would normally take to upgrade or build new terminal infrastructure.

The Zensors AI platform — deployed to monitor more than 20 customs lines in two of the airport’s terminals — delivered such a solution. It converts video feeds from the airport’s existing camera systems into structured data.

Using anonymized footage, the platform counts how many travelers are in a line, identifies congested areas and predicts passenger wait times, among other tasks — and it alerts staff in real time to speed operations.
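The article doesn’t detail Zensors’ models, but the basic wait-time arithmetic follows from queueing fundamentals: by Little’s law, expected wait is roughly the number of people in line divided by the service rate. A toy sketch (function names and the alert threshold are hypothetical, not Zensors’ API):

```python
# Hypothetical back-of-the-envelope wait estimate: given a camera-derived
# head count and a measured processing rate, Little's law gives
# wait ≈ people in line / passengers served per minute.

def estimated_wait_minutes(people_in_line: int,
                           passengers_per_minute: float) -> float:
    if passengers_per_minute <= 0:
        raise ValueError("service rate must be positive")
    return people_in_line / passengers_per_minute

def congestion_alert(people_in_line: int, passengers_per_minute: float,
                     threshold_minutes: float = 10.0):
    """Mirror the real-time alerting idea: flag lines trending past a target."""
    wait = estimated_wait_minutes(people_in_line, passengers_per_minute)
    return wait > threshold_minutes, round(wait, 1)

# 90 travelers queued and 15 processed per minute gives about a 6-minute
# wait, roughly the peak figure the airport reported last summer.
alert, wait = congestion_alert(90, 15.0)
```

A production system layers forecasting and per-lane breakdowns on top, but this ratio is the kernel of any queue-length-to-wait-time conversion.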

The platform also offers analytical reports that enable operations teams to assess performance, plan more effectively and redeploy staff for optimal efficiency.

In addition to providing airport operators data-driven insights, live wait-time statistics from Zensors AI are published on Toronto Pearson’s online dashboard, as well as on electronic displays in the terminals. This lets passengers easily access accurate information about how long customs or security processes will take. And it increases customer satisfaction overall and reduces potential anxieties about whether they’ll be able to make connecting flights.

“The analyses we get from the Zensors platform are proving to be very accurate,” said Zeljko Cakic, director of airport IT planning and development at the Greater Toronto Airport Authority, Toronto Pearson’s managing company. “Our goal is to improve overall customer experience and reduce wait times, and the data gathered through the Zensors platform is one of the key contributors for decision-making to drive these results.”

Accurate AI Powered by NVIDIA

Zensors AI — built with vision transformer models — delivers insights with about 96% accuracy, as measured against manual human validation. It’s all powered by NVIDIA technology.

“The Zensors model development and inference run-time stack is effectively the NVIDIA AI stack,” Jain said.

The company uses NVIDIA GPUs and the CUDA parallel computing platform to train its AI models, along with the cuDNN accelerated library of primitives for deep neural networks and the NVIDIA DALI library for decoding and augmenting images and videos.

With checkpoints at Toronto Pearson open 24/7, Zensors AI inference runs around the clock on NVIDIA Triton Inference Server, an open-source software available through the NVIDIA AI Enterprise platform.

The company estimates that using NVIDIA Triton to optimize its inference run-time decreased its monthly cloud GPU spending by more than 20%. In this way, NVIDIA technology enables Zensors to provide a high-availability, production-grade, fully managed service for Toronto Pearson and other clients, Jain said.
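Dynamic batching is one of the standard Triton levers behind this kind of cost reduction: the server coalesces individual camera-frame requests into GPU-friendly batches. A hypothetical `config.pbtxt` sketch (the model name, shapes and values are illustrative, not Zensors’ actual configuration):

```
name: "queue_analyzer"
platform: "onnxruntime_onnx"
max_batch_size: 32
input [
  {
    name: "frames"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "counts"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
dynamic_batching {
  preferred_batch_size: [ 8, 16 ]
  max_queue_delay_microseconds: 5000
}
instance_group [
  {
    count: 2
    kind: KIND_GPU
  }
]
```

Batching requests and running multiple model instances per GPU raises utilization, which translates directly into fewer cloud GPUs needed for the same 24/7 workload.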

“Today, lots of companies and organizations want to adopt AI, but the hard part is figuring out how to go about it,” he added. “Being a part of NVIDIA Metropolis gives us the best tools and enables more visibility for potential end users of Zensors technology, which ultimately lets users deploy AI with ease.”

Zensors is also a member of NVIDIA Inception, a free program that nurtures cutting-edge startups.

Visual AI for the Future of Transportation

Among many other customers who use Zensors AI is Ireland’s Cork Airport, which uses the platform to optimize its operations from curb to gate. In June, Zensors AI was deployed across the airport in just 20 days and, in less than four months, the platform helped save about 90 hours of congestion time through proactive curbside traffic management.

“Aviation is just one part of mobility,” Jain said. “We’re expanding to rail, bus and multimodal transit — and we believe Zensors will provide the layer of intelligence to eventually bring AI to all types of brick-and-mortar operators.”

Looking forward, the company is working to incorporate generative AI and large language models into the question-answering capabilities of its platform in a safe, reliable way.

Learn more about the NVIDIA Metropolis platform and how it’s used to build smarter, safer travel hubs, including at Bengaluru Airport, one of India’s busiest airports.

17 Predictions for 2024: From RAG to Riches to Beatlemania and National Treasures

Move over, Merriam-Webster: Enterprises this year found plenty of candidates to add for word of the year. “Generative AI” and “generative pretrained transformer” were followed by terms such as “large language models” and “retrieval-augmented generation” (RAG) as whole industries turned their attention to transformative new technologies.

Generative AI started the year as a blip on the radar but ended with a splash. Many companies are sprinting to harness its ability to ingest text, voice and video to churn out new content that can revolutionize productivity, innovation and creativity.

Enterprises are riding the trend. Generative AI applications like OpenAI’s ChatGPT, further trained with corporate data, could add the equivalent of $2.6 trillion to $4.4 trillion annually across 63 business use cases, according to McKinsey & Company.

Yet managing massive amounts of internal data often has been cited as the biggest obstacle to scaling AI. Some NVIDIA experts in AI predict that 2024 will be all about phoning a friend — creating partnerships and collaborations with cloud service providers, data storage and analytical companies, and others with the know-how to handle, fine-tune and deploy big data efficiently.

Large language models are at the center of it all. NVIDIA experts say advancements in LLM research will increasingly be applied in business and enterprise applications. AI capabilities like RAG, autonomous intelligent agents and multimodal interactions will become more accessible and more easily deployed via virtually any platform.

Hear from NVIDIA experts on what to expect in the year ahead:

MANUVIR DAS
Vice President of Enterprise Computing

One size doesn’t fit all: Customization is coming to enterprises. Companies won’t have one or two generative AI applications — many will have hundreds of customized applications using proprietary data that is suited to various parts of their business.

Once running in production, these custom LLMs will feature RAG capabilities to connect data sources to generative AI models for more accurate, informed responses. Leading companies like Amdocs, Dropbox, Genentech, SAP, ServiceNow and Snowflake are already building new generative AI services using RAG and LLMs.

Open-source software leads the charge: Thanks to open-source pretrained models, generative AI applications that solve specific domain challenges will become part of businesses’ operational strategies.

Once companies combine these head-start models with private or real-time data, they can begin to see accelerated productivity and cost benefits across the organization. AI computing and software are set to become more accessible on virtually any platform, from cloud-based computing and AI model foundry services to the data center, edge and desktop.

Off-the-shelf AI and microservices: Generative AI has spurred the adoption of application programming interface (API) endpoints, which make it easier for developers to build complex applications.

In 2024, software development kits and APIs will level up as developers customize off-the-shelf AI models using AI microservices such as RAG as a service. This will help enterprises harness the full potential of AI-driven productivity with intelligent assistants and summarization tools that can access up-to-date business information.

Developers will be able to embed these API endpoints directly into their applications without having to worry about maintaining the necessary infrastructure to support the models and frameworks. End users can in turn experience more intuitive, responsive and tailored applications that adapt to their needs.

IAN BUCK
Vice President of Hyperscale and HPC

National treasure: AI is set to become the new space race, with every country looking to create its own center of excellence for driving significant advances in research and science and improving GDP.

With just a few hundred nodes of accelerated computing, countries will be able to quickly build highly efficient, massively performant, exascale AI supercomputers. Government-funded generative AI centers of excellence will boost countries’ economic growth by creating new jobs and building stronger university programs to create the next generation of scientists, researchers and engineers.

Quantum leaps and bounds: Enterprise leaders will launch quantum computing research initiatives based on two key drivers: the ability to use traditional AI supercomputers to simulate quantum processors and the availability of an open, unified development platform for hybrid-classical quantum computing. This enables developers to use standard programming languages instead of needing custom, specialized knowledge to build quantum algorithms.

Once considered an obscure niche in computer science, quantum computing exploration will become more mainstream as enterprises join academia and national labs in pursuing rapid advances in materials science, pharmaceutical research, subatomic physics and logistics.

KARI BRISKI
Vice President of AI Software

From RAG to riches: Expect to hear a lot more about retrieval-augmented generation as enterprises embrace these AI frameworks in 2024.

As companies train LLMs to build generative AI applications and services, RAG is widely seen as an answer to the inaccuracies or nonsensical replies that sometimes occur when the models don’t have access to enough accurate, relevant information for a given use case.

Using semantic retrieval, enterprises will take open-source foundation models, ingest their own data so that a user query can retrieve the relevant data from the index and then pass it to the model at run time.
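As a rough illustration of that retrieve-then-prompt flow, the toy sketch below ranks documents against a query and pastes the best match into the prompt. A bag-of-words cosine score stands in for a real embedding model, and the documents and query are invented:

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our headquarters are located in Taipei.",
    "GPU clusters are maintained by the infrastructure team.",
]
context = retrieve("how do I file a refund request", docs, k=1)
# The retrieved passage is passed to the model at run time as grounding
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: how do I file a refund request"
```

In production the retriever would query a vector index built by an embedding model, but the retrieve-then-prompt shape is the same.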

The upshot is that enterprises can use fewer resources to achieve more accurate generative AI applications in sectors such as healthcare, finance, retail and manufacturing. End users should expect to see more sophisticated, context-sensitive and multimodal chatbots and personalized content recommendation systems that allow them to talk to their data naturally and intuitively.

Multimodality makes its mark: Text-only generative AI is set to become a thing of the past. Even as generative AI remains in its infancy, expect to see many industries embrace multimodal LLMs that let consumers combine text, speech and images in a query about tables, charts or schematics and receive more contextually relevant responses.

Companies such as Meta and OpenAI will look to push the boundaries of multimodal generative AI by adding greater support for the senses, which will lead to advancements in the physical sciences, biological sciences and society at large. Enterprises will be able to understand their data not just in text format but also in PDFs, graphs, charts, slides and more.

NIKKI POPE
Head of AI and Legal Ethics

Target lock on AI safety: Collaboration among leading AI organizations will accelerate the research and development of robust, safe AI systems. Expect to see emerging standardized safety protocols and best practices that will be adopted across industries, ensuring a consistent and high level of safety across generative AI models.

Companies will heighten their focus on transparency and interpretability in AI systems — and use new tools and methodologies to shed light on the decision-making processes of complex AI models. As the generative AI ecosystem rallies around safety, anticipate AI technologies becoming more reliable, trustworthy and aligned with human values.

RICHARD KERRIS
Vice President of Developer Relations, Head of Media and Entertainment

The democratization of development: Virtually anyone, anywhere will soon be able to become a developer. Traditionally, one had to know and be proficient at using a specific development language to develop applications or services. As computing infrastructure becomes increasingly trained on the languages of software development, anyone will be able to prompt the machine to create applications, services, device support and more.

While companies will continue to hire developers to build and train AI models and other professional applications, expect to see significantly broader opportunities for anyone with the right skill set to build custom products and services. They’ll be helped by text inputs or voice prompts, making interacting with a computer as simple as verbally instructing it.

“Now and Then” in film and song: Just as the “new” AI-augmented song by the Fab Four spurred a fresh round of Beatlemania, the dawn of the first feature-length generative AI movie will send shockwaves through the film industry.

Take a filmmaker who shoots using a 35mm film camera. The same content can soon be transformed into a 70mm production using generative AI, reducing the significant costs involved in film production in the IMAX format and allowing a broader set of directors to participate.

Creators will transform beautiful images and videos into new types and forms of entertainment by prompting a computer with text, images or videos. Some professionals worry their craft will be replaced, but those issues will fade as generative AI gets better at being trained on specific tasks. This, in turn, will free up hands to tackle other tasks and provide new tools with artist-friendly interfaces.

KIMBERLY POWELL
Vice President of Healthcare 

AI surgical assistants: The day has come when surgeons can use voice to augment what they see and understand inside and outside the surgical suite.

Combining instruments, imaging, robotics and real-time patient data with AI will lead to better surgeon training, more personalization during surgery and better safety with real-time feedback and guidance even during remote surgery. This will help close the gap on the 150 million surgeries that are needed yet do not occur, particularly in low- and middle-income countries.

Generative AI drug discovery factories: A new drug discovery process is emerging, where generative AI molecule generation, property prediction and complex modeling will drive an intelligent lab-in-the-loop flywheel, shortening the time to discover and improving the quality of clinically viable drug candidates.

These AI drug discovery factories employ massive healthcare datasets spanning whole genomes, along with atomic-resolution instruments and robotic lab automation capable of running 24/7. For the first time, computers can learn patterns and relationships within enormous and complex datasets and generate, predict and model complex biological relationships that were previously discoverable only through time-consuming experimental observation and human synthesis.

CHARLIE BOYLE
Vice President of DGX Platforms

Enterprises lift bespoke LLMs into the cloud: One thing enterprises learned from 2023 is that building LLMs from scratch isn’t easy. Companies taking this route are often daunted by the need to invest in new infrastructure and technology, and they struggle to figure out how and when to prioritize other company initiatives.

Cloud service providers, colocation providers and other businesses that handle and process data for other businesses will help enterprises with full-stack AI supercomputing and software. This will make customizing pretrained models and deploying them easier for companies across industries.

Fishing for LLM gold in enterprise data lakes: There’s no shortage of statistics on how much information the average enterprise stores — it can be anywhere in the high hundreds of petabytes for large corporations. Yet many companies report that they’re mining less than half that information for actionable insights.

In 2024, businesses will begin using generative AI to make use of that untamed data by putting it to work building and customizing LLMs. With AI-powered supercomputing, businesses will begin mining their unstructured data — including chats, videos and code — to expand their generative AI development into training multimodal models. This leap beyond the ability to mine tables and other structured data will let companies deliver more specific answers to questions and find new opportunities. That includes helping detect anomalies on health scans, uncovering emerging trends in retail and making business operations safer.

AZITA MARTIN
Vice President of Retail, Consumer-Packaged Goods and Quick-Service Restaurants 

Generative AI shopping advisors: Retailers grapple with the dual demands of connecting customers to the products they desire while delivering elevated, human-like, omnichannel shopping experiences that align with their individual needs and preferences.

To meet these goals, retailers are gearing up to introduce cutting-edge, generative AI-powered shopping advisors, which will undergo meticulous training on the retailers’ distinct brand, products and customer data to ensure a brand-appropriate, guided, personalized shopping journey that mimics the nuanced expertise of a human assistant. This innovative approach will help set brands apart and increase customer loyalty by providing personalized help.

Setting up for safety: Retailers across the globe are facing a mounting challenge as organized retail crime grows increasingly sophisticated and coordinated. The National Retail Federation reported that retailers are experiencing a staggering 26.5% surge in such incidents since the post-pandemic uptick in retail theft.

To enhance the safety and security of in-store experiences for both customers and employees, retailers will begin using computer vision and physical security information management software to collect and correlate events from disparate security systems. This will enable AI to detect weapons and unusual behavior like the large-scale grabbing of items from shelves. It will also help retailers proactively thwart criminal activities and maintain a safer shopping environment.

REV LEBAREDIAN
Vice President of Omniverse and Simulation Technology

Industrial digitalization meets generative AI: The fusion of industrial digitalization with generative AI is poised to catalyze industrial transformation. Generative AI will make it easier to turn aspects of the physical world — such as geometry, light, physics, matter and behavior — into digital data.

Democratizing the digitalization of the physical world will accelerate industrial enterprises, enabling them to design, optimize, manufacture and sell products more efficiently. It also enables them to more easily create virtual training grounds and synthetic data to train a new generation of AIs that will interact and operate within the physical world, such as autonomous robots and self-driving cars.

3D interoperability takes off: From the drawing board to the factory floor, data for the first time will be interoperable.

The world’s most influential software and practitioner companies from the manufacturing, product design, retail, e-commerce and robotics industries are committing to the newly established Alliance for OpenUSD. OpenUSD, the universal language between 3D tools and data, will break down data silos, enabling industrial enterprises to collaborate across data lakes, tool systems and specialized teams more easily and quickly than ever to accelerate the digitalization of previously cumbersome, manual industrial processes.

XINZHOU WU
Vice President and General Manager of Automotive

Modernizing the vehicle production lifecycle: The automotive industry will further embrace generative AI to deliver physically accurate, photorealistic renderings that show exactly how a vehicle will look inside and out — while speeding design reviews, saving costs and improving efficiencies.

More automakers will embrace this technology within their smart factories, connecting design and engineering tools to build digital twins of production facilities. This will reduce costs and streamline operations without the need to shut down factory lines.

Generative AI will make consumer research and purchasing more interactive. From car configurators and 3D visualizations to augmented reality demonstrations and virtual test drives, consumers will be able to have a more engaging and enjoyable shopping experience.

Safety is no accident: Beyond the automotive product lifecycle, generative AI will also enable breakthroughs in autonomous vehicle (AV) development, including turning recorded sensor data into fully interactive 3D simulations. These digital twin environments, as well as synthetic data generation, will be used to safely develop, test and validate AVs at scale virtually before they’re deployed in the real world.

Generative AI foundational models will also support a vehicle’s AI systems to enable new personalized user experiences, capabilities and safety features inside and outside the car.

The behind-the-wheel experience is set to become safer, smarter and more enjoyable.

BOB PETTE
Vice President of Enterprise Platforms

Building anew with generative AI: Generative AI will allow organizations to design cars by simply speaking to a large language model or create cities from scratch using new techniques and design principles.

The architecture, engineering, construction and operations (AECO) industry is building the future using generative AI as its guidepost. Hundreds of generative AI startups and customers in AECO and manufacturing will focus on creating solutions for virtually any use case, including design optimization, market intelligence, construction management and physics prediction. AI will accelerate a manufacturing evolution that promises increased efficiency, reduced waste and entirely new approaches to production and sustainability.

Developers and enterprises are focusing in particular on point cloud data analysis, which uses lidar to generate representations of built and natural environments with precise details. This could lead to high-fidelity insights and analysis through generative AI-accelerated workflows.

GILAD SHAINER
Vice President of Networking 

AI influx ignites connectivity demand: A renewed focus on networking efficiency and performance will take off as enterprises seek the necessary network bandwidth for accelerated computing using GPUs and GPU-based systems.

Trillion-parameter LLMs will expose the need for faster transmission speeds and higher coverage. Enterprises that want to quickly roll out generative AI applications will need to invest in accelerated networking technology or choose a cloud service provider that does. The key to optimal connectivity is baking it into full-stack systems coupled with next-generation hardware and software.

The defining element of data center design: Enterprises will learn that not all data centers need to be alike. Determining the purpose of a data center is the first step toward choosing the appropriate networking to use within it. Traditional data centers are limited in terms of bandwidth, while those capable of running large AI workloads require thousands of GPUs operating with highly deterministic, low tail latency.

What the network is capable of when under a full load at scale is the best determinant of performance. The future of enterprise data center connectivity requires separate management (aka north-south) and AI (aka east-west) networks, where the AI network includes in-network computing specifically designed for high performance computing, AI and hyperscale cloud infrastructures.

DAVID REBER JR.
Chief Security Officer

Clarity in adapting the security model to AI: The pivot from app-centric to data-centric security is in full swing. Data is the fundamental supply chain for LLMs and the future of generative AI. Enterprises are just now seeing the problem unfold at scale. Companies will need to reevaluate people, processes and technologies to redefine the secure development lifecycle (SDLC). The industry at large will redefine its approach to trust and clarify what transparency means.

A new generation of cyber tools will be born. The SDLC of AI will be defined with new market leaders of tools and expectations to address the transition from the command line interface to the human language interface. The need will be especially important as more enterprises shift toward using open-source LLMs like Meta’s Llama 2 to accelerate generative AI output.

Scaling security with AI: Applying AI to the cybersecurity deficit will help detect never-before-seen threats. Currently, a fraction of global data is used for cyber defense. Meanwhile, attackers continue to take advantage of every misconfiguration.

Experimentation will help enterprises realize the potential of AI in identifying emergent threats and risks. Cyber copilots will help enterprise users navigate phishing and configuration. For the technology to be effective, companies will need to tackle privacy issues inherent in the intersection of work and personal life to enable collective defense in data-centric environments.

Along with democratizing access to technology, AI will also enable a new generation of cyber defenders as threats continue to grow. As soon as companies gain clarity on each threat, AI will be used to generate massive amounts of data that train downstream detectors to defend and detect these threats.

RONNIE VASISHTA
Senior Vice President of Telecoms

Running to or from RAN: Expect to see a major reassessment of investment cases for 5G.

After five years of 5G, network coverage and capacity have boomed — but revenue growth is sluggish and costs for largely proprietary and inflexible infrastructure have risen. Meantime, utilization for 5G RAN is stuck below 40%.

The new year will be about aggressively pursuing new revenue sources on existing spectrum to uncover new monetizable applications. Telecoms also will rethink the capex structure, focusing more on a flexible, high-utilization infrastructure built on general-purpose components. And expect to see a holistic reduction of operating expenses as companies leverage AI tools to increase performance, improve efficiency and eliminate costs. The outcome of these initiatives will determine how much carriers will invest in 6G technology.

From chatbots to network management: Telcos are already using generative AI for chatbots and virtual assistants to improve customer service and support. In the new year they’ll double down, ramping up their use of generative AI for operational improvements in areas such as network planning and optimization, fault and fraud detection, predictive analytics and maintenance, cybersecurity operations and energy optimization.

Given how pervasive and strategic generative AI is becoming, building a new type of AI factory infrastructure to support its growth also will become a key imperative. More and more telcos will build AI factories for internal use, as well as deploy these factories as a platform as a service for developers. That same infrastructure will be able to support RAN as an additional tenant.

MALCOLM DEMAYO
Vice President of Financial Services 

AI-first financial services: With AI advancements growing exponentially, financial services firms will bring the compute power to the data, rather than the other way around.

Firms will undergo a strategic shift toward a highly scalable, hybrid combination of on-premises infrastructure and cloud-based computing, driven by the need to mitigate concentration risk and maintain agility amid rapid technological advancements. Firms that handle their most mission-critical workloads this way, including AI-powered customer service assistants, fraud detection, risk management and more, will lead.

MARC SPIELER
Senior Director of Energy

Physics-ML for faster simulation: Energy companies will increasingly turn to physics-informed machine learning (physics-ML) to accelerate simulations, optimize industrial processes and enhance decision-making.

Physics-ML integrates traditional physics-based models with advanced machine learning algorithms, offering a powerful tool for the rapid, accurate simulation of complex physical phenomena. For instance, in energy exploration and production, physics-ML can quickly model subsurface geologies to aid in identification of potential exploration sites and assessment of operational and environmental risks.
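The core idea can be shown in miniature: train a surrogate model with a loss that combines data misfit and the residual of a known governing equation. The sketch below is a pure-Python toy (decay equation y' = -y, invented sample points), an illustration of the principle rather than any energy-industry workflow:

```python
import math

# Sparse observations of a decaying quantity governed by y'(t) = -y(t)
t_data = [0.0, 1.0]
y_data = [math.exp(-t) for t in t_data]
t_phys = [0.0, 0.25, 0.5, 0.75, 1.0]  # collocation points for the physics term

# Quadratic surrogate y(t) ~ a + b*t + c*t^2, trained by gradient descent
a = b = c = 0.0
lr = 0.01
for _ in range(40000):
    ga = gb = gc = 0.0
    # data-misfit term: surrogate should match the observations
    for t, y in zip(t_data, y_data):
        r = a + b*t + c*t*t - y
        ga += 2*r; gb += 2*r*t; gc += 2*r*t*t
    # physics-residual term: enforce y' + y = 0 at the collocation points
    for t in t_phys:
        r = (b + 2*c*t) + (a + b*t + c*t*t)
        ga += 2*r; gb += 2*r*(1 + t); gc += 2*r*(2*t + t*t)
    a -= lr*ga; b -= lr*gb; c -= lr*gc

y_half = a + b*0.5 + c*0.25  # surrogate prediction at t = 0.5, an unobserved point
```

Even with only two data points, the physics penalty pins the curve close to the true exp(-t) between them, which is exactly why physics-ML is attractive when measurements are sparse and expensive.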

In renewable energy sectors, such as wind and solar, physics-ML will play a crucial role in predictive maintenance, enabling energy companies to foresee equipment failures and schedule maintenance proactively to reduce downtimes and costs. As computational power and data availability continue to grow, physics-ML is poised to transform how energy companies approach simulation and modeling tasks, leading to more efficient and sustainable energy production.

LLMs — the fix for better operational outcomes: Coupled with physics-ML, LLMs will analyze extensive historical data and real-time sensor inputs from energy equipment to predict potential failures and maintenance needs before they occur. This proactive approach will reduce unexpected downtime and extend the lifespan of turbines, generators, solar panels and other critical infrastructure. LLMs will also help optimize maintenance schedules and resource allocation, ensuring that repairs and inspections are efficiently carried out. Ultimately, LLM use in predictive maintenance will save costs for energy companies and contribute to a more stable energy supply for consumers.

DEEPU TALLA
Vice President of Embedded and Edge Computing

The rise of robotics programmers: LLMs will lead to rapid improvements for robotics engineers. Generative AI will develop code for robots and create new simulations to test and train them.

LLMs will accelerate simulation development by automatically building 3D scenes, constructing environments and generating assets from inputs. The resulting simulation assets will be critical for workflows like synthetic data generation, robot skills training and robotics application testing.

In addition to helping robotics engineers, transformer AI models, the engines behind LLMs, will make robots themselves smarter so that they better understand complex environments and more effectively execute a breadth of skills within them.

For the robotics industry to scale, robots have to become more generalizable — that is, they need to acquire skills more quickly or bring them to new environments. Generative AI models — trained and tested in simulation — will be a key enabler in the drive toward more powerful, flexible and easier-to-use robots.

AV 2.0, the Next Big Wayve in Self-Driving Cars

A new era of autonomous vehicle technology, known as AV 2.0, has emerged, marked by large, unified AI models that can control multiple parts of the vehicle stack, from perception and planning to control.

Wayve, a London-based autonomous driving technology company, is leading the surf.

In the latest episode of NVIDIA’s AI Podcast, host Katie Burke Washabaugh spoke with the company’s cofounder and CEO, Alex Kendall, about what AV 2.0 means for the future of self-driving cars.

Unlike AV 1.0’s focus on perfecting a vehicle’s perception capabilities using multiple deep neural networks, AV 2.0 calls for comprehensive in-vehicle intelligence to drive decision-making in real-world, dynamic environments.

Embodied AI — the concept of giving AI a physical interface to interact with the world — is the basis of this new AV wave.

Kendall pointed out that it’s a “hardware/software problem — you need to consider these things separately,” even as they work together. For example, a vehicle can have the highest-quality sensors, but without the right software, the system can’t use them to execute the right decisions.

Generative AI plays a key role, enabling synthetic data generation so AV makers can use a model’s previous experiences to create and simulate novel driving scenarios.

It can “take crowds of pedestrians and snow and bring them together” to “create a snowy, crowded pedestrian scene” that the vehicle has never experienced before.

According to Kendall, that will “play a huge role in both learning and validating the level of performance that we need to deploy these vehicles safely” — all while saving time and costs.

In June, Wayve unveiled GAIA-1, a generative world model for developing autonomous vehicles.

The company also recently announced LINGO-1, an AI model that allows passengers to use natural language to enhance the learning and explainability of AI driving models.

Looking ahead, the company hopes to scale and further develop its solutions, improving the safety of AVs to deliver value, build public trust and meet customer expectations. Kendall views embodied AI as playing a definitive role in the future of the AI landscape, pushing pioneers to “build better” and “build further” to achieve the “next big breakthroughs.”

You Might Also Like

Driver’s Ed: How Waabi Uses AI Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience—the road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast, Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

‘Christmas Rush’ 3D Scene Brings Holiday Cheer This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. 

‘Tis the season for friends, family and beautifully rendered Santa animations from this week’s In the NVIDIA Studio artist, 3D expert Božo Balov.

This week also marks an incredible milestone, with over 500 NVIDIA RTX-powered games and creative apps now available with support for ray tracing and AI-powered technologies like NVIDIA DLSS. Over 120 of the most popular apps — including the Adobe Creative Cloud suite, Autodesk Maya, Blender, Blackmagic Design’s DaVinci Resolve, OBS, Unity and more — use RTX to accelerate workflows by orders of magnitude, power new AI tools and enhancements and enable real-time, ray-traced previews.

To celebrate, NVIDIA GeForce is hosting a giveaway for gift cards, rare, sought-after #RTXON keyboard keycaps and more. Follow GeForce on Facebook, Instagram, TikTok or X (formerly known as Twitter) for instructions on how to enter.

Say it ain’t snow: the NVIDIA Studio #WinterArtChallenge is back. Through the end of the year, share winter-themed art on Facebook, Instagram or X for a chance to be featured on NVIDIA Studio social media channels. Be sure to tag #WinterArtChallenge to join.

Finally, 80 Level — the creative community for digital artists, animators and computer-generated imagery specialists — is hosting its Community Metasites Challenge. Artists can showcase their creativity by applying unique aesthetics to a simple block level via characters, game mechanics, visual effects and more — with a chance to win a new NVIDIA Studio laptop. Register today.

Wrapper’s Delight

Balov’s Christmas Rush 3D animation reimagines Santa as a resident of the coastal city of Split, Croatia — but with a harsher, less jolly edge.

 

Balov jumped straight into modeling edgy Saint Nick in the virtual-reality modeling software Quill. He deployed vertex-painting techniques and used a photogrammetry scan of a Vespa as a base, adding brushstrokes to blend it with the rest of the scene.

 

To achieve a flickering effect on Santa’s clothing, Balov created a custom texture with different brush strokes in Adobe Photoshop. The texture doubles as an alpha map, which intentionally clips the geometry.

 

“When it comes to rendering 3D graphics, nothing really comes close to NVIDIA GPUs.” — Božo Balov

He then used Adobe Photoshop to paint monochromatic background layers. Balov’s GeForce RTX 3080 Ti GPU unlocked over 30 GPU-accelerated features, including Blur Gallery, Liquify, Smart Sharpen and Perspective Warp.

Balov then converted the files to the FBX adaptable file format for 3D software before importing them into Blender, where he animated the layers to move in the opposite direction of the character to create a sense of speed. He kept the lighting fairly simple, with one light source as the base and a few supplemental ones to emphasize specific parts of the scene.

 

Balov prefers working in Blender’s real-time engine EEVEE to animate his scene, cutting wait times. RTX-accelerated OptiX ray tracing in the viewport enabled greater interactivity with smoother movement, speeding his ideation and creative workflow.

Extraordinary detail.

“Rendering is a joy on NVIDIA RTX cards,” said Balov. “Since OptiX made its debut, rendering times have been cut in half or more — Blender Cycles feels like a real-time engine.”

When asked for advice to give aspiring artists, Balov emphasized the importance of individual passion.

“Pursue what matters to you,” he said. “Don’t spend time fulfilling other people’s ideas of what art should be.”

 

Check out Balov’s art portfolio on Instagram.

Follow NVIDIA Studio on Facebook, Instagram and X. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 


Bringing Personality to Pixels, Inworld Levels Up Game Characters Using Generative AI

To enhance the gaming experience, studios and developers spend tremendous effort creating photorealistic, immersive in-game environments.

But non-playable characters (NPCs) often get left behind. Many behave in ways that lack depth and realism, making their interactions repetitive and forgettable.

Inworld AI is changing the game by using generative AI to drive NPC behaviors that are dynamic and responsive to player actions. The Mountain View, Calif.-based startup’s Character Engine, which can be used with any character design, is helping studios and developers enhance gameplay and improve player engagement.

Elevate Gaming Experiences: Achievement Unlocked

The Inworld team aims to develop AI-powered NPCs that can learn, adapt and build relationships with players while delivering high-quality performance and maintaining in-game immersion.

To make it easier for developers to integrate AI-based NPCs into their games, Inworld built Character Engine, which uses generative AI running on NVIDIA technology to create immersive, interactive characters. It’s built to be production-ready, scalable and optimized for real-time experiences.

The Character Engine comprises three layers: Character Brain, Contextual Mesh and Real-Time AI.

Character Brain orchestrates a character’s performance by syncing its multiple machine learning models, such as those for text-to-speech, automatic speech recognition, emotions, gestures and animations.

The layer also enables AI-based NPCs to learn and adapt, navigate relationships and perform motivated actions. For example, users can create triggers using the “Goals and Action” feature to program NPCs to behave in a certain way in response to a given player input.

Contextual Mesh allows developers to set parameters for content and safety mechanisms, custom knowledge and narrative controls. Game developers can use the “Relationships” feature to create emergent narratives, such that an ally can turn into an enemy or vice versa based on how players treat an NPC.

One big challenge developers face when using generative AI is keeping NPCs in-world and on-message. Inworld’s Contextual Mesh layer helps overcome this hurdle by rendering characters within the logic and fantasy of their worlds, effectively avoiding the hallucinations that commonly appear when using large language models (LLMs).

The Real-Time AI layer ensures optimal performance and scalability for real-time experiences.

Powering Up AI Workflows With NVIDIA 

Inworld, a member of the NVIDIA Inception program, which supports startups through every stage of their development, uses NVIDIA A100 Tensor Core GPUs and NVIDIA Triton Inference Server as integral parts of its generative AI training and deployment infrastructure.

Inworld used the open-source NVIDIA Triton Inference Server software to standardize other non-generative machine learning model deployments required to power Character Brain features, such as emotions. The startup also plans to use the open-source NVIDIA TensorRT-LLM library to optimize inference performance. Both NVIDIA Triton Inference Server and TensorRT-LLM are available with the NVIDIA AI Enterprise software platform, which provides security, stability and support for production AI.
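To give a sense of what standardizing such a deployment looks like, here is a minimal, hypothetical Triton model configuration (config.pbtxt) for one non-generative model of the kind described, such as an emotion classifier. The model name, backend, tensor names and shapes are all illustrative assumptions, not Inworld’s actual configuration.

```
# Hypothetical config.pbtxt for an emotion-classification model served by Triton.
# Name, backend and tensor shapes are assumptions for illustration only.
name: "emotion_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 32
input [
  {
    name: "input_ids"
    data_type: TYPE_INT64
    dims: [ 128 ]
  }
]
output [
  {
    name: "emotion_logits"
    data_type: TYPE_FP32
    dims: [ 7 ]
  }
]
# Run one instance of the model on the GPU.
instance_group [
  { kind: KIND_GPU, count: 1 }
]
# Batch incoming requests together for better throughput.
dynamic_batching {
  max_queue_delay_microseconds: 100
}
```

With a configuration like this, Triton can batch many concurrent player requests onto a single GPU, which is what makes serving per-character models like emotion recognition practical at real-time latencies.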

Inworld also used NVIDIA A100 GPUs within Slurm-managed bare-metal machines for its production training pipelines. Similar machines wrapped in Kubernetes help manage character interactions during gameplay. This setup delivers real-time generative AI at the lowest possible cost.
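A training job on a Slurm-managed cluster of this kind is typically submitted with a batch script along the following lines. This is a generic sketch: the job name, partition, resource counts and training script are illustrative assumptions, not details from Inworld’s pipeline.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a multi-GPU A100 training job.
# Partition, GPU counts and script paths are illustrative only.
#SBATCH --job-name=character-brain-train
#SBATCH --partition=a100
#SBATCH --nodes=1
#SBATCH --gres=gpu:a100:8      # request eight A100 GPUs on the node
#SBATCH --cpus-per-task=64
#SBATCH --time=24:00:00

# Launch the training script under Slurm's process manager.
srun python train.py --config configs/character_brain.yaml
```

The same containerized training code can then be wrapped in Kubernetes for the inference side, which is the split between bare-metal training and orchestrated serving that the paragraph above describes.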

“We chose to use NVIDIA A100 GPUs because they provided the best, most cost-efficient option for our machine learning workloads compared to other solutions,” said Igor Poletaev, vice president of AI at Inworld.

“Our customers and partners are looking to find novel and innovative ways to drive player engagement metrics by integrating AI NPC functionalities into their gameplay,” said Poletaev. “There’s no way to achieve real-time performance without hardware accelerators, which is why we required GPUs to be integrated into our backend architecture from the very beginning.”

Inworld’s generative AI-powered NPCs have enabled dynamic, evergreen gaming experiences that keep players coming back. Developers and gamers alike have reported enhanced player engagement, satisfaction and retention.

Inworld has powered AI-based NPC experiences from Niantic, LG UPlus, Alpine Electronics and more. One open-world virtual reality game using the Inworld Character Engine saw a 5% increase in playtime, while a detective-themed indie game garnered over $300,000 in free publicity after some of the most popular Twitch streamers discovered it.

Learn more about Inworld AI and NVIDIA technologies for game developers.
