Meet the Omnivore: Startup Develops App Letting Users Turn Objects Into 3D Models With Just a Smartphone

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who accelerate 3D workflows and create virtual worlds using NVIDIA Omniverse, a development platform built on Universal Scene Description, aka OpenUSD.

As augmented reality (AR) becomes more prominent and accessible across the globe, Kiryl Sidarchuk is helping to erase the border between the real and virtual worlds.

Kiryl Sidarchuk

Sidarchuk is co-founder and CEO of AR-Generation, a member of the NVIDIA Inception program for cutting-edge startups. His company developed MagiScan, an AI-based 3D scanner app.

It lets users capture any object with their smartphone camera and quickly creates a high-quality, detailed 3D model of it for use in any AR or metaverse application.

AR-Generation now offers an extension that enables direct export of 3D models from MagiScan to NVIDIA Omniverse, a development platform for connecting and building 3D tools and metaverse applications.

The export is made fast and easy by Universal Scene Description, aka OpenUSD, an extensible framework that serves as a common language between digital content-creation tools.

“Augmented reality will become an integral part of everyday life,” said Sidarchuk, who’s based in Nicosia, Cyprus. “We customized our app to allow export of 3D models based on real-world objects directly to Omniverse, enabling users to showcase the models in AR and integrate them into any metaverse or game.”

Omniverse extensions are core building blocks that let anyone create and extend functions of Omniverse apps using the popular Python or C++ programming languages.

It was simple and convenient for AR-Generation to build the extension, Sidarchuk said, thanks to easily accessible documentation, as well as technical guidance from NVIDIA teams, free AWS credits and networking opportunities with other AI-driven companies — all benefits of being a part of NVIDIA Inception.

Capture, Click and Create 3D Models From Real-World Objects 

Sidarchuk estimates that MagiScan can create 3D models from objects 10x faster and at up to 100x less cost than a designer building them manually.

This frees creators up to focus on fine-tuning their work and makes AR more accessible to all through a simple app.

AR-Generation chose to build an extension for Omniverse because the platform “provides a convenient environment that integrates all the tools for working with 3D and generative AI,” said Sidarchuk. “Plus, we can collaborate and exchange ideas with colleagues in real time.”

Export 3D models from MagiScan to Omniverse with OpenUSD.

Sidarchuk’s favorite feature of Omniverse is its OpenUSD compatibility, which enables seamless interchange of 3D data between creative applications. “OpenUSD is the format of the future,” he said.
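For readers unfamiliar with the format, OpenUSD layers can be written in a human-readable .usda text form. Below is a hypothetical minimal layer — the prim names and the single-triangle mesh are invented for illustration — of the kind a scanned object might be exported as:

```usda
#usda 1.0
(
    defaultPrim = "ScannedObject"
)

def Xform "ScannedObject"
{
    def Mesh "Geometry"
    {
        int[] faceVertexCounts = [3]
        int[] faceVertexIndices = [0, 1, 2]
        point3f[] points = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
    }
}
```

Because every OpenUSD-aware application reads the same schema, a file like this opens identically across tools — which is what makes the format work as a common interchange language.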

Based on this framework, the MagiScan extension for Omniverse enables fast, affordable creation of high-quality 3D models for any object. MagiScan is available for download on iOS and Android devices.

“It can help everyone from individuals to large corporations save time and money in digitalization,” said Sidarchuk, who claims his first word as a toddler was “money.”

The business-oriented developer started his first company at age 16. It was a one-man endeavor, buying fresh fruits and vegetables from a small village and selling them in Minsk, the capital of Belarus. “That’s how I earned enough to buy my first car,” he mused.

More than a dozen years later, when he’s not working to “enhance human capabilities through augmented-reality technologies,” as he put it, Sidarchuk spends his free time with his five-year-old daughter, Aurora.

Watch Sidarchuk discuss 3D modeling, AI and AR on a replay of his Omniverse livestream on demand, and learn more about the MagiScan extension for Omniverse.

Join In on the Creation

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources and learn about OpenUSD. Explore the growing ecosystem of 3D tools connected to Omniverse.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels. 

Read More

Quicker Cures: How Insilico Medicine Uses Generative AI to Accelerate Drug Discovery

While generative AI is a relatively new household term, drug discovery company Insilico Medicine has been using it for years to develop new therapies for debilitating diseases.

The company’s early bet on deep learning is bearing fruit — a drug candidate discovered using its AI platform is now entering Phase 2 clinical trials to treat idiopathic pulmonary fibrosis, a relatively rare respiratory disease that causes progressive decline in lung function.

Insilico used generative AI for each step of the preclinical drug discovery process: to identify a molecule that a drug compound could target, generate novel drug candidates, gauge how well these candidates would bind with the target, and even predict the outcome of clinical trials.

Doing this with traditional methods would have cost more than $400 million and taken up to six years. But with generative AI, Insilico accomplished it for one-tenth of the cost in one-third of the time — reaching the first phase of clinical trials just two and a half years after beginning the project.
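As a rough sanity check on those figures — the $400 million, six-year baseline and the one-tenth-cost, one-third-time claims all come from the paragraph above:

```python
baseline_cost_usd = 400_000_000   # traditional preclinical cost cited above
baseline_years = 6.0              # traditional timeline cited above

ai_cost_usd = baseline_cost_usd / 10   # one-tenth of the cost
ai_years = baseline_years / 3          # one-third of the time

print(f"~${ai_cost_usd / 1e6:.0f}M and ~{ai_years:.0f} years")  # ~$40M and ~2 years
```

That works out to roughly $40 million and about two years — broadly consistent with the two-and-a-half-year figure reported for the project.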

“This first drug candidate that’s going to Phase 2 is a true highlight of our end-to-end approach to bridge biology and chemistry with deep learning,” said Alex Zhavoronkov, CEO of Insilico Medicine. “This is a significant milestone not only for us, but for everyone in the field of AI-accelerated drug discovery.”

Insilico is a premier member of NVIDIA Inception, a free program that provides cutting-edge startups with technical training, go-to-market support and AI platform guidance. The company uses NVIDIA Tensor Core GPUs in its generative AI drug design engine, Chemistry42, to generate novel molecular structures — and was one of the first adopters of an early precursor to NVIDIA DGX systems in 2015.

AI Enables End-to-End Preclinical Drug Discovery

Insilico’s Pharma.AI platform includes multiple AI models trained on millions of data samples for a range of tasks. One AI tool, PandaOmics, rapidly identifies and prioritizes targets that play a significant role in a disease’s progression — like the infamous spike protein on the virus that causes COVID-19.

The Chemistry42 engine can design, within days, new potential drug compounds that target the protein identified by PandaOmics. The generative chemistry tool uses deep learning to come up with drug-like molecular structures from scratch.

“Typically, AI companies in drug discovery focus either on biology or on chemistry,” said Petrina Kamya, head of AI platforms at Insilico. “From the start, Insilico has been applying the same deep learning approach to both fields, using AI both to discover drug targets and generate chemical structures of small molecules.”

Over the years, the Insilico team has adopted different kinds of deep neural networks for drug discovery, including generative adversarial networks and transformer models. They’re now using NVIDIA BioNeMo to accelerate the early drug discovery process with generative AI.

Finding the Needle in the AI Stack

To develop its pulmonary fibrosis drug candidate, Insilico used Pharma.AI to design and synthesize about 80 molecules, achieving unprecedented success rates for preclinical drug candidates. The process — from identifying the target to nominating a promising drug candidate for trials — took under 18 months.

During Phase 2 clinical trials, Insilico’s pulmonary fibrosis drug will be tested in several hundred people with the condition in the U.S. and China. The process will take several months — but in parallel, the company has more than 30 programs in the pipeline to target other diseases, including a number of cancer drugs.

“When we first presented our results, people just did not believe that generative AI systems could achieve this level of diversity, novelty and accuracy,” said Zhavoronkov. “Now that we have an entire pipeline of promising drug candidates, people are realizing that this actually works.”

Learn more about Insilico Medicine’s Chemistry42 platform for AI-accelerated drug candidate screening in this talk from NVIDIA GTC.

Subscribe to NVIDIA healthcare news and generative AI news.

Read More

Deep Learning Digs Deep: AI Unveils New Large-Scale Images in Peruvian Desert

Researchers at Yamagata University in Japan have harnessed AI to uncover four previously unseen geoglyphs — images on the ground, some as wide as 1,200 feet, made using the land’s elements — in Nazca, a seven-hour drive south of Lima, Peru.

The geoglyphs — a humanoid, a pair of legs, a fish and a bird — were revealed using a deep learning model, making the discovery process significantly faster than traditional archaeological methods.

The team’s deep learning model training was executed on an IBM Power Systems server with an NVIDIA GPU.

Using open-source deep learning software, the researchers analyzed high-resolution aerial photographs, a technique that was part of a study that began in November 2019.
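The study’s exact pipeline isn’t detailed here, but detection over large aerial photographs typically works by scanning fixed-size tiles across the image and scoring each one with the trained model. A minimal sketch of the tiling step — the function name, tile size and stride are illustrative, not taken from the study:

```python
def tile_origins(img_w, img_h, tile=256, stride=128):
    """Top-left corners of overlapping tiles covering an aerial image.

    Each tile would be passed to the classifier; overlapping tiles
    reduce the chance of a geoglyph being cut by a tile boundary.
    """
    return [(x, y)
            for y in range(0, img_h - tile + 1, stride)
            for x in range(0, img_w - tile + 1, stride)]

# A 512x512 photo with 256-pixel tiles at 50% overlap yields a 3x3 grid.
print(len(tile_origins(512, 512)))  # 9
```

Candidate tiles flagged by the model are then verified by archaeologists in the field, as the study describes.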

Published this month in the Journal of Archaeological Science, the study confirms the deep learning model’s findings through onsite surveys and highlights the potential of AI in accelerating archaeological discoveries.

The deep learning techniques that are the hallmark of modern AI are used in various archaeological efforts, whether analyzing ancient scrolls discovered across the Mediterranean or categorizing pottery sherds from the American Southwest.

The Nazca lines, a series of ancient geoglyphs dating from 500 B.C. to 500 A.D. — most likely created between 100 B.C. and 300 A.D. — were made by removing darker stones on the desert floor to reveal the lighter-colored sand beneath.

The drawings — depicting animals, plants, geometric shapes and more — are thought to have had religious or astronomical significance to the Nazca people who created them.

The discovery of these new geoglyphs indicates the possibility of more undiscovered sites in the area.

And it underscores how technology like deep learning can enhance archaeological exploration, providing a more efficient approach to uncovering hidden archaeological sites.

Read the full paper.

Featured image courtesy of Wikimedia Commons.

Read More

Scientists Improve Delirium Detection Using AI and Rapid-Response EEGs

Detecting delirium isn’t easy, but it can have a big payoff: speeding essential care to patients, leading to quicker and surer recovery.

Improved detection also reduces the need for long-term skilled care, enhancing the quality of life for patients while decreasing a major financial burden. In the U.S., caring for those suffering from delirium costs up to $64,000 a year per patient, according to the National Institutes of Health.

In a paper published last month in Nature, researchers describe how they used a deep learning model called Vision Transformer, accelerated by NVIDIA GPUs, alongside a rapid-response electroencephalogram, or EEG, device to detect delirium in critically ill older adults.

The paper, called “Supervised deep learning with vision transformer predicts delirium using limited lead EEG,” is authored by Malissa Mulkey of the University of South Carolina, Huyunting Huang of Purdue University, Thomas Albanese and Sunghan Kim of East Carolina University, and Baijian Yang of Purdue.

Their innovative approach achieved a testing accuracy rate of 97%, promising a potential breakthrough in forecasting delirium. And by harnessing AI and EEGs, the researchers could objectively evaluate prevention and treatment methods, leading to better care.

This impressive result is due in part to the accelerated performance of NVIDIA GPUs, enabling the researchers to accomplish their tasks in half the time compared to CPUs.

Delirium affects up to 80% of critically ill patients. Yet conventional clinical detection methods identify fewer than 40% of cases — representing a significant gap in patient care. Presently, screening ICU patients involves a subjective bedside assessment.

The introduction of handheld EEG devices could make screening more accurate and affordable, but the lack of skilled technicians and neurologists poses a challenge.

The use of AI, however, can eliminate the need for a neurologist to interpret findings and allow for the detection of changes associated with delirium roughly two days before symptom onset, when patients are more receptive to treatment. It also makes it possible to use EEGs with minimal training.

The researchers applied ViT, a Vision Transformer model that adapts the transformer architecture originally created for natural language processing, to EEG data, accelerated by NVIDIA GPUs — offering a fresh approach to data interpretation.
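The paper’s exact model isn’t reproduced here, but the core ViT idea — splitting the input into fixed-size patches that the transformer treats as tokens — can be sketched in a few lines. This is a toy illustration, not the authors’ code; the function name and patch length are made up:

```python
def eeg_to_patches(signal, patch_len=4):
    """Split one EEG channel into fixed-length, non-overlapping patches.

    A ViT embeds each patch as a token before applying self-attention;
    trailing samples that don't fill a whole patch are dropped.
    """
    n_patches = len(signal) // patch_len
    return [signal[i * patch_len:(i + 1) * patch_len] for i in range(n_patches)]

channel = [0.1, 0.4, -0.2, 0.0, 0.3, 0.5, -0.1, 0.2, 0.7]  # toy samples
print(eeg_to_patches(channel))  # two patches of four samples; the ninth is dropped
```

In the study’s setting, the tokens come from limited-lead EEG recordings rather than image patches, but the patch-and-attend mechanism is the same.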

The use of a handheld rapid-response EEG device, which doesn’t require large EEG machines or specialized technicians, was another noteworthy study finding.

This practical tool, combined with advanced AI models for interpreting the data they collect, could streamline delirium screenings in critical care units.

The research presents a promising method for delirium detection that could shorten hospital stays, increase discharge rates, decrease mortality rates and reduce the financial burden associated with delirium.

By integrating the power of NVIDIA GPUs with innovative deep learning models and practical medical devices, this study underlines the transformative potential of technology in enhancing patient care.

As AI grows and develops, medical professionals are increasingly likely to rely on it to forecast conditions like delirium and intervene early, revolutionizing the future of critical care.

Read the full paper.

Read More

A Golden Age: ‘Age of Empires III’ Joins GeForce NOW

Conquer the lands in Microsoft’s award-winning Age of Empires III: Definitive Edition. It leads 10 new games supported today on GeForce NOW.

At Your Command

Age of Empires III on GeForce NOW
Stream battles all from the cloud.

Age of Empires III: Definitive Edition is a remaster of one of the most beloved real-time strategy franchises, featuring improved visuals, enhanced gameplay, cross-platform multiplayer and more. Command mighty civilizations from across Europe and the Americas or jump to the battlefields of Asia. Members can experience two new game modes: Historical Battles and The Art of War Challenge Missions. Two new nations also join this edition — Sweden and the Inca — each with advantages for conquering the New World.

Build an empire today and stream across devices in glorious 4K resolution with an Ultimate membership.

Conquer Your Games List

Conqueror's Blade on GeForce NOW
Master the art of siege tactics in “Conqueror’s Blade” this week.

The GeForce NOW library is always expanding. Take a look at the 10 newly supported games this week.

  • Aliens: Dark Descent (New release on Steam, June 20)
  • Trepang2 (New release on Steam, June 21)
  • Forever Skies (New release on Steam, June 22)
  • Age of Empires III: Definitive Edition (Steam)
  • A.V.A Global (Steam)
  • Bloons TD 6 (Steam)
  • Conqueror’s Blade (Steam)
  • Layers of Fear (Steam)
  • Park Beyond (Steam)
  • Tom Clancy’s Rainbow Six Extraction (Steam)

Before diving into the weekend, let us know your answer to our question of the week on Twitter or in the comments below. Happy streaming!

Read More

Shell-e-brate Good Times in 3D With ‘Kingsletter’ This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Amir Anbarestani, an accomplished 3D artist who goes by the moniker Kingsletter, had a “shell of a good time” creating his Space Turtle scene this week In the NVIDIA Studio.

Kingsletter has always harbored a fascination with 3D art, he said. As a child, he often enjoyed exploring and crafting within immersive environments. Whether it was playing with plasticine — putty-like modeling material — or creating pencil drawings, his innate inclination for self-expression always found resonance within the expansive domain of 3D.

Below, he shares his inspiration and creative process using ZBrush, Adobe Substance 3D Painter and Blender.

An NVIDIA DLSS 3 plug-in is now available in Unreal Engine 5, offering GeForce RTX 40 Series owners benefits including AI upscaling for higher frame rates, super resolution and more.

And 3D creative app Marvelous Designer launched its NVIDIA Omniverse Connector this month, covered in the latest Into the Omniverse installment. Learn how talented artists are using the Connector, along with the Universal Scene Description (OpenUSD) framework, to elevate their creative workflows.

NVIDIA DLSS 3 Plug-In Is Unreal — Engine 5

NVIDIA Studio released a DLSS 3 plug-in compatible with Unreal Engine 5. The Play in Editor tool is useful for game developers to quickly review gameplay in a level while editing — and DLSS 3 AI upscaling will unlock significantly higher frame rates on GeForce RTX 40 Series GPUs for even smoother previewing.

NVIDIA DLSS 3 plug-in unlocks incredible visual details with DLSS 3 in Unreal Engine 5.

Plus, select Unreal Engine viewports offer DLSS 2 Super Resolution and upscaling benefits in typical content-creation workflows like modeling, lighting, animation and more.

Download DLSS 3 for Unreal Engine 5.2, available now. Learn more about NVIDIA technologies supported by Unreal Engine 5.

Turtle Recall 

The process began with sketching and initial sculpting in ZBrush, where the concept of a floating turtle in space took shape and evolved into a dynamic shot of the creature soaring toward the camera.

“It’s remarkable how something as simple as shaping an idea’s basic form can be so immensely gratifying,” said Kingsletter on the blockout phase. “There’s a unique joy in starting with a blank canvas and gradually bringing the essence of a concept to life.”

Sketching and initial sculpting in ZBrush.

After finalizing the model in ZBrush, Kingsletter used ZRemesher to retopologize it, or generate a low-poly version suitable for the intended scene. This is useful for removing artifacts and other mesh issues before animation and rigging.

“NVIDIA graphics cards are industry leading in the creative community. I don’t think I know anyone that uses other GPUs.” — Kingsletter

RizomUV, a 3D UV-mapping software, was then used to unwrap the model — the process of opening a mesh into a flat 2D layout so a texture can cover the 3D object. This makes it possible to add textures with precision, a common need for professional artists.
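The software’s actual unwrapping algorithms are proprietary, but the basic idea — assigning each 3D vertex a 2D (U, V) coordinate — can be shown with the simplest possible scheme, a planar projection. This is a toy illustration, not what RizomUV does:

```python
def planar_uv(vertices):
    """Naive UV unwrap: project onto the XY plane, then normalize to [0, 1].

    Real unwrapping tools cut seams and minimize stretch across the
    surface; this sketch simply drops the Z coordinate.
    """
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0
    span_y = (max(ys) - min_y) or 1.0
    return [((x - min_x) / span_x, (y - min_y) / span_y)
            for x, y, _z in vertices]

quad = [(0, 0, 5), (2, 0, 5), (2, 1, 5), (0, 1, 5)]  # a flat quad at z=5
print(planar_uv(quad))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Each output pair indexes into the 2D texture image, which is why a clean unwrap matters before painting surface details.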

Next, Kingsletter applied surface details, from subtle dusting to extreme wear and tear, with materials mimicking real-world behaviors such as sheen, subsurface scattering and more in Adobe Substance 3D Painter. RTX-accelerated light and ambient-occlusion baking produced fully baked models in mere seconds.

Textures added and baked rapidly in Adobe Substance 3D Painter.

Kingsletter then moved to Blender to animate the scene, setting up simple rigs and curves to bring the turtle’s flapping limbs and flight to life. His MSI Creator Z17 HX Studio A13V NVIDIA Studio laptop with GeForce RTX 4070 graphics turtle-ly exceeded the artist’s lofty expectations.

The MSI Creator Z17 HX Studio laptop with GeForce RTX 4070 graphics.

“As a digital creative professional, I always strive to work with the best creative tools available,” Kingsletter said. “Choosing the MSI Creator laptop allowed me to exceed my creative professional needs and indulge in my passionate gaming hobby.”

He enriched the cosmic environment using Blender’s particle system, which scattered random debris, asteroids and a small, rotating planet throughout the outer-space scene. AI-powered, RTX-accelerated OptiX ray tracing unlocked buttery-smooth interactive animation in the viewport.

Create magnificent worlds in Blender accelerated by GeForce RTX graphics.

“Simulating smoke proved to be the most challenging aspect,” said Kingsletter about his first foray into this form of animation. “Through numerous trials and errors, I persevered until I achieved a truly satisfactory result.”

Realistic smoke elevated the 3D animation.

His RTX 4070 GPU facilitated smoother, more efficient rendering of the final visuals with RTX-accelerated OptiX ray tracing in Blender Cycles, ensuring the fastest final frame render.

When asked what he’d advise his younger artist self, Kingsletter said, “I’d enhance my observation skills. By immersing myself in the intricacies of form and paying careful attention to the world around me, I would have laid a stronger foundation for my creative journey.”

Wise words for all creators.

Digital 3D artist Kingsletter.

Check out Kingsletter’s beautiful 3D creations on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Into the Omniverse: Universal Scene Description Support for Marvelous Designer Lets Users Tailor Digital Assets, Clothes for 3D Characters

Editor’s note: This post is part of Into the Omniverse, a monthly series focused on how artists, developers and enterprises can transform their workflows using the latest advances in Universal Scene Description and NVIDIA Omniverse.

Whether animating fish fins or fashioning chic outfits for digital characters, creators can tap Marvelous Designer software to compose and tailor assets, clothes and other materials for their 3D workflows.

Marvelous Designer recently launched an Omniverse Connector, a tool that enhances collaborative workflows that take place between its software and NVIDIA Omniverse, a development platform for connecting and building 3D tools and applications.

The Connector enables users to significantly speed and ease their design processes, thanks to its support for the Universal Scene Description framework, known as OpenUSD, which serves as a common language between 3D tools.

In a typical computer graphics pipeline, an artist needs to go back and forth between software tools to finalize their work. The new Omniverse Connector enables creators to save time with Marvelous Designer’s improved import and export capabilities through OpenUSD.

In a recent livestream, 3D designer Brandon Yu shared how he’s using the new Connector and OpenUSD to improve his collaborative workflow, enhance productivity, expand creative possibilities and streamline his design process.

Mike Shawbrook, who has more than 150,000 subscribers on his MH Tutorials YouTube channel, walks through using the new Connector in the tutorial below. Shawbrook demonstrates how he set up a live session between Marvelous Designer and Omniverse to create a simple cloth blanket.

For more, check out this tutorial on using the new Connector and see how OpenUSD can improve 3D workflows:

Improved USD Compatibility

With the Marvelous Designer Omniverse Connector, users can harness the real-time rendering capabilities of Omniverse to visualize their garments in an interactive environment. This integration empowers creators to make informed design decisions, preview garments’ reactions to different lighting conditions and simulate realistic fabric behavior in real time.

The Connector’s expanded support for OpenUSD enables seamless interchange of 3D data between creative applications.

In the graphic above, an artist uses the new Connector to adjust 3D-animated fish fins, a key digital asset in an underwater scene.

Get Plugged Into the Omniverse 

To learn more about how OpenUSD can improve 3D workflows, check out a new video series on the file framework. The first installment covers four OpenUSD “superpowers.”

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools.

Share your Marvelous Designer and Omniverse creations to the Omniverse gallery for a chance to be featured on NVIDIA social media channels.

Featured image courtesy of Marvelous Designer.

Read More

NVIDIA CEO: Creators Will Be “Supercharged” by Generative AI

Generative AI will “supercharge” creators across industries and content types, NVIDIA founder and CEO Jensen Huang said today at the Cannes Lions Festival, on the French Riviera.

“For the very first time, the creative process can be amplified in content generation, and the content generation could be in any modality — it could be text, images, 3D, videos,” Huang said in a conversation with Mark Read, CEO of WPP — the world’s largest marketing and communications services company.

Huang and Read backstage at Cannes Lions

At the event attended by thousands of creators, marketers and brand execs from around the world, Huang outlined the impact of AI on the $700 billion digital advertising industry. He also touched on the ways AI can enhance creators’ abilities, as well as the importance of responsible AI development.

“You can do content generation at scale, but infinite content doesn’t imply infinite creativity,” he said. “Through our thoughts, we have to direct this AI to generate content that has to be aligned to your values and your brand tone.”

The discussion followed Huang’s recent keynote at COMPUTEX, where NVIDIA and WPP announced a collaboration to develop a content engine powered by generative AI and the NVIDIA Omniverse platform for building and operating metaverse applications.

Driving Forces of the Generative AI Era

NVIDIA has been pushing the boundaries of graphics technology for 30 years and has been at the forefront of the AI revolution for a decade. This combination of expertise in graphics and AI uniquely positions the company to enable the new era of generative AI applications.

Huang said that “the biggest moment of modern AI” can be traced back to an academic contest in 2012, when a team of University of Toronto researchers led by Alex Krizhevsky showed that NVIDIA GPUs could train an AI model that recognized objects better than any computer vision algorithm that came before it.

Since then, developers have taught neural networks to recognize images, videos, speech, protein structures, physics and more.

“You could learn the language of almost anything,” Huang said. “Once you learn the language, you can apply the language — and the application of language is generation.”

Generative AI models can create text, pixels, 3D objects and realistic motion, giving professionals superpowers to more quickly bring their ideas to life. Like a creative director working with a team of artists, users can direct AI models with prompts, and fine-tune the output to align with their vision.

“You have to give the machine feedback like the best creative director,” Read said.

These tools aren’t a replacement for human creativity, Huang emphasized. They augment the skills of artists and marketing professionals to help them feed demand from clients by producing content more quickly and in multiple forms tailored to different audiences.

“We will democratize content generation,” Huang said.

Reimagining How We Live, Work and Create With AI

Generative AI’s key benefit for the creative industry is its ability to scale up content generation, rapidly generating options for text and visuals that can be used in advertising, marketing and film.

“In the old days, you’d create hundreds of different ad options that are retrieved based on the medium,” Huang said. “In the future, you won’t retrieve — you’ll generate billions of different ads. But every single one of them has to be tone appropriate, has to be brand perfect.”

For use by professional creators, these AI tools must also produce high-quality visuals that meet or exceed the standard of content captured through traditional methods.

It all starts with a digital twin, a true-to-reality simulation of a real-world physical asset. The NVIDIA Omniverse platform enables the creation of stunning, photorealistic visuals that accurately represent physics and materials — whether for images, videos, 3D objects or immersive virtual worlds.

“Omniverse is a virtual world,” Huang said. “We created a virtual world where AI could learn how to create an AI that’s physically based and grounded by physics.”  

“This virtual world has the ability to ingest assets and content that’s created by any tool, because we have this interface called USD,” he said, referring to the Universal Scene Description framework for collaborating in 3D. With it, artists and designers can combine assets developed using popular tools from companies like Adobe and Autodesk with virtual worlds developed using generative AI.

NVIDIA Picasso, a foundry for custom generative AI models for visual design unveiled earlier this year, also supports best-in-class image, video and 3D generative AI capabilities developed in collaboration with partners including Adobe, Getty Images and Shutterstock.

“We created a platform that makes it possible for our partners to train from data that was licensed properly from, for example, Getty, Shutterstock, Adobe,” Huang said. “They’re respectful of the content owners. The training data comes from that source, and whatever economic benefits come from that could accrete back to the creators.”

Like any groundbreaking technology, it’s critical that AI is developed and deployed thoughtfully, Read and Huang said. Technology to watermark AI-generated assets and to detect whether a digital asset was modified or counterfeited will support these goals.

“We have to put as much energy into the capabilities of AI as we do the safety of AI,” Huang said. “In the world of advertising, safety is brand alignment, brand integrity, appropriate tone and truth.”

Collaborating on Content Engine for Digital Advertising

As a leader in digital advertising, WPP is embracing AI as a tool to boost creativity and personalization, helping creators across the industry craft compelling messages that reach the right consumer.

“From the creative process to the customer, there’s going to have to be ad agencies in the middle that understand the technology,” Huang said. “That entire process in the middle requires humans in the loop. You have to understand the voice of the brand you’re trying to represent.”

Using Omniverse Cloud, WPP’s creative professionals can build physically accurate digital twins of products using a brand’s specific product-design data. This real-world data can be combined with AI-generated objects and digital environments — licensed through partners such as Adobe and Getty Images — to create virtual sets for marketing content.

“WPP is going to unquestionably become an AI company,” Huang said. “You’ll create an AI factory where the input is creativity, thoughts and prompts, and what comes out of it is content.”

Enhanced by responsibly trained, NVIDIA-accelerated generative AI, this content engine will boost creative teams’ speed and efficiency, helping them quickly render brand-accurate advertising content at scale.

“The type of content you’ll be able to help your clients generate will be practically infinite,” Huang said. “From the days of hundreds of examples of content that you create for a particular brand or for a particular campaign, it’s going to eventually become billions of generated content for every individual.”

Learn more about NVIDIA’s collaboration with WPP.


NVIDIA Research Wins Autonomous Driving Challenge, Innovation Award at CVPR

NVIDIA will be showcased next week as the winner of the fiercely contested 3D Occupancy Prediction Challenge for autonomous driving development at the Computer Vision and Pattern Recognition Conference (CVPR), in Vancouver, Canada.

The competition had more than 400 submissions from nearly 150 teams across 10 regions.

3D occupancy prediction is the task of forecasting the state of each voxel in a scene — that is, each cell of a 3D grid laid over the environment. Each voxel can be labeled free, occupied or unknown.
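
To make the voxel-state idea concrete, here is a toy sketch of such a grid in NumPy. The grid size, labels and layout are illustrative assumptions, not the challenge's actual data format:

```python
import numpy as np

# Voxel state labels: illustrative encoding, not the challenge's format.
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

# A tiny 4 x 4 x 2 scene grid (x, y, z), initialized to "unknown".
grid = np.full((4, 4, 2), UNKNOWN, dtype=np.uint8)

# Mark the ground layer as free, with a small obstacle occupying it.
grid[:, :, 0] = FREE          # bottom layer observed as drivable
grid[1:3, 1:3, 0] = OCCUPIED  # an object covering four ground voxels

# A planner can collapse the height axis to ask which bird's-eye-view
# cells are blocked:
blocked_bev = (grid == OCCUPIED).any(axis=2)
print(blocked_bev.sum())  # 4 blocked BEV cells
```

A real occupancy network predicts these per-voxel states from camera images; the point here is only the data structure the planning stack consumes.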

Critical to the development of safe and robust self-driving systems, 3D occupancy grid prediction provides information to autonomous vehicle (AV) planning and control stacks using state-of-the-art convolutional neural networks and transformer models, which are enabled by the NVIDIA DRIVE platform.

“NVIDIA’s winning solution features two important AV advancements,” said Zhiding Yu, senior research scientist for learning and perception at NVIDIA. “It demonstrates a state-of-the-art model design that yields excellent bird’s-eye-view perception. It also shows the effectiveness of visual foundation models with up to 1 billion parameters and large-scale pretraining in 3D occupancy prediction.”

Perception for autonomous driving has evolved in recent years from handling 2D tasks, such as detecting objects or free space in images, to reasoning about the world in 3D from multiple input images.

This now provides a flexible and precise fine-grained representation of objects in complex traffic scenes, which is “critical for achieving the safety perception requirements for autonomous driving,” according to Jose Alvarez, director of AV applied research and distinguished scientist at NVIDIA.

Yu will present the NVIDIA Research team’s award-winning work at CVPR’s End-to-End Autonomous Driving Workshop on Sunday, June 18, at 10:20 a.m. PT, as well as at the Vision-Centric Autonomous Driving Workshop on Monday, June 19, at 4:00 p.m. PT.

In addition to winning first place in the challenge, NVIDIA will receive an Innovation Award at the event, recognizing its “fresh insights into the development of view transformation modules,” with “substantially improved performance” compared to previous approaches, according to the CVPR workshop committee.

Read NVIDIA’s technical report on the submission.

Safer Vehicles With 3D Occupancy Prediction

While traditional 3D object detection — detecting and representing objects in a scene, often using 3D bounding boxes — is a core task in AV perception, it has its limitations. For example, it lacks expressiveness, meaning the bounding boxes might not represent enough real-world information. It also requires defining taxonomies and ground truths for all possible objects, even ones rarely seen in the real world, such as road hazards that may have fallen off a truck.

In contrast, 3D occupancy prediction provides rich information about the world to a self-driving vehicle’s planning stack, which is necessary for end-to-end autonomous driving.

Software-defined vehicles can be continuously upgraded with new developments that are proven and validated over time. State-of-the-art software updates that evolve from research initiatives, such as the ones recognized at CVPR, are enabling new features and safer driving capabilities.

The NVIDIA DRIVE platform offers a path to production for automakers, providing full-stack hardware and software for safe and secure AV development, from the car to the data center.

More on the CVPR Challenge

The 3D Occupancy Prediction Challenge at CVPR required participants to develop algorithms that solely used camera input during inference. Participants could use open-source datasets and models, facilitating the exploration of data-driven algorithms and large-scale models. The organizers provided a baseline sandbox for the latest state-of-the-art 3D occupancy prediction algorithms in real-world scenarios.

NVIDIA at CVPR

NVIDIA is presenting nearly 30 papers and presentations at CVPR. Experts who’ll discuss autonomous driving include:

View other talks on the agenda and learn more about NVIDIA at CVPR, which runs June 18-22.

Featured image courtesy of OccNet and Occ3D.


Do Pass Go, Do Collect More Games: Xbox Game Pass Coming to GeForce NOW

Xbox Game Pass support is coming to GeForce NOW.

Members will soon be able to play supported PC games from the Xbox Game Pass catalog through NVIDIA’s cloud gaming servers. Learn more about how support for Game Pass and Microsoft Store will roll out in the coming months.

Plus, Age of Empires IV: Anniversary Edition is the first from the world’s most popular real-time strategy franchise to arrive on GeForce NOW.

A Game Pass-tic Partnership

Announced over the weekend, Game Pass members will soon be able to play supported PC games from the Game Pass catalog with GeForce NOW.

We’re working closely with Microsoft to enable members to play select PC titles from Microsoft Store, just as they can today on GeForce NOW with their Steam, Epic Games Store, Ubisoft Connect and GOG.com accounts. Members subscribed to PC Game Pass or Xbox Game Pass Ultimate will be able to stream these select PC titles from the Game Pass library — no downloads or additional purchases required — for instant gaming from the cloud.

With hundreds of PC titles available in the Game Pass catalog, Xbox and PC gamers together can look forward to future GFN Thursdays to see what’s next. PC games from Xbox Game Studios and Bethesda on Steam and Epic Games Store will continue to be released, giving members more ways to play their favorite Xbox titles.

And with the ability for GeForce NOW members to stream at high performance across devices, including PCs, Macs, mobile devices, smart TVs, gaming handheld devices and more, gamers everywhere will be able to take their Xbox PC games wherever they go, along with the over 1,600 titles in the GeForce NOW library.

For the best experience, upgrade to an Ultimate or Priority membership to skip the wait ahead of free members and get into games even faster.

Build Your Empire — and Library

Age of Empires IV on GeForce NOW
Siege the moment!

Conquer the lands in Microsoft’s award-winning Age of Empires franchise this week.

Age of Empires IV: Anniversary Edition takes the world’s most popular real-time strategy game to the next level with familiar and new ways for players to expand their empire. The Anniversary Edition brings all the latest updates, including new civilizations — the Ottomans and Malians — maps, languages, challenges and more. Choose the path to greatness and become a part of history through Campaign Story Mode with a tutorial designed for first-time players, or challenge the world in competitive or cooperative online matches that include ranked seasons.

Ultimate members can rule the kingdom in stunning 4K or ultrawide resolutions, and settle in with up to eight-hour streaming sessions.

What to Play This Week

Dordogne on GeForce NOW
Hand-painted nostalgia in the cloud this summer.

Take a look at the two new games available to stream this week:

  • Dordogne (New release on Steam)
  • Age of Empires IV: Anniversary Edition (Steam)

Before the weekend arrives, check out our question of the week. Let us know your answer on Twitter or in the comments below.
