3D Artist SouthernShotty Creates Wholesome Characters This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll be diving deep into new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

This week In the NVIDIA Studio, we’re highlighting 3D and motion graphics artist SouthernShotty — and scenes from his soon-to-be-released short film, Watermelon Girl.

“The theme of the film is that it’s more rewarding to give to others than to receive yourself,” said the artist. Watermelon Girl aims to create joy and evoke a sense of youth, he said, inspiring artists and viewers to raise each other’s spirits and be a positive force in the world.

“I really hope it encourages people to reach out and help each other through hard times,” SouthernShotty said.

 

SouthernShotty learned to model in 3D as a faster alternative to his favorite childhood art form, claymation.

“Growing up, I did a lot of arts and crafts with my mom and dad, so I loved creating little worlds,” he said.

The Watermelon King’s throne room in ‘Watermelon Girl.’

SouthernShotty brainstormed characters using the mood board app Milanote, which allows users to drag and reposition cards to organize out-of-order thoughts. He also experimented with AI image generators to develop ideas and create reference material for his own artwork.

Once his vision was set, SouthernShotty began creating characters and scenes in Blender. Using an NVIDIA Studio laptop housing a GeForce RTX 3080 GPU, he deployed Blender’s Cycles renderer with RTX-accelerated OptiX ray tracing in the viewport, unlocking interactive photorealistic rendering for modeling.

RTX-accelerated ray tracing and AI denoising in Blender unlock interactivity in large and complex scenes.
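
For creators who like to script their setup, the same Cycles settings can be flipped through Blender’s Python API. Here’s a minimal sketch, assuming a recent Blender 3.x release and an NVIDIA RTX GPU; it mirrors the viewport setup described above rather than SouthernShotty’s exact configuration:

```python
# Run inside Blender (its bundled Python exposes the bpy module).
import bpy

# Point Cycles at the RTX GPU via the OptiX backend.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"
prefs.get_devices()  # refresh the device list
for device in prefs.devices:
    device.use = device.type == "OPTIX"

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.device = "GPU"
scene.cycles.use_denoising = True  # AI denoising for cleaner interactive previews
```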

To make volume rendering more GPU memory-efficient, SouthernShotty took advantage of the baked-in NVIDIA NanoVDB technology, allowing him to quickly adjust large and complex scenes with smooth interactivity. He then added animations to his characters and scenes before exporting renders at lightning speed using Blender Cycles.

SouthernShotty animated ‘Watermelon Girl’ in Blender.

Next, the artist moved into Adobe Substance 3D Painter to build textures characteristic of his custom look, which, he said, is “a tactile vibe that conveys an interesting mix of unconventional materials.”

 

NVIDIA Iray technology and the RTX GPU played a critical role, with RTX-accelerated light and ambient occlusion baking photorealistic textures in mere seconds.

Have to get the lighting just right.

SouthernShotty then imported renders into Substance 3D Stager to apply textures and experiment with colors. Substance 3D Stager’s latest update added support for SBSAR files, enabling faster exports and custom textures that are easy to plug and play, along with new options to apply blending modes and opacity.

Preset lighting options helped him light the scene with ease. With RTX-accelerated denoising, SouthernShotty could tweak and tinker with the scene in a highly interactive viewport with virtually no slowdown — allowing him to focus on creating without the waiting.

He quickly exported final passes in Blender before reaching the compositing stage, where he applied various GPU-accelerated effects in Adobe Photoshop, After Effects, Illustrator and Premiere Pro.

“GeForce RTX GPUs revolutionized the way I work. I no longer spend hours optimizing my scenes, waiting on preview renders, or packaging files for an expensive online render farm,” SouthernShotty said.

As SouthernShotty continues to refine Watermelon Girl, he’ll now have the powerful GeForce RTX 4090 at his disposal, the same GPU that TechRadar said “is more powerful than we even thought possible.”

When it’s time to export the final film, the RTX 40 Series’ dual NVIDIA AV1 encoders, available via the popular Voukoder plugin for Adobe Premiere Pro, will slash his export times and reduce his file sizes.
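
Voukoder itself is configured in Premiere Pro’s export dialog rather than in code, but the same NVENC AV1 path can be sketched from the command line. A minimal illustration, assuming an FFmpeg build new enough to ship the av1_nvenc encoder; the file names are placeholders:

```python
# Hypothetical illustration of hardware AV1 encoding on an RTX 40 Series GPU;
# requires an FFmpeg build with NVENC AV1 support (av1_nvenc).
import subprocess

subprocess.run([
    "ffmpeg", "-i", "final_film.mov",  # placeholder source file
    "-c:v", "av1_nvenc",               # NVENC AV1 hardware encoder
    "-preset", "p5",                   # mid-range quality/speed preset
    "-b:v", "8M",                      # AV1 holds quality at lower bitrates
    "-c:a", "copy",                    # pass audio through untouched
    "final_film_av1.mp4",
], check=True)
```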

SouthernShotty recently tested the GeForce RTX 4090 GPU to see if it’s the best card for Blender and 3D.

In testing, render speeds in Blender are 70% faster than the previous generation.

Performance testing conducted by NVIDIA in September 2022 with desktops equipped with Intel Core i9-12900K with UHD 770, 64 GB RAM. NVIDIA Driver 521.58, Windows 11. Blender 2.93 measures render time of various scenes using Blender OpenData benchmark, with the OptiX render engine.

Check out SouthernShotty’s Linktree for Blender tutorials, social media links and more.

3D and motion graphics artist SouthernShotty.

Join the #From2Dto3D Challenge 

NVIDIA Studio wants to see your 2D to 3D progress.

Join the #From2Dto3D challenge this month for a chance to be featured on NVIDIA Studio’s social media channels, like @Rik_Vasquez.

Entering is easy: Simply post a piece of 2D art next to a corresponding 3D rendition on Instagram, Twitter or Facebook — and be sure to tag #From2Dto3D.

Keep On Trucking: SenSen Harnesses Drones, NVIDIA Jetson, Metropolis to Inspect Trucks

Sensor AI solutions specialist SenSen has turned to the NVIDIA Jetson edge AI platform to help regulators track heavy vehicles moving across Australia.

Australia’s National Heavy Vehicle Regulator, or NHVR, has a big job — ensuring the safety of truck drivers across some of the world’s most sparsely populated regions.

The regulator is now harnessing AI to improve safety and operational efficiency for the trucking industry, even while trucks are on the move, using drones as well as compact, portable solar-powered trailers and vehicle-mounted automatic number-plate recognition cameras.

That’s a big change from current systems, which gather information after the fact, when it’s too late to use it to disrupt high-risk journeys and direct on-the-road compliance in real time.

Current license plate recognition systems are often fixed in place and can’t be moved to areas with the most traffic.

NHVR is developing and deploying real-time mobile cameras on multiple platforms to address this challenge, including vehicle-mounted, drone-mounted and roadside trailer-mounted systems.

The regulator turned to the Australia-based SenSen, an NVIDIA Metropolis partner, to build these systems for the pilot program, including two trailers, a pair of vehicles and a drone.

“SenSen technology helps the NHVR support affordable, adaptable and accurate road safety in Australia,” said Nathan Rogers, director of smart city solutions for Asia Pacific at SenSen.

NVIDIA Jetson helps SenSen create lightweight systems that have low energy needs and a small footprint, while being able to handle multiple camera streams integrated with lidar and inertial sensors. These systems operate solely on solar and battery power and are rapidly deployable.

NVIDIA technologies also play a vital role in the systems’ ability to intelligently analyze data fused from multiple cameras and sensors.

To train the AI application, SenSen relies on NVIDIA GPUs and the NVIDIA TAO Toolkit, fast-tracking AI model development with transfer learning while refining accuracy and optimizing model performance to power the object-detection application.

To run the AI app, SenSen relies on the NVIDIA DeepStream software development kit for highly optimized video analysis in real time on NVIDIA Jetson Nano- and AGX Xavier-based systems.
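
SenSen’s production pipeline isn’t public, but a DeepStream application is typically assembled as a GStreamer graph of NVIDIA plugins. Here’s a minimal, hypothetical sketch in Python; the input file, resolution and nvinfer config path are placeholders:

```python
# A toy DeepStream-style pipeline: decode video, batch frames, run TensorRT
# inference (nvinfer) and overlay the detections (nvdsosd).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=truck_cam.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=plate_detector_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the stream ends or an error is posted on the bus.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```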

These mobile systems promise to help safety and compliance officers identify and disrupt high-risk journeys in real time.

This allows clients to get accurate data reliably, quickly identify operators who obey road rules and help policymakers make better decisions about road safety over the long term.

“Using this solution to obtain real-time heavy vehicle sightings from any location in Australia allows us to further digitize our operations and create a more efficient and safer heavy-vehicle industry in Australia,” said Paul Simionato, director of the southern region at NHVR.

The ultimate goal: waste less time tracking repeat compliant vehicles, present clearer information on vehicles and loads, and use vehicles as a mobile intelligence tool.

And perhaps best of all, operators who are consistently compliant can expect to be less regularly intercepted, creating a strong incentive for the industry to increase compliance.

What Are Graph Neural Networks?

When two technologies converge, they can create something new and wonderful, as when cellphones and browsers were fused to forge smartphones.

Today, developers are applying AI’s ability to find patterns to massive graph databases that store information about relationships among data points of all sorts. Together they produce a powerful new tool called graph neural networks.

What Are Graph Neural Networks?

Graph neural networks apply the predictive power of deep learning to rich data structures that depict objects and their relationships as points connected by lines in a graph.

In GNNs, data points are called nodes, which are linked by lines — called edges — with elements expressed mathematically so machine learning algorithms can make useful predictions at the level of nodes, edges or entire graphs.
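
For a concrete picture, here’s a toy graph expressed with PyTorch Geometric, one of the GNN frameworks covered later in this post; the features, edges and labels are invented purely for illustration:

```python
import torch
from torch_geometric.data import Data

# Three nodes, each with a 2-dim feature vector.
x = torch.tensor([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])

# Edges as (source, target) index pairs; both directions for an undirected graph.
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])

# Per-node labels, for a node-level prediction task.
y = torch.tensor([0, 1, 0])

data = Data(x=x, edge_index=edge_index, y=y)
print(data.num_nodes, data.num_edges)  # 3 4
```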

What Can GNNs Do?

An expanding list of companies is applying GNNs to improve drug discovery, fraud detection and recommendation systems. These applications and many more rely on finding patterns in relationships among data points.

Researchers are exploring use cases for GNNs in computer graphics, cybersecurity, genomics and materials science. A recent paper reported how GNNs used transportation maps as graphs to improve predictions of arrival time.

Many branches of science and industry already store valuable data in graph databases. With deep learning, they can train predictive models that unearth fresh insights from their graphs.

Knowledge from many fields of science and industry can be expressed as graphs.

“GNNs are one of the hottest areas of deep learning research, and we see an increasing number of applications take advantage of GNNs to improve their performance,” said George Karypis, a senior principal scientist at AWS, in a talk earlier this year.

Others agree. GNNs are “catching fire because of their flexibility to model complex relationships, something traditional neural networks cannot do,” said Jure Leskovec, an associate professor at Stanford, speaking in a recent talk, where he showed the chart below of AI papers that mention them.

Recent papers on graph neural networks

Who Uses Graph Neural Networks?

Amazon reported in 2017 on its work using GNNs to detect fraud. In 2020, it rolled out a public GNN service that others could use for fraud detection, recommendation systems and other applications.

To maintain its customers’ high level of trust, Amazon Search employs GNNs to detect malicious sellers, buyers and products. Using NVIDIA GPUs, it’s able to explore graphs with tens of millions of nodes and hundreds of millions of edges while reducing training time from 24 hours to five.

For its part, biopharma company GSK maintains a knowledge graph with nearly 500 billion nodes that is used in many of its machine learning models, said Kim Branson, the company’s global head of AI, speaking on a panel at a GNN workshop.

LinkedIn uses GNNs to make social recommendations and understand the relationships between people’s skills and their job titles, said Jaewon Yang, a senior staff software engineer at the company, speaking on another panel at the workshop.

“GNNs are general-purpose tools, and every year we discover a bunch of new apps for them,” said Joe Eaton, a distinguished engineer at NVIDIA who is leading a team applying accelerated computing to GNNs. “We haven’t even scratched the surface of what GNNs can do.”

In yet another sign of the interest in GNNs, videos of a course on them that Leskovec teaches at Stanford have received more than 700,000 views.

How Do GNNs Work?

To date, deep learning has mainly focused on images and text, types of structured data that can be described as sequences of words or grids of pixels. Graphs, by contrast, are unstructured. They can take any shape or size and contain any kind of data, including images and text.

Using a process called message passing, GNNs organize graphs so machine learning algorithms can use them.

Message passing embeds into each node information about its neighbors. AI models employ the embedded information to find patterns and make predictions.
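
As a rough sketch of what one round of message passing computes, here’s a plain NumPy toy that blends each node’s features with the average of its neighbors’ features; real GNN variants use learned, more elaborate aggregations:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a toy 4-node ring graph
feats = np.random.rand(4, 8)               # one 8-dim feature vector per node

# Build each node's neighbor list from the undirected edges.
neighbors = {i: [] for i in range(4)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

# One message-passing round: blend each node's features with the mean of
# its neighbors' features, embedding local structure into the node.
updated = np.stack([
    0.5 * feats[i] + 0.5 * feats[neighbors[i]].mean(axis=0)
    for i in range(4)
])
```

Stacking a couple of such rounds is what lets a node’s embedding reflect its two-hop neighborhood.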

Example dataflows in three types of GNNs.

For example, recommendation systems use a form of node embedding in GNNs to match customers with products. Fraud detection systems use edge embeddings to find suspicious transactions, and drug discovery models compare entire graphs of molecules to find out how they react to each other.

GNNs are unique in two other ways: They use sparse math, and the models typically only have two or three layers. Other AI models generally use dense math and have hundreds of neural-network layers.
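
A model that shallow fits in a few lines. Below is a minimal sketch of a two-layer graph convolutional network written with PyTorch Geometric’s GCNConv layer; the layer sizes are arbitrary, and the convolutions aggregate neighbors with sparse matrix operations under the hood:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, num_features, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)  # first message-passing layer
        self.conv2 = GCNConv(hidden, num_classes)   # second, and that's the model

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)            # per-node class scores

model = TwoLayerGCN(num_features=2, hidden=16, num_classes=2)
```

With the toy Data object shown earlier in this post, model(data.x, data.edge_index) would return one score per class for each node.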

A GNN pipeline has a graph as an input and predictions as outputs.

What’s the History of GNNs?

A 2009 paper from researchers in Italy was the first to give graph neural networks their name. But it took eight years before two researchers in Amsterdam demonstrated their power with a variant they called a graph convolutional network (GCN), which is one of the most popular GNNs today.

The GCN work inspired Leskovec and two of his Stanford grad students to create GraphSage, a GNN that showed new ways the message-passing function could work. He put it to the test in the summer of 2017 at Pinterest, where he served as chief scientist.

GraphSage pioneered powerful aggregation techniques for message passing in GNNs.

Their implementation, PinSage, was a recommendation system that packed in 3 billion nodes and 18 billion edges, outperforming other AI models of that time.

Pinterest applies it today on more than 100 use cases across the company. “Without GNNs, Pinterest would not be as engaging as it is today,” said Andrew Zhai, a senior machine learning engineer at the company, speaking on an online panel.

Meanwhile, other variants and hybrids have emerged, including graph recurrent networks and graph attention networks (GATs). GATs borrow the attention mechanism defined in transformer models to help GNNs focus on portions of datasets that are of greatest interest.

One overview of GNNs depicted a family tree of their variants.

Scaling Graph Neural Networks

Looking forward, GNNs need to scale in all dimensions.

Organizations that don’t already maintain graph databases need tools to ease the job of creating these complex data structures.

Those who use graph databases know they’re growing in some cases to have thousands of features embedded on a single node or edge. That presents challenges of efficiently loading the massive datasets from storage subsystems through networks to processors.

“We’re delivering products that maximize the memory and computational bandwidth and throughput of accelerated systems to address these data loading and scaling issues,” said Eaton.

As part of that work, NVIDIA announced at GTC it is now supporting PyTorch Geometric (PyG) in addition to the Deep Graph Library (DGL). These are two of the most popular GNN software frameworks.

NVIDIA provides multiple tools to accelerate building GNNs.

NVIDIA-optimized DGL and PyG containers are performance-tuned and tested for NVIDIA GPUs. They provide an easy place to start developing applications using GNNs.

To learn more, watch a talk on accelerating and scaling GNNs with DGL and GPUs by Da Zheng, a senior applied scientist at AWS. In addition, NVIDIA engineers hosted separate talks on accelerating GNNs with DGL and PyG.

To get started today, sign up for our early access program for DGL and PyG.

Get in Touch With New Mobile Gaming Controls on GeForce NOW

GeForce NOW expands touch control support to 13 more games this GFN Thursday. That means it’s easier than ever to take PC gaming on the go using mobile devices and tablets. The new “Mobile Touch Controls” row in the GeForce NOW app is the easiest way for members to find which games put the action right at their fingertips.

For a new way to play, members can soon experience these enhanced mobile games and more, streaming on the newly announced Razer Edge 5G handheld gaming device.

And since GFN Thursday means more games every week, get ready for eight new titles in the GeForce NOW library, including A Plague Tale: Requiem.

Plus, the latest GeForce NOW Android app update is rolling out now, adding Adaptive VSync support in select games to reduce frame stuttering and screen tearing.

Victory at Your Fingertips

Gamers on the go, rejoice! Enhanced mobile touch controls are now available for more than a dozen additional GeForce NOW games when playing on mobile devices and tablets.

Make your gaming mobile with the new row of touch-control titles on the cloud.

These games join Fortnite and Genshin Impact as touch-enabled titles in the GeForce NOW library, removing the need to bring a controller when away from your battlestation.

Here’s the full list of games with touch-control support streaming on GeForce NOW on mobile devices and tablets:

Mobile and Tablet

Tablet Only

To get right into gaming, use the new “Mobile Touch Controls” row in the GeForce NOW app to find your next adventure.

The Razer Edge of Glory

Announced last week at RazerCon, the new Razer Edge 5G handheld device launches in January 2023 with the GeForce NOW app installed right out of the box.

Stunning visuals, console-quality control and 1,400+ games through GeForce NOW.

The Razer Edge 5G is a dedicated 5G console, featuring a 6.8-inch AMOLED touchscreen display that pushes up to a 144Hz refresh rate at 1080p — perfect for GeForce NOW RTX 3080 members who can stream at ultra-low latency and 120 frames per second.

The Razer Edge 5G is powered by the latest Snapdragon G3x Gen 1 Gaming Platform and runs on Verizon 5G Ultra Wideband. With a beautiful screen and full connectivity, gamers will have another great way to stream their PC gaming libraries from Steam, Epic, Ubisoft, Origin and more using GeForce NOW. Members can reserve the upcoming Razer Edge 5G ahead of its January 2023 release.

Razer’s new handheld joins a giant list of devices that support GeForce NOW, including PCs, Macs, Chromebooks, iOS Safari, Android mobile and TV devices, and NVIDIA SHIELD TV.

Members can also stream their PC libraries on the Logitech G Cloud handheld and Cloud Gaming Chromebooks from Asus, Acer and Lenovo, all available beginning this week.

Oh, Look – More Games!

That’s not all — every GFN Thursday brings a new pack of games.

A heart-wrenching tale continues.

Start a new adventure with the newly released A Plague Tale: Requiem, part of eight new titles streaming this week. 

  • A Plague Tale: Requiem (New release on Steam and Epic Games)
  • Batora – Lost Haven (New release on Steam, Oct. 20)
  • Warhammer 40,000: Shootas, Blood & Teef (New release on Steam and Epic Games, Oct. 20)
  • The Tenants (New release on Steam, Oct. 20)
  • FAITH: The Unholy Trinity (New release on Steam, Oct. 21)
  • Evoland Legendary Edition (Free on Epic Games, Oct. 20-27)
  • Commandos 3 – HD Remaster (Steam and Epic Games)
  • Monster Outbreak (Steam and Epic Games)

How are you making your gaming mobile? Let us know what device you’d take on a trip with you on Twitter or in the comments below.

How Tarteel Uses AI to Help Arabic Learners Perfect Their Pronunciation

There are some 1.8 billion Muslims, but only 16% or so of them speak Arabic, the language of the Quran.

This is partly because many Muslims struggle to find qualified instructors to give them feedback on their Quran recitation.

Enter today’s guest and his company Tarteel, a member of the NVIDIA Inception program for startups.

Tarteel was founded with the mission of strengthening the relationship Muslims have with the Quran.

The company is accomplishing this with a fusion of Islamic principles and cutting-edge technology.

AI Podcast host Noah Kravitz spoke with Tarteel CEO Anas Abou Allaban to learn more.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species With NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

NVIDIA, Oracle CEOs in Fireside Chat Light Pathways to Enterprise AI

Speeding adoption of enterprise AI and accelerated computing, Oracle CEO Safra Catz and NVIDIA founder and CEO Jensen Huang discussed their companies’ expanding collaboration in a fireside chat live streamed today from Oracle CloudWorld in Las Vegas.

Oracle and NVIDIA announced plans to bring NVIDIA’s full accelerated computing stack to Oracle Cloud Infrastructure (OCI). It includes NVIDIA AI Enterprise, NVIDIA RAPIDS for Apache Spark and NVIDIA Clara for healthcare.

In addition, OCI will deploy tens of thousands more NVIDIA GPUs to its cloud service, including A100 and upcoming H100 accelerators.

“I’m unbelievably excited to announce our renewed partnership and the expanded capabilities our cloud has,” said Catz to a live and online audience of several thousand customers and developers.

“We’re thrilled you’re bringing your AI solutions to OCI,” she told Huang.

The Power of Two

The combination of Oracle’s heritage in data and its powerful infrastructure with NVIDIA’s expertise in AI will give users traction facing tough challenges ahead, Huang said.

“Industries around the world need big benefits from our industry to find ways to do more without needing to spend more or consume more energy,” he said.

Panorama of the crowd at Oracle CloudWorld in Las Vegas.

AI and GPU-accelerated computing are delivering these benefits at a time when traditional methods of increasing performance are slowing, he added.

“Data that you harness to find patterns and relationships can automate the way you work and the products and services you deliver — the next ten years will be some of the most exciting times in our industry,” Huang said.

“I’m confident all workloads will be accelerated for better performance, to drive costs out and for energy efficiency,” he added.

The capability of today’s software and hardware, coming to the cloud, “is something we’ve dreamed about since our early days,” said Catz, who joined Oracle in 1999 and has been its CEO since 2014.

Benefits for Healthcare and Every Industry

“One of the most critical areas is saving lives,” she added, pointing to the two companies’ work in healthcare.

A revolution in digital biology is transforming healthcare from a science-driven industry to one powered by both science and engineering. NVIDIA Clara provides a platform for that work, used by healthcare experts around the world, Huang said.

“We can now use AI to understand the language of proteins and chemicals, all the way to gene screening and quantum chemistry — amazing breakthroughs are happening now,” he said.

AI promises similar advances for every business. The automotive industry, for example, is becoming a tech industry as it discovers its smartphone moment, he said.

“We see this all over with big breakthroughs in natural language processing and large language models that can encode human knowledge to apply to all kinds of skills they were never trained to do,” he said.

Meta’s Grand Teton Brings NVIDIA Hopper to Its Data Centers

Meta today announced its next-generation AI platform, Grand Teton, designed in collaboration with NVIDIA.

Compared to the company’s previous generation Zion EX platform, the Grand Teton system packs in more memory, network bandwidth and compute capacity, said Alexis Bjorlin, vice president of Meta Infrastructure Hardware, at the 2022 OCP Global Summit, an Open Compute Project conference.

AI models are used extensively across Facebook for services such as news feed, content recommendations and hate-speech identification, among many other applications.

“We’re excited to showcase this newest family member here at the summit,” Bjorlin said, adding her thanks to NVIDIA for its deep collaboration on Grand Teton’s design and continued support of OCP.

Designed for Data Center Scale

Named after the 13,000-foot mountain that crowns one of Wyoming’s two national parks, Grand Teton uses NVIDIA H100 Tensor Core GPUs to train and run AI models that are rapidly growing in their size and capabilities, requiring greater compute.

The NVIDIA Hopper architecture, on which the H100 is based, includes a Transformer Engine to accelerate work on these neural networks, which are often called foundation models because they can address an expanding set of applications from natural language processing to healthcare, robotics and more.

The NVIDIA H100 is designed for performance as well as energy efficiency. H100-accelerated servers, when connected with NVIDIA networking across thousands of servers in hyperscale data centers, can be 300x more energy efficient than CPU-only servers.

“NVIDIA Hopper GPUs are built for solving the world’s tough challenges, delivering accelerated computing with greater energy efficiency and improved performance, while adding scale and lowering costs,” said Ian Buck, vice president of hyperscale and high performance computing at NVIDIA. “With Meta sharing the H100-powered Grand Teton platform, system builders around the world will soon have access to an open design for hyperscale data center compute infrastructure to supercharge AI across industries.”

Mountain of a Machine

Grand Teton sports 2x the network bandwidth and 4x the bandwidth between host processors and GPU accelerators compared to Meta’s prior Zion system, Meta said.

The added network bandwidth enables Meta to create larger clusters of systems for training AI models, Bjorlin said. It also packs more memory than Zion to store and run larger AI models.

Simplified Deployment, Increased Reliability

Packing all these capabilities into one integrated server “dramatically simplifies deployment of systems, allowing us to install and provision our fleet much more rapidly, and increase reliability,” said Bjorlin.

Adobe MAX Kicks Off With Creative App Updates and 3D Artist Anna Natter Impresses This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll be diving deep into new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Adobe MAX is inspiring artists around the world to bring their ideas to life. The leading creative conference runs through Thursday, Oct. 20, in person and virtually.

With the recent release of the NVIDIA GeForce RTX 4090 GPU and its third-generation RT Cores, fourth-generation Tensor Cores and eighth-generation NVIDIA Dual AV1 Encoder, NVIDIA is ready to elevate creative workflows for Adobe artists.

Plus, artist Anna Natter transforms 2D photos into full-fidelity 3D assets using the power of AI and state-of-the-art photogrammetry technology this week In the NVIDIA Studio.

The new Adobe features, the latest NVIDIA Studio laptops and more are backed by the October NVIDIA Studio Driver available for download today.

Unleash MAXimum Performance

Press and content creators have been putting the new GeForce RTX 4090 GPU through a wide variety of creative workflows — here’s a sampling of their reviews:

The new GeForce RTX 4090 GPU.

“NVIDIA’s new flagship graphics card brings massive gains in rendering and GPU compute-accelerated content creation.” (Forbes)

“GeForce RTX 4090 just puts on a clinic, by absolutely demolishing every other card here. In a lot of cases it’s almost cutting rendering times in half.” (Hardware Canucks)

“If you care about rendering performance to the point that you always lock your eyes on a top-end target, then the RTX 4090 is going to prove to be an absolute screamer.” (Techgage)

“The NVIDIA GeForce RTX 4090 is more powerful than we even thought possible.” (TechRadar)

“As for the 3D performance of Blender and V-Ray, it delivers a nearly 2x performance increase, which makes it undoubtedly the most powerful weapon for content creators.” (XFastest)

“NVIDIA has been providing Studio drivers for GeForce series graphics cards; they added dual hardware encoders and other powerful tools to help creators maximize their creativity. We can say it’s a new-gen GPU king suitable for top-notch gamers and creators.” (Techbang)

Pick up the GeForce RTX 4090 GPU or a pre-built system today by heading to our Product Finder.

Enjoy MAXimum Creativity

Adobe is all in on the AI revolution, adopting AI-powered features across its lineup of Adobe Creative Cloud and Substance 3D apps. The updates simplify repetitive tasks and make advanced effects accessible.

Creators equipped with GeForce RTX GPUs, especially those part of the new RTX 40 Series, are primed to benefit from remarkable GPU acceleration of AI features in Adobe Creative Cloud.

Adobe Premiere Pro

Adobe Premiere Pro is getting RTX acceleration for AI features, resulting in significant performance boosts on AI effects. For example, the Unsharp Mask filter will see a 4.5x speedup, and the Posterize Time effect more than 2x, compared with running them on a CPU (performance measured on a GeForce RTX 3090 Ti and an Intel Core i9-12900K).

Adobe Photoshop

The new beta Photo Restoration feature uses AI-powered neural filters to process imagery, add tone and minimize the effects of film grain. Photo Restoration can be applied to a single image or batches of imagery to quickly and conveniently improve the picture quality of an artist’s portfolio.

Photo Restoration adds tone and minimizes the effects of film grain in Adobe Photoshop.

Photoshop’s AI-powered Object Selection Tool allows artists to apply a selection to a particular object within an image. The user can manipulate the selected object, add filters and fine-tune details.

The AI-powered Object Selection Tool in Adobe Photoshop saves artists the trouble of tedious masking.

This saves artists the huge amount of time it takes to mask imagery. In beta, the feature runs 3x faster on the GeForce RTX 3060 Ti than on Intel UHD Graphics 770 and 4x faster than on the Apple M1 Ultra.

Adobe Photoshop Lightroom Classic

The latest version of Adobe Photoshop Lightroom Classic makes it easy for users to create stunning final images with powerful new AI-powered masking tools.

With just a few clicks, these AI masks can identify and mask key elements within an image, including the main subject, sky and background, and can even select individuals within an image and apply masks to adjust specific areas, such as hair, face, eyes or lips.

Adobe Substance 3D

Substance 3D Modeler is now available in general release. Modeler can help create concept art — it’s perfect for sketching and prototyping, blocking out game levels, crafting detailed characters and props, or sculpting an entire scene in a single app. Its ability to switch between desktop and virtual reality is especially useful, depending on project needs and the artist’s preferred style of working. 

The ability to switch between desktop and virtual reality is especially useful in Adobe Substance 3D Modeler.

Substance 3D Sampler added its photogrammetry feature, currently in private beta, which automatically converts photos of real-world objects into textured 3D models without the need to fiddle with sliders or tweak values. With a few clicks, the artist can now create 3D assets. This feature serves as a bridge for 2D artists looking to make the leap to 3D.

Adobe Creative Cloud and Substance 3D

These advancements join the existing lineup of GPU-accelerated and AI-enhanced Adobe apps, with features that continue to evolve and improve:

  • Adobe Camera RAW — AI-powered Select Objects and Select People masking tools
  • After Effects — Improved AI-powered Scene Edit Detection and H.264 rendering for faster exports with hardware-accelerated output
  • Illustrator — Substance 3D materials plugin for faster access to assets and direct export of Universal Scene Description (USD) files
  • Lightroom Classic — AI-powered Select Background and Select Sky masking tools
  • Photoshop — Substance 3D materials plugin
  • Photoshop Elements — AI-powered Moving Elements add motion to a still image
  • Premiere Elements — AI-powered Artistic Effects transform clips with effects inspired by famous works of art or popular art styles
  • Premiere Pro — Adds Auto Color to apply intelligent color corrections (exposure, white balance and contrast) that enhance footage, plus GPU-accelerated Lumetri scopes and faster Motion Graphics Templates
  • Substance 3D Painter — SBSAR Exports for faster exports and custom textures that are easy to plug and play, plus new options to apply blending modes and opacity

Try these features on an NVIDIA Studio system equipped with a GeForce RTX GPU, and experience the ease and speed of RTX-accelerated creation.

October NVIDIA Studio Driver 

This NVIDIA Studio Driver provides optimal support for the latest creative applications, including Topaz Sharpen AI and DXO Photo. In addition, this NVIDIA Studio Driver supports the new application updates announced at Adobe MAX, including Premiere Pro, Photoshop, Photoshop Lightroom Classic and more.

Receive Studio Driver notifications by downloading GeForce Experience or NVIDIA RTX Experience, and by subscribing to the NVIDIA Studio newsletter.

Download the Studio Driver today.

Embrace MAXimum Inspiration

Anna Natter, this week’s featured In the NVIDIA Studio artist, is a 3D artist at heart who likes to experiment with different mediums. She has a fascination with AI — both the technology it’s built on and its ever-expanding role in content creation.

“It’s an interesting debate where the ‘art’ starts when it comes to AI,” said Natter. “After almost a year of playing with AI, I’ve been working on developing my own style and figuring out how I can make it mine.”

AI meets RTX-accelerated Photoshop Neural Filters.

In the image above, Natter applied Photoshop Neural Filters, which were accelerated by her GeForce RTX 3090 GPU. “It’s always a good idea to use your own art for filters, so you can give everything a unique touch. So if you ask me if this is my art or not, it 100% is!” said the artist.

Natter has a strong passion for photogrammetry, she said, as virtually anything can be preserved in 3D. Photogrammetry features have the potential to save 3D artists countless hours. “I create hyperrealistic 3D models of real-life objects which I could not have done by hand,” she said. “Well, maybe I could’ve, but it would’ve taken forever.”

The artist even scanned her sweet pup Szikra to create a virtual 3D copy of her that will last forever.

Szikra is forever memorialized in 3D, thanks to the beta photogrammetry feature in Sampler.

To test the private beta photogrammetry feature in Substance 3D Sampler, Natter created this realistic tree model with a single series of images.

2D to 3D made easy with Substance 3D Sampler.

Natter captured a video of a tree in a nearby park in her home country of Germany. The artist then uploaded the footage to Adobe After Effects, exporting the frames into an image sequence. After Effects contains over 30 features accelerated by RTX GPUs, which improved Natter’s workflow.

Once she was happy with the 3D image quality, Natter dropped the model from Substance 3D Sampler into Substance 3D Stager. The artist then applied true-to-life materials and textures to the scene and color matched the details to the scanned model with the Stager color picker.

Selecting areas to apply textures in Adobe Substance 3D Stager.

Natter then lit the scene with a natural outdoor High Dynamic Range Image (HDRI), one of the pre-built environment-lighting options in 3D Stager. “What I really like about the Substance 3D suite is that it cuts the frustration out of my workflow, and I can just do my thing in a flow state, without interruption, because everything is compatible and works together so well,” she said.

Fine details like adding bugs from Adobe Stock helped Natter nail the scene.

The GeForce RTX 3090 GPU accelerated her workflow within 3D Stager, with RTX-accelerated and AI-powered denoising in the viewport unlocking interactivity and smooth movement. When it came time to render, RTX-accelerated ray tracing quickly delivered photorealistic 3D renders, up to 7x faster than with CPU alone.

“I’ve always had an NVIDIA GPU since I’ve been working in video editing for the past decade and wanted hardware that works best with my apps. The GeForce RTX 3090 has made my life so much easier, and everything gets done so much faster.” — 3D artist Anna Natter

Captions can be easily applied in Adobe Substance 3D Stager.

Natter can’t contain her excitement for the eventual general release of the Sampler photogrammetry feature. “As someone who has invested so much in 3D design, I literally can’t wait to see what people are going to create with this,” she said.

3D designer and creative explorer Anna Natter.

Check out Natter’s Behance page.

MAXimum Exposure in the #From2Dto3D Challenge 

NVIDIA Studio wants to see your 2D to 3D progress!

Join the #From2Dto3D challenge this month for a chance to be featured on the NVIDIA Studio social media channels, like @JennaRambles, whose goldfish sketch was transformed into a beautiful 3D image.

Entering is easy. Simply post a 2D piece of art next to a 3D rendition of it on Instagram, Twitter or Facebook. And be sure to tag #From2Dto3D.

Souped-Up Auto Quotes: ProovStation Delivers GPU-Driven AI Appraisals

Vehicle appraisals are getting souped up with a GPU-accelerated AI overhaul.

ProovStation, a four-year-old startup based in Lyon, France, is taking on the ambitious computer-vision quest of automating vehicle inspection and repair estimates, aiming AI-driven super-high-resolution stations at businesses worldwide.

It recently launched three of its state-of-the-art vehicle inspection scanners at French retail giant Carrefour’s Montesson, Vénissieux and Aix-en-Provence locations. The ProovStation drive-thru vehicle scanners are deployed at Carrefour parking lots for drivers to pull in to experience the free service.

The self-serve stations are designed for users to provide vehicle info and ride off with a value report and repair estimate in under two minutes. The stations also enable drivers to obtain a dealer offer to buy their car in as little as a few seconds, which holds promise for consumers as well as used car dealers and auctioneers.

Much is at play across cameras and sensors, high-fidelity graphics, multiple damage detection models, and models and analytics to turn damage detection into repair estimates and purchase offers.

“People often ask me how I’ve gotten so much AI going in this, and I tell them it’s because I work with NVIDIA Inception,” said Gabriel Tissandier, general manager and chief product officer at ProovStation.

Tapping into NVIDIA GPUs and NVIDIA Metropolis software development kits enables ProovStation to scan 5GB of image and sensor data per car and apply multiple vision AI detection models simultaneously, among other tasks.

ProovStation uses the NVIDIA DeepStream SDK to build its sophisticated vision AI pipeline and optimizes AI inference throughput using Triton Inference Server.

The setup enables ProovStation to run inference fast enough for quick vehicle-analysis turnarounds in this groundbreaking industrial edge AI application.
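
ProovStation’s models and tensor layouts aren’t public, but the Triton side of such a setup typically looks like the sketch below; the server address, model name, input and output names, and shapes are all hypothetical:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Stand-in for one preprocessed scan image (NCHW float32).
image = np.random.rand(1, 3, 1024, 1024).astype(np.float32)

inputs = [httpclient.InferInput("images", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("detections")]

result = client.infer(model_name="damage_detector", inputs=inputs, outputs=outputs)
detections = result.as_numpy("detections")  # e.g., boxes plus damage-class scores
```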

Driving Advances: Groupe Bernard Dealerships

ProovStation is deploying its stations at a quick clip. That’s been possible in part because, early on, founder Gabriel Tissandier connected with an ideal ally in Cedric Bernard, whose family’s Groupe Bernard car dealerships and services first invested in 2017 to boost its own operations.

Groupe Bernard has collected massive amounts of image data from its own businesses for ProovStation prototypes. Bernard left the family business to join Tissandier as the startup’s co-founder and CEO, and Anton Komyza joined them as a third co-founder. It’s been a wild ride of launches since.

ProovStation is a member of NVIDIA Inception, a program that accelerates cutting-edge startups with access to hardware and software platforms, technical training and AI ecosystem support.

Launching AI Stations Across Markets

ProovStation has deployed 35 scanning stations into operation so far, and it expects to double that number next year. It has launched its powerful edge AI-driven stations in Europe and the United States.

Early adopters include Groupe Bernard, U.K. vehicle sales site BCA Marketplace, OK Mobility car rentals in Spain and Germany’s Sixt car rentals. It also works with undisclosed U.S. automakers and a major online vehicle seller.

Car rental service Sixt has installed a station at Lyon Saint-Exupéry Airport with the aim of making car pickups and returns easier.

“Sixt wants to really change the experience of renting a car,” said Tissandier.

Creating an ‘AI Super Factory’ for Damage Datasets

ProovStation has built up data science expertise and a dedicated team to handle its many specialized datasets for the difficult challenge of damage detection.

“To go from a damage review to a damage estimate can sometimes be really tricky,” said Tissandier.

ProovStation has a team of 10 experts in its AI Super Factory dedicated to labeling data with its own specialized software. They have processed more than 2 million images with labels so far, defining a taxonomy of more than 100 types of damages and more than 100 types of parts.

“We knew we needed this level of accuracy to make it reliable and efficient for businesses. Labeling images is super important, especially for us, so we invented some ways to label specific damages,” he said.

Tissandier said that the data science team members and others are brought up to speed on AI with courses from the NVIDIA Deep Learning Institute.

Delivering Data Collection With NVIDIA Industrial Edge AI

ProovStation scans a vehicle with 10 different cameras in its station, capturing 300 images — or 5GB of data — to run through its detection models. NVIDIA GPUs enable ProovStation’s AI inference pipeline to deliver detection, damage assessment, localization, measurements and estimates in 90 seconds. Wheels are scanned with an electromagnetic frequency device from tire company Michelin for wear estimates. All of it runs on the NVIDIA edge AI system.

Using two NVIDIA GPUs in a station allows ProovStation to process all of this in high-resolution image analysis for improved accuracy. That data is also transferred to the cloud so ProovStation’s data science team can use it for further training.

Cameras, lighting and positioning are big issues. Detection models can be thrown off by things like glare on glossy car surfaces. ProovStation uses a deflectometry model, which allows it to run detection while projecting lines onto vehicle surfaces, highlighting spots where problems appear in the lines.

It’s a challenging problem to solve that leads to business opportunities.

“All of the automotive industry is inspecting cars to provide services — to sell you new tires, to repair your car or windshield, it always starts with an inspection,” said Tissandier.

AI Supercomputer to Power $200 Million Oregon State University Innovation Complex

As a civil engineer, Scott Ashford used explosives to make the ground under Japan’s Sendai airport safer in an earthquake. Now, as the dean of the engineering college at Oregon State University, he’s at ground zero of another seismic event.

In its biggest fundraising celebration in nearly a decade, Oregon State announced plans today for a $200 million center where faculty and students can plug into resources that will include one of the world’s fastest university supercomputers.

The 150,000-square-foot center, due to open in 2025, will accelerate work at Oregon State’s top-ranked programs in agriculture, computer sciences, climate science, forestry, oceanography, robotics, water resources, materials sciences and more with the help of AI.

A Beacon in AI, Robotics

In honor of a $50 million gift to the OSU Foundation from NVIDIA’s founder and CEO and his wife — who earned their engineering degrees at OSU and met in one of its labs — it will be named the Jen-Hsun and Lori Huang Collaborative Innovation Complex (CIC).

“The CIC and new supercomputer will help Oregon State be recognized as one of the world’s leading universities for AI, robotics and simulation,” said Ashford, whose engineering college includes more than 10,000 of OSU’s 35,000 students.

“We discovered our love for computer science and engineering at OSU,” said Jen-Hsun and Lori Huang. “We hope this gift will help inspire future generations of students also to fall in love with technology and its capacity to change the world.

“AI is the most transformative technology of our time,” they added. “To harness this force, engineering students need access to a supercomputer, a time machine, to accelerate their research. This new AI supercomputer will enable OSU students and researchers to make very important advances in climate science, oceanography, materials science, robotics and other fields.”

A Hub for Students

With an extended-reality theater, robotics and drone playground and a do-it-yourself maker space, the new complex is expected to attract students from across the university. “It has the potential to transform not only the college of engineering, but the entire university, and have a positive economic and environmental impact on the state and the nation,” Ashford said.

The three-story facility will include a clean room, as well as labs for materials scientists, environmental researchers and more.

Artist’s rendering of the Jen-Hsun and Lori Huang Collaborative Innovation Complex.

Ashford expects that over the next decade the center will attract top researchers, as well as research projects potentially worth hundreds of millions of dollars.

“Our donors and university leaders are excited about investing in a collaborative, transdisciplinary approach to problem solving and discovery — it will revitalize our engineering triangle and be an amazing place to study and conduct research,” he said.

A Forest of Opportunities

He gave several examples of the center’s potential. Among them:

  • Environmental and electronics researchers may collaborate to design and deploy sensors and use AI to analyze their data, finding where in the ocean or forest hard-to-track endangered species are breeding so their habitats can be protected.
  • Students can use augmented reality to train in simulated clean rooms on techniques for making leading-edge chips. Federal and Oregon state officials aim to expand workforce development for the U.S. semiconductor industry, Ashford said.
  • Robotics researchers could create lifelike simulations of their drones and robots to accelerate training and testing. (Cassie, a biped robot designed at OSU, just made Guinness World Records for the fastest 100-meter dash by a bot.)
  • Students at OSU and its sister college in Germany, DHBW-Ravensburg, could use NVIDIA Omniverse — a platform for building and operating metaverse applications and connecting their 3D pipelines — to enhance design of their award-winning, autonomous, electric race cars.
Cassie broke a record for a robot running a 100-meter dash.

Building AI Models, Digital Twins

Such efforts will be accelerated with NVIDIA AI and Omniverse, software that can expand the building’s physical labs with simulations and digital twins so every student can have a virtual workbench.

OSU will get state-of-the-art NVIDIA DGX SuperPOD and OVX SuperPOD clusters once the complex’s data center is ready. With an eye on energy efficiency, water used to cool the computer racks will then help heat more than 500,000 square feet of campus buildings.

The SuperPOD will likely include a mix of about 60 DGX and OVX systems — powered by next-generation CPUs, GPUs and networking — creating a system powerful enough to train the largest AI models and perform complex digital twin simulations. Ashford notes OSU won a project working with the U.S. Department of Energy because its existing computer center has a handful of DGX systems.

Advancing Diversity, Inclusion

At the Oct. 14 OSU Foundation event announcing the naming of the new complex, Oregon State officials thanked donors and kicked off a university-wide fundraising campaign. OSU has requested support from the state of Oregon for construction of the building and seeks additional philanthropic investments to expand its research and support its hiring and diversity goals.

OSU’s president, Jayathi Murthy, said the complex provides an opportunity to advance diversity, equity and inclusion in the university’s STEM education and research. OSU’s engineering college is already among the top-ranked U.S. schools for tenured or tenure-track engineering faculty who are women.

AI Universities Sprout

Oregon State also is among a small but growing set of universities accelerating their journeys in AI and high performance computing.

A recent whitepaper described efforts at the University of Florida to spread AI across its curriculum as part of a partnership with NVIDIA that enabled it to install HiPerGator, a DGX SuperPOD based on NVIDIA DGX A100 systems with NVIDIA A100 Tensor Core GPUs.

Following Florida’s example, Southern Methodist University announced last fall its plans to make the Dallas area a hub of AI development around its new DGX SuperPOD.

“We’re seeing a lot of interest in the idea of AI universities from Asia, Europe and across the U.S.,” said Cheryl Martin, who leads NVIDIA’s efforts in higher education research.

One of OSU’s autonomous race cars rounds the track.
