Transportation Generation: See How AI and the Metaverse Are Shaping the Automotive Industry at GTC

Novel AI technologies are generating images, stories and, now, new ways to imagine the automotive future.

At NVIDIA GTC, a global conference for the era of AI and the metaverse running online March 20-23, industry luminaries working on these breakthroughs will come together and share their visions to transform transportation.

This year’s slate of in-depth sessions includes leaders from automotive, robotics, healthcare and other industries, as well as trailblazing AI researchers.

Headlining GTC is NVIDIA founder and CEO Jensen Huang, who will present the latest in AI and NVIDIA Omniverse, a platform for creating and operating metaverse applications, in a keynote address on Tuesday, March 21, at 8 a.m. PT.

Conference attendees will have plenty of opportunities to network and learn from NVIDIA and industry experts about the technologies powering the next generation of automotive.

Here’s what to expect from auto sessions at GTC:

End-to-End Innovation

The entire automotive industry is being transformed by AI and metaverse technologies, whether they’re used for design and engineering, manufacturing, autonomous driving or the customer experience.

Speakers from these areas will share how they’re using the latest innovations to supercharge development:

  • Sacha Vražić, director of autonomous driving R&D at Rimac Technology, discusses how the supercar maker is using AI to teach any driver how to race like a professional on the track.
  • Toru Saito, deputy chief of Subaru Lab at Subaru Corporation, walks through how the automaker is improving camera perception with AI, using large-dataset training on GPUs and in the cloud.
  • Tom Xie, vice president at ZEEKR, explains how the electric vehicle company is rethinking the electronic architecture in EVs to develop a software-defined lineup that is continuously upgradeable.
  • Liz Metcalfe-Williams, senior data scientist, and Otto Fitzke, machine learning engineer at Jaguar Land Rover, cover key learnings from the premium automaker’s research into natural language processing to improve knowledge and systems, and to accelerate the development of high-quality, validated, cutting-edge products.
  • Marco Pavone, director of autonomous vehicle research; Sanja Fidler, vice president of AI research; and Sarah Tariq, vice president of autonomous vehicle software at NVIDIA, show how generative AI and novel, highly integrated system architectures will radically change how AVs are designed and developed.

Develop Your Drive

In addition to sessions from industry leaders, GTC attendees can access talks on the latest NVIDIA DRIVE technologies led by in-house experts.

NVIDIA DRIVE Developer Days is a series of deep-dive sessions on building safe and robust autonomous vehicles. Led by the NVIDIA engineering team, these talks will highlight the newest DRIVE features and how to apply them.

Topics include high-definition mapping, AV simulation, synthetic data generation for testing and validation, enhancing AV safety with in-system testing, and multi-task models for AV perception.

Access these virtual sessions and more by registering free to attend and see the technologies generating the intelligent future of transportation.

UK’s Conservation AI Makes Huge Leap Detecting Threats to Endangered Species Across the Globe

The video above represents one of the first times that a pangolin, one of the world’s most critically endangered species, was detected in real time using artificial intelligence.

A U.K.-based nonprofit called Conservation AI made this possible with the help of NVIDIA technology. Such use of AI can help track even the rarest, most reclusive of species in real time, enabling conservationists to protect them from threats, such as poachers and fires, before it’s too late to intervene.

The organization was founded four years ago by researchers at Liverpool John Moores University — Paul Fergus, Carl Chalmers, Serge Wich and Steven Longmore.

In the past year and a half, Conservation AI has deployed 70+ AI-powered cameras across the world. These help conservationists preserve biodiversity through real-time detection of threats using deep learning models trained with transfer learning.

“It’s very simple — if we don’t protect our biodiversity, there won’t be people on this planet,” said Chalmers, who teaches deep learning and applied AI at Liverpool John Moores University. “And without AI, we’re never going to achieve our targets for protecting endangered species.”

The Conservation AI platform — built using NVIDIA Jetson modules for edge AI and the NVIDIA Triton Inference Server — analyzes footage in just four seconds, identifies species of interest and alerts conservationists and other users to potential threats via email.

It can also rapidly model trends in biodiversity and habitat health using a huge database of images and other metadata that would otherwise take years to analyze. The platform now enables conservationists to identify these trends and species activities in real time.
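The detect-then-alert flow described above can be sketched as a simple loop. Everything in this sketch — the `Detection` type, the placeholder `detect` function, the species list, the confidence threshold and the alert format — is hypothetical and merely stands in for the platform's actual models and messaging:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One model prediction for a camera frame (hypothetical schema)."""
    species: str
    confidence: float

# Illustrative monitoring list and cutoff, not Conservation AI's real settings.
SPECIES_OF_INTEREST = {"pangolin", "black rhino", "white rhino"}
ALERT_THRESHOLD = 0.8

def detect(frame) -> list[Detection]:
    # Placeholder inference: a real deployment would run a GPU-accelerated
    # model (e.g. served by Triton on a Jetson module) on the camera frame.
    return [Detection("pangolin", 0.93), Detection("impala", 0.88)]

def build_alerts(detections: list[Detection]) -> list[str]:
    """Format an email-style alert for each high-confidence sighting
    of a monitored species; everything else is ignored."""
    return [
        f"ALERT: {d.species} detected (confidence {d.confidence:.0%})"
        for d in detections
        if d.species in SPECIES_OF_INTEREST and d.confidence >= ALERT_THRESHOLD
    ]

for alert in build_alerts(detect(frame=None)):
    print(alert)  # in production this would go out by email
```

The key design point the article highlights is latency: because inference runs at the edge and alerting is a cheap formatting step, the whole loop completes in seconds rather than the months a manual camera-trap review takes.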

Conservation AI works with 150 organizations across the globe, including conservation societies, safaris and game reserves. To date, the platform has processed over 2 million images, about half of which were from the past three months.

Saving Time to Save Species

Threats to biodiversity have long been monitored using camera traps — networks of cameras equipped with infrared sensors that are placed in the wild. But camera traps can produce data that is hard to manage, as there’s often much variability in images of the animals and their environments.

“A typical camera trap study can take three years to analyze, so by the time you get the insights, it’s too late to do anything about the threat to those species,” said Fergus, a professor of machine learning at Liverpool John Moores University. “Conservation AI can analyze the same amount of data and send results to conservation teams so that interventions can happen in real time, all enabled by NVIDIA technology.”

Many endangered species occupy remote areas without access to human communication systems. The team uses NVIDIA Jetson AGX Xavier modules to analyze drone footage streamed from such areas to a smart controller, which can count species populations or alert conservationists when species of interest are detected.

Energy-efficient edge AI provided by the Jetson modules, which are equipped with Triton Inference Server, has sped up deep learning inference by 4x compared to the organization’s previous methods, according to Chalmers.

“We chose Triton because of the elasticity of the framework and the many types of models it supports,” he added. “Being able to train the models on the NVIDIA accelerated computing stack means we can make huge improvements on the models very, very quickly.”
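Triton loads each model alongside a small `config.pbtxt` file describing its backend, inputs and outputs. A minimal, purely illustrative example for an image classifier follows — the model name, backend, tensor names and dimensions here are assumptions, not Conservation AI's actual configuration:

```protobuf
# Hypothetical Triton model configuration for a species classifier.
name: "species_classifier"
platform: "onnxruntime_onnx"   # assumed backend; Triton also supports TensorRT, PyTorch and TensorFlow models
max_batch_size: 8
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]      # CHW image tensor
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]             # per-class scores
  }
]
```

This per-model configuration is what gives Triton the "elasticity" Chalmers mentions: models from different frameworks can sit side by side in one model repository and be swapped or retrained without changing the serving stack.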

Conservation AI trains its deep learning models and runs inference with NVIDIA RTX 8000, T4 and A100 Tensor Core GPUs — along with the NVIDIA CUDA Toolkit. Fergus called NVIDIA GPUs “game changers in the world of applied AI and conservation, where there are big-data challenges.”

In addition, the team’s species-detection pipeline is built on the NVIDIA DeepStream software development kit for vision AI applications, which enables real-time video inference in the field.

“Without this technology, helicopters would normally be sent up to observe the animals, which is hugely expensive and bad for the environment as it emits huge amounts of carbon dioxide,” Chalmers said. “Conservation AI technology helps reduce this problem and detects threats to animals before it’s too late to intervene.”

Detecting Pangolins, Rhinos and More

The Conservation AI platform has been deployed by Chester Zoo, a renowned conservation society based in the U.K., to detect poachers in real time, including those hunting pangolins in Uganda.

Since many endangered species, like pangolins, are so elusive, obtaining enough imagery of them to train AI models can be difficult. So, the Conservation AI team is working with NVIDIA to explore the use of synthetic data for model training.

The platform is also deployed at a game reserve in Limpopo, South Africa, where the AI keeps an eye on wildlife in the region, including black and white rhinos.

“Pound for pound, rhino horn is worth more than diamond,” Chalmers said. “We’ve basically created a geofence around these rhinos, so the reserve can intervene as soon as a poacher or another type of threat is detected.”

The organization’s long-term goal, Fergus said, is to create a toolkit that supports conservationists with many types of efforts, including wildlife monitoring through satellite imagery, as well as using deep learning models that analyze audio — like animal cries or the sounds of a forest fire.

“The loss of biodiversity is really a ticking time bomb, and the beauty of NVIDIA AI is that it makes every second count,” Chalmers said. “Without the NVIDIA accelerated computing stack, we just wouldn’t be able to do this — we wouldn’t be able to tackle climate change and reverse biodiversity loss, which is the ultimate dream.”

Read more about how NVIDIA technology helps to boost conservation and prevent poaching.

Featured imagery courtesy of Chester Zoo.

Rise to the Cloud: ‘Monster Hunter Rise’ and ‘Sunbreak’ Expansion Coming Soon to GeForce NOW

Fellow Hunters, get ready! This GFN Thursday welcomes Capcom’s Monster Hunter Rise and the expansion Sunbreak to the cloud, arriving soon for members.

Settle down for the weekend with 10 new games supported in the GeForce NOW library, including The Settlers: New Allies.

Plus, Amsterdam and Ashburn are next to light up on the RTX 4080 server map, giving nearby Ultimate members the power of an RTX 4080 gaming rig in the cloud. Keep checking the weekly GFN Thursday to see where the RTX 4080 SuperPOD upgrade rolls out next.

Palicos, Palamutes and Wyverns, Oh My

The hunt is on! Monster Hunter Rise, the popular action role-playing game from Capcom, is joining GeForce NOW soon. Protect the bustling Kamura Village from ferocious monsters; take on hunting quests with a variety of weapons and new hunting actions with the Wirebug; and work alongside a colorful cast of villagers to defend their home from the Rampage — a catastrophic event that nearly destroyed the village 50 years prior.

Members can expand the hunt with Monster Hunter Rise: Sunbreak, which adds new quests, monsters, locales, gear and more. And regular updates keep Hunters on the job, like February’s Free Title Update 4, which marks the return of the Elder Dragon Velkhana, the lord of the tundra that freezes all in its path.

Monster Hunter Rise Sunbreak on GeForce NOW
Carve out more time for monster hunting by playing in the cloud.

Whether playing solo or with a buddy, GeForce NOW members can take on dangerous new monsters anytime, anywhere. Ultimate members can protect Kamura Village at up to 4K resolution and 120 frames per second — or immerse themselves in the most epic monster battles at ultrawide resolutions and 120 fps. Members won’t need to wait for downloads or worry about storage space, and can take the action with them across nearly all of their devices.

Rise to the challenge by upgrading today and get ready for Monster Hunter Rise to hit GeForce NOW soon.

New Week, New Games

The Settlers New Allies on GeForce NOW
Onward! There’s much to explore in the Forgotten Plains.

Kick off the weekend with 10 new titles, including The Settlers: New Allies. Choose among three unique factions and explore this whole new world powered by state-of-the-art graphics. Your settlement has never looked so lively.

Check out the full list of this week’s additions:

  • Labyrinth of Galleria: The Moon Society (New release on Steam)
  • Wanted: Dead (New release on Steam and Epic)
  • Elderand (New release on Steam, Feb. 16)
  • Wild West Dynasty (New release on Steam, Feb. 16)
  • The Settlers: New Allies (New release on Ubisoft, Feb. 17)
  • Across the Obelisk (Steam)
  • Captain of Industry (Steam)
  • Cartel Tycoon (Steam)
  • SimRail — The Railway Simulator (Steam)
  • Warpips (Epic Games Store)

The monthlong #3YearsOfGFN celebration continues on our Twitter and Facebook channels, where members shared the most beautiful places they’ve visited in-game on GFN.

And make sure to check out the question we have this week for GeForce NOW’s third anniversary celebration!

 

Redefining Workstations: NVIDIA, Intel Unlock Full Potential of Creativity and Productivity for Professionals

AI-augmented applications, photorealistic rendering, simulation and other technologies are helping professionals achieve business-critical results from multi-app workflows faster than ever.

Running these data-intensive, complex workflows, as well as sharing data and collaborating across geographically dispersed teams, requires workstations with high-end CPUs, GPUs and advanced networking.

To help meet these demands, Intel and NVIDIA are powering new platforms with the latest Intel Xeon W and Intel Xeon Scalable processors, paired with NVIDIA RTX 6000 Ada generation GPUs, as well as NVIDIA ConnectX-6 SmartNICs.

These new workstations bring together the highest levels of AI computing, rendering and simulation horsepower to tackle demanding workloads across data science, manufacturing, broadcast, media and entertainment, healthcare and more.

“Professionals require advanced power and performance to run the most intensive workflows, like using AI, rendering in real time or running multiple applications simultaneously,” said Bob Pette, vice president of professional visualization at NVIDIA. “The new Intel- and NVIDIA-Ada powered workstations deliver unprecedented speed, power and efficiency, enabling professionals everywhere to take on the most complex workflows across all industries.”

“The latest Intel Xeon W processors — featuring a breakthrough new compute architecture — are uniquely designed to help professional users tackle the most challenging current and future workloads,” said Roger Chandler, vice president and general manager of Creator and Workstation Solutions in the Client Computing Group at Intel. “Combining our new Intel Xeon workstation processors with the latest NVIDIA GPUs will unleash the innovation and creativity of professional creators, artists, engineers, designers, data scientists and power users across the world.”

Serving New Workloads 

Metaverse applications and the rise of generative AI require a new level of computing power from the underlying hardware. Creating digital twins in simulated, photorealistic environments that obey the laws of physics, and using them to plan factories, are just two examples of workflows made possible by NVIDIA Omniverse Enterprise, a platform for creating and operating metaverse applications.

BMW Group, for example, is using NVIDIA Omniverse Enterprise to design an end-to-end digital twin of an entire factory. This involves collaboration with thousands of planners, product engineers and facility managers in a single virtual environment to design, plan, simulate and optimize highly complex manufacturing systems before a factory is actually built or a new product is integrated into the real world.

The need for accelerated computing power is growing exponentially due to the explosion of AI-augmented workflows, from traditional R&D and data science workloads to edge devices on factory floors or in security offices, to generative AI solutions for text conversations and text-to-image applications.

Extended reality (XR) solutions for collaborative work also require significant computing resources. Examples of XR applications include design reviews, product design validation, maintenance and support training, rehearsals, interactive digital twins and location-based entertainment. All of these demand high-resolution, photoreal images to create the most intuitive and compelling immersive experiences, whether available locally or streamed to wireless devices.

Next-Generation Platform Features 

With a breakthrough new compute architecture for faster individual CPU cores and new embedded multi-die interconnect bridge packaging, the Xeon W-3400 and Xeon W-2400 series of processors enable unprecedented scalability for increased workload performance. Available with up to 56 cores in a single socket, the top-end Intel Xeon w9-3495X processor features a redesigned memory controller and larger L3 cache, delivering up to 28% more single-threaded(1) and 120% more multi-threaded(2) performance over previous-generation Xeon W processors.

Based on the NVIDIA Ada Lovelace GPU architecture, the latest NVIDIA RTX 6000 brings incredible power efficiency and performance to the new workstations. It features 142 third-generation RT Cores, 568 fourth-generation Tensor Cores and 18,176 latest-generation CUDA cores combined with 48GB of high-performance graphics memory to provide up to 2x ray-tracing, AI, graphics and compute performance over the previous generation.

NVIDIA ConnectX-6 Dx SmartNICs enable professionals to handle demanding, high-bandwidth 3D rendering and computer-aided design tasks, as well as traditional office work, with line-speed network connectivity across two 25Gbps ports and GPUDirect technology, which increases GPU bandwidth by 10x over standard NICs. These high-speed, low-latency networking and streaming capabilities let teams move and ingest large datasets, and allow remote individuals to collaborate across design and visualization applications.

Availability 

The new generation of workstations powered by the latest Intel Xeon W and Intel Xeon Scalable processors and NVIDIA RTX 6000 Ada generation GPUs will be available for preorder beginning today from BOXX and HP, with more coming soon from other workstation system integrators.

To learn more, tune into the launch event.

 

(1) Based on SPEC CPU 2017_Int (1-copy) using Intel validation platform comparing Intel Xeon w9-3495X (56c) versus previous generation Intel Xeon W-3275 (28c).
(2) Based on SPEC CPU 2017_Int (n-copy) using Intel validation platform comparing Intel Xeon w9-3495X (56c) versus previous generation Intel Xeon W-3275 (28c).

Blender Alpha Release Comes to Omniverse, Introducing Scene Optimization Tools, Improved AI-Powered Character Animation

Whether creating realistic digital humans that can express emotion or building immersive virtual worlds, 3D artists can reach new heights with NVIDIA Omniverse, a platform for creating and operating metaverse applications.

A new Blender alpha release, now available in the Omniverse Launcher, lets users of the 3D graphics software optimize scenes and streamline workflows with AI-powered character animations.

Save Time, Effort With New Blender Add-Ons

The new scene optimization add-on in the Blender release enables creators to fix bad geometry and generate automatic UVs, or 2D maps of 3D objects. It also reduces the number of polygons that need to be rendered, increasing the scene’s overall performance while significantly cutting file size and CPU and GPU memory usage.

Plus, with the new Audio2Face add-on, anyone can now accomplish what used to require a technical rigger or animator.

A panel in the add-on makes it easier to use Blender characters in Audio2Face, an AI-enabled tool that automatically generates realistic facial expressions from an audio file.

This new functionality eases the process of bringing generated face shapes back onto rigs — that is, digital skeletons — by applying shapes exported through the Universal Scene Description (USD) framework onto a character, even one that is fully rigged, meaning its whole body has a working digital skeleton. Because integrating the facial shapes doesn’t alter the rigs, Audio2Face shapes and animation can be applied to characters — whether for games, shows and films, or simulations — at any point in the artist’s workflow.

Realistic Character Animation Made Easy

Audio2Face puts AI-powered facial animation in the hands of every Blender user who works with Omniverse.

Using the new Blender add-on for Audio2Face, animator and popular YouTuber Marko Matosevic, aka Markom 3D, rigged and animated a Battletoads-inspired character using just an audio file.

Australia-based Matosevic joined Dave Tyner, a technical evangelist at NVIDIA, on a livestream to showcase their live collaboration across time zones, connecting 3D applications in a real-time Omniverse jam session. The two used the new Blender alpha release with Omniverse to make progress on one of Matosevic’s short animations.

The new Blender release was also on display last month at CES in The Artists’ Metaverse, a demo featuring seven artists, across time zones, who used Omniverse Nucleus Cloud, Autodesk, SideFX, Unreal Engine and more to create a short cinematic in real time.

Creators can save time and simplify processes with the add-ons available in Omniverse’s Blender build.

NVIDIA principal artist Zhelong Xu, for example, used Blender and Omniverse to visualize an NVIDIA-themed “Year of the Rabbit” zodiac.

“I got the desired effect very quickly and tested a variety of lighting effects,” said Xu, an award-winning 3D artist who’s previously worked at top game studio Tencent and made key contributions to an animated show on Netflix.

Get Plugged Into the Omniverse 

Learn more about Blender and Omniverse integrations by watching a community livestream on Wednesday, Feb. 15, at 11 a.m. PT via Twitch and YouTube.

And the session catalog for NVIDIA GTC, a global AI conference running online March 20-23, features hundreds of curated talks and workshops for 3D creators and developers. Register free to hear from NVIDIA experts and industry luminaries on the future of technology.

Creators and developers can download NVIDIA Omniverse free. Enterprises can try Omniverse Enterprise free on NVIDIA LaunchPad. Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Making a Splash: AI Can Help Protect Ocean Goers From Deadly Rips

Surfers, swimmers and beachgoers face a hidden danger in the ocean: rip currents. These narrow channels of water can flow away from the shore at speeds up to 2.5 meters per second, making them one of the biggest safety risks for those enjoying the ocean.

To help keep beachgoers safe, Christo Rautenbach, a coastal and estuarine physical processes scientist, has teamed up with the National Institute of Water and Atmospheric Research in New Zealand to develop a real-time rip current identification tool using deep learning.

On this episode of the NVIDIA AI Podcast, host Noah Kravitz interviews Rautenbach about how AI can be used to identify rip currents and the potential for the tool to be used globally to help reduce the number of fatalities caused by rip currents.

Developed in collaboration with Surf Lifesaving New Zealand, the rip current identification tool has achieved a detection rate of roughly 90% in trials. Rautenbach also shares the research behind the technology, which was published in the November 22 edition of the journal Remote Sensing.

You Might Also Like

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint
Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art
Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, a neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast on Your Favorite Platform

You can now listen to the AI Podcast through Amazon Music, Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Featured image credit: T. Caulfield

3D Creators Share Art From the Heart This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Love and creativity are in the air this Valentine’s Day In the NVIDIA Studio, as 3D artist Molly Brady presents Birth of Venus (Redux), a parody scene inspired by Sandro Botticelli’s iconic painting The Birth of Venus.

Plus, join the #ShareYourHeART challenge by sharing what Valentine’s Day means to you in a scene built with NVIDIA Omniverse, a platform for creating and operating metaverse applications. Use the hashtag to post artwork — whether heartened by love, chocolate, teddy bears or anything else Valentine’s-themed — for a chance to be featured across NVIDIA social media channels.

3D artist Tanja Langgner’s delectable scene with chocolate hearts, featured below, is just one example.

Also, get a chance to win a GeForce RTX 3090 Ti GPU in the NVIDIA Instant NeRF VR sweepstakes. Named by TIME Magazine as one of the best inventions of 2022, NVIDIA Instant NeRF enables creators to rapidly create 3D models from 2D images and use them in virtual scenes. The tool provides a glimpse into the future of photography, 3D graphics and virtual worlds. Enter the sweepstakes by creating your own NeRF scene, and look to influencer Paul Trillo’s Instagram for inspiration.

New NVIDIA Studio laptops powered by GeForce RTX 40 Series Laptop GPUs are now available, including MSI’s Stealth 17 Studio and Razer’s Blade 16 and 18 models — with more on the way. Learn why PC Gamer said the “RTX 4090 pushes laptops to blistering new frontiers: Yes, it’s fast, but also much more.”

Download the latest NVIDIA Studio Driver to enhance existing app features and reduce repetitive tasks. ON1 NoNoise AI, an app that quickly removes image noise while preserving and enhancing photo details, released an update that speeds this process by an average of 50% on GeForce RTX 40 Series GPUs.

And NVIDIA GTC, a global conference for the era of AI and the metaverse, is running online March 20-23, with a slew of creator sessions, Omniverse tutorials and more — all free with registration. Learn more below.

A Satirical Valentine

Molly Brady is a big fan of caricatures.

“I love parody,” she gleefully admitted. “Nothing pleases me more than taking the air out of something serious and stoic.”

Botticelli’s The Birth of Venus painting, often referenced and revered, presented Brady with an opportunity for humor through her signature visual style.

According to Brady, “3D allows you to mix stylistic artwork with real-world limitations,” which is why the touchable, cinematic look of stop-motion animation heavily inspires her work.

“Stop-motion reforms found items into set pieces for fantastical worlds, giving them new life, and that brings me immense joy,” she said.

Brady’s portfolio features colorful, vibrant visuals with a touch of whimsical humor.

Brady composited Birth of Venus (Redux) with placeholder meshes, focusing on the central creature figure, before confirming the composition and scale were to her liking. She then sculpted fine details in the flexible 3D modeling app Foundry Modo, assisted by RTX acceleration in OTOY OctaneRender, made possible by her GeForce RTX 4090 GPU.

Advanced sculpting was completed in Modo.

She then applied materials and staged lighting with precision, gaining speed from the RTX-accelerated ray-tracing renderer. Brady can deploy OctaneRender, her preferred 3D renderer, in over 20 3D applications, including Autodesk 3ds Max, Blender and Maxon’s Cinema 4D.

After rendering the image, Brady deployed several post-processing features in Adobe Photoshop to help ensure the colors popped, as well as to add grain to compensate for any compression when posted on social media. Her RTX GPU affords over 30 GPU-accelerated features, such as blur gallery, object selection, liquify and smart sharpen.

“Art has been highly therapeutic for me, not just as an outlet to express emotion but to reflect how I see myself and what I value,” Brady said. “Whenever I feel overwhelmed by the pressure of expectation, whether internal or external, I redirect my efforts and instead create something that brings me joy.”

3D artist Molly Brady.

View more of Brady’s artwork on Instagram.

Valen(time) to Join the #ShareYourHeART Challenge

The photorealistic, chocolate heart plate beside a rose-themed mug and napkins, featured below, is 3D artist and illustrator Tanja Langgner’s stunning #ShareYourHeART challenge entry.

Hungry?

Langgner gathered assets and sculpted the heart shape using McNeel Rhino and Maxon ZBrush. Next, she assembled the pieces in Blender and added textures using Adobe Substance 3D Painter. The scene was then exported from Blender as a USD file and brought into Omniverse Create, where the artist added lighting and virtual cameras to capture the sweets with perfect illumination and angles.

“The main reason I started using Omniverse was its capability to link all my favorite apps,” Langgner said. “Saving time on exporting, importing and recreating materials in each app is a dream come true.”

Learn more about Langgner’s creative journey at the upcoming Community Spotlight livestream on the Omniverse Twitch channel and YouTube on Wednesday, Feb. 22, from 11 a.m. to 12 p.m. PT.

Join the #ShareYourHeART challenge by posting your own Valentine’s-themed Omniverse scene on social media using the hashtag. Entries could be featured on the NVIDIA Omniverse Twitter, LinkedIn and Instagram accounts.

Creative Boosts at GTC 

Experience this spring’s GTC for more inspiring content, expert-led sessions and a must-see keynote to accelerate your life’s creative work.

Catch these sessions live or watch on demand:

  • 3D Art Goes Multiplayer: Behind the Scenes of Adobe Substance’s “End of Summer” Project With Omniverse [S51239]
  • 3D and Beyond: How 3D Artists Can Build a Side Hustle in the Metaverse [SE52117]
  • NVIDIA Omniverse User Group [SE52047]
  • Accelerate the Virtual Production Pipeline to Produce an Award-Winning Sci-Fi Short Film [S51496]
  • 3D by AI: How Generative AI Will Make Building Virtual Worlds Easier [S52163]
  • Custom World Building With AI Avatars: The Little Martians Sci-Fi Project [S51360]
  • AI-Powered, Real-Time, Markerless: The New Era of Motion Capture [S51845]

Search the GTC session catalog or check out the Media and Entertainment and Omniverse topics for additional creator-focused talks.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. Learn more about Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Read More

Crossing Continents: XPENG G9 SUV and P7 Sedan Set Course for Scandinavia, the Netherlands

Electric automaker XPENG’s flagship G9 SUV and P7 sports sedan are now available for order in Sweden, Denmark, Norway and the Netherlands — an expansion revealed last week at the eCar Expo in Stockholm.

The intelligent electric vehicles are built on the high-performance NVIDIA DRIVE Orin centralized compute architecture and deliver AI capabilities that are continuously upgradable through over-the-air software updates.

“This announcement represents a significant milestone as we build our presence in Europe,” said Brian Gu, vice chair and president of XPENG. “We believe both vehicles deliver a new level of sophistication and a people-first mobility experience, and will be the electric vehicles of choice for many European customers.”

Safety Never Takes a Back Seat 

The XPENG G9 and P7 come equipped with XPENG’s proprietary XPILOT Advanced Driver Assistance System, which offers safety, driving and parking support through a variety of smart functions.

The system is supported by 29 sensors, including high-definition radars, ultrasonic sensors, and surround-view and high-perception cameras, enabling the vehicles to safely tackle diverse driving scenarios.

The EVs are engineered to meet the European New Car Assessment Programme’s five-star safety standards, along with the European Union’s stringent whole vehicle type approval certification.

Leading the Charge for the Long Haul

The rear-wheel-drive (RWD), long-range version of the XPENG G9 can travel up to 354 miles on a single charge and features a new powertrain system for ultrafast charging, going from 10% to 80% in just 20 minutes. The P7 RWD, long-range model also has optimized charging power to reach 80% in 29 minutes, while offering up to 358 miles on a single charge.

To ensure an easy, fast charging experience, XPENG customers can access more than 400,000 charging stations in Europe through the automaker’s collaboration with major third-party charging operators and mobility service providers.

The XPENG G9 features faster charging and longer range, up to 354 miles on a single charge.

Beauty Backed by Brains and Brawn

With the high compute power found only with DRIVE Orin, the XPENG G9 and P7 advanced driving systems boast superior performance, while sporting sleek and elegant designs, quality craftsmanship and comfort to meet the most discerning of tastes.

The upgraded in-car Xmart operating system (OS) features a new 3D user interface that offers support in English and other European languages, depending on the market. The OS comes with an improved voice assistant that can distinguish voice commands from four zones in the cabin. It also features wide infotainment screens and a library of in-car apps to assist and entertain both the driver and passengers.

The G9 and P7 are available in all-wheel drive (AWD) or RWD configurations. XPENG reports that the G9’s AWD version delivers up to 551 horsepower, and can accelerate from 0 to 100 kilometers per hour in 3.9 seconds, while the upgraded P7 AWD model can do the same in 4.1 seconds.

The XPENG P7 features an immersive cabin experience.

Deliveries of the P7 will begin in June, while the G9 is expected to start in September. To support demand in key European markets, XPENG plans to open delivery and service centers in Lørenskog, Norway, this month — as well as in Badhoevedorp, the Netherlands; Stäket, Sweden; and Hillerød, Denmark in Q2 2023.

The EV maker expects to open additional authorized service locations in other European countries by the end of the year.

Featured image: Next-gen XPENG P7 sports sedan, powered by NVIDIA DRIVE Orin.

Read More

3D Artist Brings Ride and Joy to Automotive Designs With Real-Time Renders Using NVIDIA RTX

Designing automotive visualizations can be incredibly time consuming. To make the renders look as realistic as possible, artists need to consider material textures, paints, realistic lighting and reflections, and more.

For 3D artist David Baylis, it’s important to include these details and still create high-resolution renders in a short amount of time. That’s why he uses the NVIDIA RTX A6000 GPU, which allows him to use features like real-time ray tracing so he can quickly get the highest-fidelity image.

The RTX A6000 also enables Baylis to handle massive amounts of data with its 48GB of VRAM. In computer graphics, the higher the resolution of an image, the more memory it consumes. With the RTX A6000, Baylis can work with larger scenes and textures without worrying about memory limits slowing him down.
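The link between resolution and memory is easy to estimate: an uncompressed image buffer grows with the pixel count. Here is a rough back-of-the-envelope sketch; the pixel dimensions and the RGBA float32 buffer format are illustrative assumptions, not Baylis’ actual render settings, and a renderer allocates many such buffers on top of textures and geometry:

```python
def buffer_size_gib(width, height, channels=4, bytes_per_channel=4):
    """Estimate the size of one uncompressed image buffer in GiB.

    Assumes RGBA (4 channels) stored as 32-bit floats, a common
    choice for HDR render targets. Real VRAM usage is far higher,
    since scene assets and intermediate buffers dominate.
    """
    return width * height * channels * bytes_per_channel / 2**30

# Illustrative 16:9 pixel dimensions for each marketing label.
for label, w, h in [("4K", 3840, 2160), ("8K", 7680, 4320), ("12K", 12288, 6912)]:
    print(f"{label}: {buffer_size_gib(w, h):.2f} GiB per buffer")
```

A single 12K buffer comes out to roughly 1.3 GiB, which shows why the totals for a full scene climb into the tens of gigabytes once assets are loaded.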

Bringing Realistic Details to Life With RTX

To create his automotive visualizations, Baylis starts with 3D modeling in Autodesk 3ds Max software. He’ll set up the scene and work on the car model before importing it to Unreal Engine, where he works on lighting and shading for the final render.

In Unreal Engine, Baylis can experiment with details such as different car paints to see what works best on the 3D model. Seeing all the changes in real time enables Baylis to iterate and experiment with design choices, so he can quickly achieve the look and feel he’s aiming for.

In one of his latest projects, Baylis created a scene with an astounding polycount of more than 50 million triangles. Using the RTX A6000, he could easily move around the scene to see the car from different angles. Even in path-traced mode, the A6000 allows Baylis to maintain high frame rates while switching from one angle to the next.

Rendering at a higher resolution is important to create photorealistic visuals. In the example below, Baylis shows a car model rendered at 4K resolution. But when zoomed in, the graphics start to appear blurry.

When the car is rendered at 12K resolution, the details on the car become sharper. By rendering at higher resolutions, the artist can include extra details to make the car look even more realistic. With the RTX A6000, Baylis said the 12K render took under 10 minutes to complete.

It’s not just the real-time ray tracing and path tracing that help Baylis enhance his designs. There’s another component he said he never thought would make an impact on creative workflows — GPU memory.

The RTX A6000 GPU is equipped with 48GB of VRAM, which allows Baylis to load incredibly high-resolution textures and high-polygon assets. The VRAM is especially helpful for automotive renders because the datasets behind them can be massive.

The large memory of the RTX A6000 allows him to easily manage the data.

“If we throw more polygons into the scene, or if we include more scanned assets, it tends to use a lot of VRAM, but the RTX A6000 can handle all of it,” explained Baylis. “It’s great not having to think about optimizing all those assets in the scene. Instead, we can just scan the data in, even if the assets are 8K, 16K or even 24K resolution.”

When Baylis rendered one still frame at 8K resolution, he saw it only took up 24GB of VRAM. So he pushed the resolution higher to 12K, using almost 35GB of VRAM — with plenty of headroom to spare.

“This is an important feature to highlight, because when people look at new GPUs, they immediately look at benchmarks and how fast it can render things,” said Baylis. “And it’s good if you can render graphics a minute or two faster, but if you really want to take projects to the finish line, you need more VRAM.”

Using NVLink, Baylis can bridge two NVIDIA RTX A6000 GPUs together to scale memory and performance. With one GPU, it takes about a minute to render a path-traced image of the car; with dual RTX A6000 GPUs connected via NVLink, the render time drops by almost half. NVLink also combines GPU memory, providing 96GB of VRAM in total. This makes Baylis’ animation workflows much faster and easier to manage.
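The arithmetic behind that dual-GPU setup can be sketched in a few lines. The 0.9 scaling-efficiency factor below is an illustrative assumption of ours (path tracing parallelizes well, but synchronization keeps real speedup slightly under 2x); the 48GB-per-card and roughly one-minute figures come from the article:

```python
def nvlink_pooled_vram_gb(vram_per_gpu_gb, n_gpus):
    """With NVLink, memory can be pooled across the bridge, so the
    effective capacity is roughly the sum across the cards."""
    return vram_per_gpu_gb * n_gpus

def scaled_render_time_s(single_gpu_seconds, n_gpus, efficiency=0.9):
    """Rough render-time estimate under near-linear scaling; the
    efficiency factor models sync overhead and is hypothetical."""
    return single_gpu_seconds / (n_gpus * efficiency)

print(nvlink_pooled_vram_gb(48, 2))          # two RTX A6000s pool to 96 GB
print(scaled_render_time_s(60, 2))           # ~33 s, "almost half" of a minute
```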

Check out more of Baylis’ work in the video below, and learn more about NVIDIA RTX. And join us at NVIDIA GTC, which takes place March 20-23, to learn more about the latest technologies shaping the future of design and visualization.

Read More

Gather Your Party: GFN Thursday Brings ‘Baldur’s Gate 3’ to the Cloud

Venture to the Forgotten Realms this GFN Thursday in Baldur’s Gate 3, streaming on GeForce NOW.

Celebrations for the cloud gaming service’s third anniversary continue with a Dying Light 2 reward that’s to die for. It’s the cherry on top of three new titles joining the GeForce NOW library this week.

Roll for Initiative

Mysterious abilities are awakening inside you. Embrace corruption or fight against darkness itself in Baldur’s Gate 3 (Steam) – a next-generation role-playing game, set in the world of Dungeons and Dragons.

Choose from a wide selection of D&D races and classes, or play as an origin character with a handcrafted background. Thanks to the cloud, the game streams even to underpowered PCs, Macs and mobile devices. Adventure, loot, battle and romance as you journey through the Forgotten Realms and beyond. Play alone and select companions carefully, or as a party of up to four in multiplayer.

Level up to the GeForce NOW Ultimate membership to experience the power of an RTX 4080 in the cloud and all of its benefits, including up to 4K 120 frames per second gameplay on PC and Mac, and ultrawide resolution support for a truly immersive experience.

Dying 2 Celebrate This Anniversary

To celebrate the third anniversary of GeForce NOW, members can now check their accounts to make sure they received the gift of free Dying Light 2 rewards.

Dying Light 2 GeForce NOW Anniversary Reward
You’re all set to survive the post-apocalyptic wasteland with this loadout.

Claim a new in-game outfit dubbed “Post-Apo,” complete with a Rough Duster, Bleak Pants, Well-Worn Boots, Tattered Leather Gauntlets, Dystopian Mask and Spiked Bracers to scavenge around and parkour in. Members who upgrade to Ultimate and Priority memberships can claim extra loot with this haul, including the Patchy Paraglider and Scrap Slicer weapon.

Visit the GeForce NOW Rewards portal to start receiving special offers and in-game goodies.

Welcome to the Weekend

Recipe for Disaster on GeForce NOW
Uh… maybe we should order takeout.

Buckle up for three more games supported in the GeForce NOW library this week.

  • Recipe for Disaster (Free on Epic Games, Feb. 9-16)
  • Baldur’s Gate 3 (Steam)
  • Inside the Backrooms (Steam)

Members continue to celebrate #3YearsOfGFN on our social channels, sharing their favorite cloud gaming devices:

Follow #3YearsOfGFN on Twitter and Facebook all month long and check out this week’s question.


Read More