Shutterstock Brings Generative AI to 3D Scene Backgrounds With NVIDIA Picasso


Picture this: Creators can quickly generate and customize 3D scene backgrounds with the help of generative AI, thanks to cutting-edge tools from Shutterstock.

The visual-content provider is building services using NVIDIA Picasso — a cloud-based foundry for developing generative AI models for visual design.

The work incorporates Picasso’s latest feature — announced today during NVIDIA founder and CEO Jensen Huang’s SIGGRAPH keynote — which will help artists enhance and light 3D scenes based on simple text or image prompts, all with AI models built using fully licensed, rights-reserved data.

From these prompts, the new gen AI feature quickly generates custom 360-degree, 8K-resolution, high-dynamic-range imaging (HDRi) environment maps, which artists can use to set a background and light a scene.
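
These 360-degree maps are typically stored in equirectangular (lat-long) projection, where every direction in space corresponds to one pixel of the panorama. As a rough sketch of how a renderer looks up incoming light from such a map — this is generic image-based-lighting math, not Shutterstock's or Picasso's actual API:

```python
import math

# Standard equirectangular (lat-long) lookup used for image-based
# lighting: a 3D direction becomes (u, v) coordinates on a 360-degree
# panorama. A renderer then reads the pixel at (u * width, v * height).

def direction_to_uv(x, y, z):
    """Map a unit direction (right-handed, -Z forward) to (u, v) in [0, 1]."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude
    v = 0.5 - math.asin(y) / math.pi               # latitude
    return u, v

# The forward direction lands at the center of the panorama.
u, v = direction_to_uv(0.0, 0.0, -1.0)
assert (u, v) == (0.5, 0.5)
```

Because every direction maps to a texel, a single 8K environment map can both fill the background and drive the lighting of every object in the scene.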

This expands on NVIDIA’s collaboration with Shutterstock to empower the next generation of digital content-creation tools and accelerate 3D model generation.

To meet a surging demand for immersive visuals in films, games, virtual worlds, advertising and more, the 3D artist community is rapidly expanding, with over 20% growth in the past year.

Many of these artists are tapping generative AI to bolster their complex workflows — and will be able to use the technology to quickly create and customize environment maps. This allows more time to work on hero 3D assets, which are the primary assets of a 3D scene that viewers will focus on. It makes a panoramic difference when creating compelling 3D visuals.

“We’re committed to hyper-enabling 3D artists and collaborators — helping them build the immersive environments they envision faster than ever before and streamlining their content-creation workflows using NVIDIA Picasso,” said Dade Orgeron, vice president of 3D innovation at Shutterstock.

Generating Photorealistic Environment Maps

Previously, artists needed to buy expensive 360-degree cameras to create backgrounds and environment maps from scratch, or choose from fixed options that may not precisely match their 3D scene.

Now, users can simply provide a prompt — whether that’s text or a reference image — and the 360 HDRi services built on Picasso will quickly generate panoramic images. Plus, thanks to generative AI, the custom environment map can automatically match the background image that’s inputted as a prompt.

Users can then customize the maps and quickly iterate on ideas until they achieve the vision they want.

Collaboration to Boost 3D World-Building

Autodesk, a provider of 3D software and tools for creators in media and entertainment, is focused on giving artists the creative freedom to inspire and delight audiences worldwide.

Enabling artists to trade mundane tasks for unbridled creativity, Autodesk will integrate generative AI content-creation services — developed using foundation models in Picasso — with its popular 3D software Maya.

Supercharging Autodesk customer workflows with AI allows artists to focus on creating — and to ultimately produce content faster.

Generative AI Model Foundry

Picasso is part of NVIDIA AI Foundations, which advances enterprise-level generative AI for text, visual content and even biology.

The foundry will also adopt new NVIDIA research to generate physics-based rendering materials from text and image prompts, demonstrated at SIGGRAPH’s Real-Time Live competition. This will enable content providers to create 3D services, software and tools that enhance and expedite the simulation of diverse physical materials, such as tiles, metals and wood — complete with texture-mapping techniques, including normal, roughness and ambient occlusion.

Picasso runs on the NVIDIA Omniverse Cloud platform-as-a-service and is accessible via a serverless application programming interface that content and service providers like Shutterstock can easily connect to their websites and applications.

Learn about the latest advances in generative AI, graphics and more by joining NVIDIA at SIGGRAPH, running through Thursday, Aug. 10.

Read More

A Textured Approach: NVIDIA Research Shows How Gen AI Helps Create and Edit Photorealistic Materials

NVIDIA researchers are taking the stage at SIGGRAPH, the world’s largest computer graphics conference, to demonstrate a generative AI workflow that helps artists rapidly create and iterate on materials for 3D scenes.

The research demo, which will be presented today at the show’s Real-Time Live event, showcases how artists can use text or image prompts to generate custom textured materials — such as fabric, wood and stone — faster and with finer creative control. These capabilities will be coming to NVIDIA Picasso, allowing enterprises, software creators and service providers to create custom generative AI models for materials, developed using their own fully licensed data.

This set of AI models will facilitate iterative creating and editing of materials, enabling companies to offer new tools that’ll help artists rapidly refine a 3D object’s appearance until they achieve the desired result.

In the demo, NVIDIA researchers experiment with a living-room scene, like an interior designer assisted by AI might do in any 3D rendering application. In this case, researchers use NVIDIA Omniverse USD Composer — a reference application for scene assembly and composition using Universal Scene Description, known as OpenUSD — to add a brick-textured wall, to create and modify fabric choices for the sofa and throw pillows, and to incorporate an abstract animal design in a specific area of the wall.

Generative AI Enables Iterative Design 

The Real-Time Live demo combines several optimized AI models — a palette of tools that developers using Picasso will be able to customize and integrate into creative applications for artists.

Once integrated into creative applications, these features will allow artists to enter a brief text prompt to generate materials — such as a brick or a mosaic pattern — that are tileable, meaning they can be seamlessly replicated over a surface of any size. Or, they can import a reference image, such as a swatch of flannel fabric, and apply it to any object in the virtual scene.

An AI editing tool lets artists modify a specific area of the material they’re working on, such as the center of a coffee table texture.

The AI-generated materials support physics-based rendering, responding realistically to changes in the scene’s lighting. They include normal, roughness and ambient occlusion maps — features that are critical to creating and fine-tuning materials for photorealistic 3D scenes.
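
To make the roles of those maps concrete, here is a deliberately simplified shading sketch — a Blinn-Phong-style stand-in, not NVIDIA's renderer. The normal map supplies the per-texel `normal`, the roughness map controls how wide the specular highlight spreads, and the ambient occlusion value darkens the result in crevices:

```python
import numpy as np

def shade(albedo, normal, roughness, ao, light_dir, view_dir):
    """Toy shading combining albedo, normal, roughness and AO map values."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    diffuse = max(np.dot(n, l), 0.0)             # Lambertian term
    shininess = 2.0 / max(roughness**2, 1e-4)    # rougher -> broader lobe
    specular = max(np.dot(n, h), 0.0) ** shininess
    return ao * (albedo * diffuse + specular)    # AO darkens everything

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 1.0])  # viewer off the mirror direction
# Away from the mirror direction, a rough surface scatters more light
# toward the viewer than a polished one.
assert shade(0.5, n, 0.9, 1.0, l, v) > shade(0.5, n, 0.1, 1.0, l, v)
```

This is why the AI-generated maps respond realistically when the scene's lighting changes: the maps feed a physical model rather than baking the lighting into the texture.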

When accelerated on NVIDIA Tensor Core GPUs, materials can be generated in near real time, and can be upscaled in the background, achieving up to 4K resolution while creators continue to refine other parts of the scene.

Across creative industries — including architecture, game development and interior design — these capabilities could help artists quickly explore ideas and experiment with different aesthetic styles to create multiple versions of a scene.

A game developer, for example, could use these generative AI features to speed up the process of designing an open world environment or creating a character’s wardrobe. An architect could experiment with different styles of building facades in various lighting environments.

Build Generative AI Services With NVIDIA Picasso 

These capabilities for physics-based material generation will be made available in NVIDIA Picasso, a cloud-based foundry that allows companies to build, optimize and fine-tune their own generative AI foundational models for visual content.

Picasso enables content providers to develop generative AI tools and services trained on fully licensed, rights-reserved data. It’s part of NVIDIA AI Foundations, a set of model-making services that advance generative AI across text, visual content and biology.

At today’s SIGGRAPH keynote, NVIDIA founder and CEO Jensen Huang also announced a new Picasso feature to generate photorealistic 360 HDRi environment maps to light 3D scenes using simple text or image prompts.

See This Research at SIGGRAPH’s Real-Time Live 

Real-Time Live is one of the most anticipated events at SIGGRAPH. This year, the showcase features more than a dozen jury-reviewed projects, including those from teams at Roblox, the University of Utah and Metaphysic, a member of the NVIDIA Inception program for cutting-edge startups.

At the event, NVIDIA researchers will present this interactive materials research live, including a demo of the super resolution tool. Conference attendees can catch the session today at 6 p.m. PT in West Hall B at the Los Angeles Convention Center.

Learn about the latest advances in generative AI, graphics and more by joining NVIDIA at SIGGRAPH, running through Thursday, Aug. 10.

Read More

DENZA Collaborates With WPP to Build and Deploy Advanced Car Configurators on NVIDIA Omniverse Cloud


DENZA, the luxury EV brand joint venture between BYD and Mercedes-Benz, has collaborated with marketing and communications giant WPP and NVIDIA Omniverse Cloud to build and deploy its next generation of car configurators, NVIDIA founder and CEO Jensen Huang announced at SIGGRAPH.

WPP is using Omniverse Cloud — a platform for developing, deploying and managing industrial digitalization applications — to help unify the automaker’s highly complex design and marketing pipeline.

Omniverse Cloud enables WPP to build a single, physically accurate, real-time digital twin of the DENZA N7 model by integrating full-fidelity design data from the EV maker’s preferred computer-aided design tools via Universal Scene Description, or OpenUSD.

OpenUSD is a 3D framework that enables interoperability between software tools and data types for the building of virtual worlds.

The implementation of a new unified asset pipeline breaks down proprietary data silos, fostering enhanced data accessibility and facilitating collaborative, iterative reviews for the organization’s large design teams and stakeholders. It enables WPP to work on launch campaigns earlier in the design process, making iterations faster and less costly.

Unifying Asset Pipelines With Omniverse Cloud

Using Omniverse Cloud, WPP’s teams can connect their own pipeline of OpenUSD-enabled design and content creation tools such as Autodesk Maya and Adobe Substance 3D Painter to develop a new configurator for the DENZA N7. With a unified asset pipeline in Omniverse, WPP’s teams of artists can iterate and edit in real time a path-traced view of the full engineering dataset of the DENZA N7 — ensuring the virtual car accurately represents the physical car.

Traditional car configurators require hundreds of thousands of images to be prerendered to represent all possible options and variants. OpenUSD makes it possible for WPP to create a digital twin of the car that includes all possible variants in one single asset. No prerendered images are required.
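
In OpenUSD terms, those variants live in a variant set on the asset itself, so switching a paint color is a metadata change rather than a new render. A minimal, hypothetical `.usda` sketch — the prim and property names are invented for illustration, not taken from WPP's pipeline:

```usda
#usda 1.0

def Xform "Car" (
    variants = {
        string paintColor = "silver"
    }
    prepend variantSets = "paintColor"
)
{
    def Mesh "Body"
    {
    }

    variantSet "paintColor" = {
        "silver" {
            over "Body" {
                color3f[] primvars:displayColor = [(0.8, 0.8, 0.8)]
            }
        }
        "midnightBlue" {
            over "Body" {
                color3f[] primvars:displayColor = [(0.05, 0.1, 0.3)]
            }
        }
    }
}
```

Selecting a variant only flips the `paintColor` metadata; the single digital twin re-renders in real time with the chosen option.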

In parallel, WPP’s environmental artists create fully interactive, live 3D virtual sets. These can start with a scan of a real-world environment, such as those WPP captures with its robot dog, or tap into generative AI tools from providers such as Shutterstock to instantly generate 360-degree HDRi backgrounds, maximizing opportunities for personalization.

Shutterstock is using NVIDIA Picasso — a foundry for building generative AI visual models — to develop a variety of generative AI services that accelerate 3D workflows. At SIGGRAPH, Shutterstock announced the first of these services, 360 HDRi, which creates photorealistic HDR environment maps to relight a scene. With this feature, artists can rapidly create custom environments that fit their needs.

One-Click Publish to GDN

Once the 3D experience is complete, with just one click, WPP can publish it to Graphics Delivery Network (GDN), part of NVIDIA Omniverse Cloud. GDN is a network of data centers built to serve real-time, high-fidelity 3D content to nearly any web device, enabling interactive experiences in the dealer showroom as well as on consumers’ mobile devices.

This eliminates the tedious process of manually packaging, deploying, hosting and managing the experience themselves. If updates are needed, just like with the initial deployment, WPP can publish them with a single click.

Learn more about Omniverse Cloud and GDN.

Read More

NVIDIA H100 Tensor Core GPU Used on New Microsoft Azure Virtual Machine Series Now Generally Available


Microsoft Azure users can now turn to the latest NVIDIA accelerated computing technology to train and deploy their generative AI applications.

Available today, Microsoft Azure ND H100 v5 VMs — using NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking — enable users to scale generative AI, high performance computing (HPC) and other applications with a click from a browser.

Available to customers across the U.S., the new instance arrives as developers and researchers are using large language models (LLMs) and accelerated computing to uncover new consumer and business use cases.

The NVIDIA H100 GPU delivers supercomputing-class performance through architectural innovations, including fourth-generation Tensor Cores, a new Transformer Engine for accelerating LLMs and the latest NVLink technology that lets GPUs talk to each other at 900GB/sec.

The inclusion of NVIDIA Quantum-2 CX7 InfiniBand with 3,200 Gbps cross-node bandwidth ensures seamless performance across the GPUs at massive scale, matching the capabilities of top-performing supercomputers globally.

Scaling With v5 VMs

ND H100 v5 VMs are ideal for training and running inference for increasingly complex LLMs and computer vision models. These neural networks drive the most demanding and compute-intensive generative AI applications, including question answering, code generation, audio, video and image generation, speech recognition and more.

The ND H100 v5 VMs achieve up to 2x speedup in LLMs like the BLOOM 175B model for inference versus previous generation instances, demonstrating their potential to further optimize AI applications.

NVIDIA and Azure

NVIDIA H100 Tensor Core GPUs on Azure provide enterprises the performance, versatility and scale to supercharge their AI training and inference workloads. The combination streamlines the development and deployment of production AI with the NVIDIA AI Enterprise software suite integrated with Azure Machine Learning for MLOps, and delivers record-setting AI performance in industry-standard MLPerf benchmarks.

In addition, by connecting the NVIDIA Omniverse platform to Azure, NVIDIA and Microsoft are providing hundreds of millions of Microsoft enterprise users with access to powerful industrial digitalization and AI supercomputing resources.

Learn more about new Azure v5 instances powered by NVIDIA H100 GPUs.

Read More

NVIDIA CEO Jensen Huang Returns to SIGGRAPH


One pandemic and one generative AI revolution later, NVIDIA founder and CEO Jensen Huang returns to the SIGGRAPH stage next week to deliver a live keynote at the world’s largest professional graphics conference.

The address, slated for Tuesday, Aug. 8, at 8 a.m. PT in Los Angeles, will feature an exclusive look at some of NVIDIA’s newest breakthroughs, including award-winning research, OpenUSD developments and the latest AI-powered solutions for content creation.

NVIDIA founder and CEO Jensen Huang.

Huang’s address comes after NVIDIA joined forces last week with Pixar, Adobe, Apple and Autodesk to found the Alliance for OpenUSD, a major leap toward unlocking the next era of interoperability in 3D graphics, design and simulation.

The group will standardize and extend OpenUSD, the open-source Universal Scene Description framework that’s the foundation of interoperable 3D applications and projects ranging from visual effects to industrial digital twins.

Huang will also offer a perspective on what’s been a raucous year for AI, with wildly popular new generative AI applications — including ChatGPT and Midjourney — providing a taste of what’s to come as developers worldwide get to work.

Throughout the conference, NVIDIA will participate in sessions on immersive visualization, 3D interoperability and AI-mediated video conferencing, and will present 20 research papers. Attendees will also get the opportunity to join hands-on labs.

Join SIGGRAPH to witness the evolution of AI and visual computing. Watch the keynote on this page.

 

Image source: Ron Diering, via Flickr, some rights reserved.

Read More

Meet the Maker: Developer Taps NVIDIA Jetson as Force Behind AI-Powered Pit Droid


Goran Vuksic is the brain behind a project to build a real-world pit droid — a type of Star Wars bot that repairs and maintains the podracers zooming across the much-loved film series.

The edge AI Jedi used an NVIDIA Jetson Orin Nano Developer Kit as the brain of the droid itself. The devkit enables the bot, which is a little less than four feet tall and has a simple webcam for eyes, to identify and move its head toward objects.

Vuksic — originally from Croatia and now based in Malmö, Sweden — recently traveled with the pit droid across Belgium and the Netherlands to several tech conferences. He presented to hundreds of people on computer vision and AI, using the droid as an engaging real-world demo.

The pit droid’s first look at the world.

A self-described Star Wars fanatic, he’s upgrading the droid’s capabilities in his free time, when not engrossed in his work as an engineering manager at a Copenhagen-based company. He’s also co-founder and chief technology officer of syntheticAIdata, a member of the NVIDIA Inception program for cutting-edge startups.

The company, which creates vision AI models with cost-effective synthetic data, uses a connector to the NVIDIA Omniverse platform for building and operating 3D tools and applications.

About the Maker

Named a Jetson AI Specialist by NVIDIA and an AI “Most Valuable Professional” by Microsoft, Vuksic got started with artificial intelligence and IT about a decade ago when working for a startup that classified tattoos with vision AI.

Since then, he’s worked as an engineering and technical manager, among other roles, developing IT strategies and solutions for various companies.

Robotics has always interested him, as he was a huge sci-fi fan growing up.

“Watching Star Wars and other films, I imagined how robots might be able to see and do stuff in the real world,” said Vuksic, also a member of the NVIDIA Developer Program.

Now, he’s enabling just that with the pit droid project powered by the NVIDIA Jetson platform, which the developer has used since the launch of its first product nearly a decade ago.

Vuksic reads to the pit droid.

Apart from tinkering with computers and bots, Vuksic enjoys playing the bass guitar in a band with his friends.

His Inspiration

Vuksic built the pit droid for both fun and educational purposes.

As a frequent speaker at tech conferences, he takes the pit droid on stage to engage with his audience, demonstrate how it works and inspire others to build something similar, he said.

Vuksic, his startup co-founder Sherry List and the pit droid present at the Techorama conference in Antwerp, Belgium.

“We live in a connected world — all the things around us are exchanging data and becoming more and more automated,” he added. “I think this is super exciting, and we’ll likely have even more robots to help humans with tasks.”

Using the NVIDIA Jetson platform, Vuksic is at the forefront of robotics innovation, along with an ecosystem of developers using edge AI.

His Jetson Project

Vuksic’s pit droid project, which took him four months, began with 3D printing its body parts and putting them all together.

He then equipped the bot with the Jetson Orin Nano Developer Kit as the brain in its head, which can move in all directions thanks to two motors.

Vuksic places an NVIDIA Jetson Orin Nano Developer Kit in the pit droid’s head.

The Jetson Orin Nano enables real-time processing of the camera feed. “It’s truly, truly amazing to have this processing power in such a small box that fits in the droid’s head,” said Vuksic.

He also uses Microsoft Azure to process the data in the cloud for object-detection training.

“My favorite part of the project was definitely connecting it to the Jetson Orin Nano, which made it easy to run the AI and make the droid move according to what it sees,” said Vuksic, who wrote a step-by-step technical guide to building the bot, so others can try it themselves.

“The most challenging part was traveling with the droid — there was a bit of explanation necessary when I was passing security and opened my bag which contained the robot in parts,” the developer mused. “I said, ‘This is just my big toy!’”

Learn more about the NVIDIA Jetson platform.

Read More

How to Build Generative AI Applications and 3D Virtual Worlds


To grow and succeed, organizations must continuously focus on technical skills development, especially in rapidly advancing areas of technology, such as generative AI and the creation of 3D virtual worlds.  

NVIDIA Training, which equips teams with skills for the age of AI, high performance computing and industrial digitalization, has released new courses that cover these technologies. The program has already equipped hundreds of thousands of students, developers, researchers and data scientists with critical technical skills.  

With its latest courses, NVIDIA Training is enabling organizations to fully harness the power of generative AI and virtual worlds, which are transforming the business landscape. 

Get Started Building Generative AI Applications     

Generative AI is revolutionizing the ways organizations work. It enables users to quickly generate new content based on a variety of inputs, including text, images, sounds, animation, 3D models and other data types.  

New NVIDIA Training courses on gen AI include:         

  • Generative AI Explained — Generative models are accelerating application development for many use cases, including question answering, summarization, textual entailment, and 2D and 3D image and audio creation. In this two-hour course, Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, provides an overview of gen AI’s major developments, where it stands now and what it could be capable of in the future. He’ll discuss technical details and popular generative AI applications, as well as how businesses can responsibly use the technology. 
  • Generative AI With Diffusion Models — Thanks to improvements in computing power and scientific theory, generative AI is more accessible than ever. Get started with gen AI application development with this hands-on course where students will learn how to build a text-to-image generative AI application using the latest techniques. Generate images with diffusion models and refine the output with various optimizations. Build a denoising diffusion model from the U-Net architecture to add context embeddings for greater user control. 
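
As a taste of what the diffusion course builds toward, the forward (noising) half of a denoising diffusion model fits in a few lines: data is blended with Gaussian noise according to a variance schedule, and the network is later trained to undo that corruption. This NumPy sketch uses a standard linear schedule and is illustrative only, not the course's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear variance schedule
alphas_cumprod = np.cumprod(1.0 - betas)    # cumulative signal fraction

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form: scaled data plus noise."""
    noise = rng.standard_normal(x0.shape)
    a = alphas_cumprod[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise

x0 = np.ones((8, 8))          # toy "image"
x_late = q_sample(x0, T - 1)  # by the final step, almost pure noise
assert alphas_cumprod[-1] < 1e-2   # nearly no signal remains at t = T-1
```

The denoising model (a U-Net in the course) is then trained to predict the added noise at each step, which is what lets it generate images by reversing the process from pure noise.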

To see a complete list of courses on generative AI and large language models, check out these NVIDIA Training Learning Paths. 

Building Digital 3D Worlds

Advancements in digital world-building are transforming media and entertainment, architecture, engineering, construction and operations, factory planning and avatar creation, among other industries.

Immersive 3D environments elevate user engagement and enable innovative solutions to real-world problems. NVIDIA Omniverse, a platform for connecting and developing 3D tools and applications, lets technical artists, designers and engineers quickly assemble complex and physically accurate simulations and 3D scenes in real time, while seamlessly collaborating with team members.

New NVIDIA Training courses on this topic include:

  • Essentials of USD in NVIDIA Omniverse — Universal Scene Description, or OpenUSD, is transforming 3D workflows across industries. It’s an open standard enabling 3D artists and developers to connect, compose and simulate in the metaverse. Students will learn what makes OpenUSD unique for designing 3D worlds. The training covers data modeling using primitive nodes, attributes and relationships, as well as custom schemas and composition for scene assembly and collaboration. 
  • Developing Omniverse Kit Applications — Learn how to use the NVIDIA Omniverse Kit development framework to build applications, custom extensions and microservices. Applications may comprise many extensions working in concert to address specific 3D workflows, like industrial digitalization and factory planning. Students will use Omniverse reference applications, like Omniverse USD Composer and USD Presenter, to kickstart their own application development.
  • Bootstrapping Computer Vision Models With Synthetic Data — Learn how to use NVIDIA Omniverse Replicator, a core Omniverse extension, to accelerate the development of computer vision models. Generate accurate, photorealistic, physics-conforming synthetic data to ease the expensive, time-consuming task of labeling real-world data. Omniverse Replicator accelerates AI development at scale and reduces time to production. 
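
The building blocks the USD course covers — prims, typed attributes and relationships — look like this in a minimal `.usda` file (the names here are invented for illustration):

```usda
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    # A typed prim with attributes
    def Sphere "Ball"
    {
        double radius = 2.0
        color3f[] primvars:displayColor = [(0.1, 0.4, 0.8)]
    }

    # A relationship targeting another prim by path
    def Scope "Notes"
    {
        custom rel focus = </World/Ball>
    }
}
```

Composition features such as sublayers and references then let multiple artists assemble and override scenes like this one without touching each other's files.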

To see a complete list of courses on graphics and simulation, check out these NVIDIA Training Learning Paths. 

Wide Portfolio of Courses 

NVIDIA Training offers courses and resources to help individuals and organizations develop expertise in using NVIDIA technologies to fuel innovation. In addition to those above, a wide range of courses and workshops covering AI, deep learning, accelerated computing, data science, networking and infrastructure are available to explore in the training catalog. 

At the SIGGRAPH conference session “Reimagine Your Curriculum With OpenUSD and NVIDIA Omniverse,” Laura Scholl, senior content developer on the Omniverse team at NVIDIA, will discuss how to incorporate OpenUSD and Omniverse into an educational setting using teaching kits, programs for educators and other resources available from NVIDIA.  

Learn about the latest advances in generative AI, graphics and more by joining NVIDIA at SIGGRAPH. NVIDIA founder and CEO Jensen Huang will deliver a keynote address on Tuesday, Aug. 8, at 8 a.m. PT.

Read More

An Ultimate GFN Thursday: 41 New Games, Plus ‘Baldur’s Gate 3’ Full Release and First Bethesda Titles to Join the Cloud in August


The Ultimate upgrade is complete — GeForce NOW Ultimate performance is now streaming throughout North America and Europe, delivering RTX 4080-class power for gamers across these regions. Celebrate this month with 41 new games, on top of the full release of Baldur’s Gate 3 and the first Bethesda titles coming to the cloud, as the NVIDIA and Microsoft partnership benefits gamers everywhere.

And catch GeForce NOW at QuakeCon — the popular bring-your-own-PC mega-event running Aug. 10-13 — where the in-person and digital GeForce NOW Ultimate challenge will kick off.

Plus, game on with gaming peripherals and accessories company SteelSeries, which will be giving away codes for three-day GeForce NOW Ultimate and Priority memberships, along with popular GeForce NOW games and in-game goodies.

The Ultimate Rollout

Ultimate members everywhere have unlocked their maximum PC gaming potential.

The rollout of GeForce RTX 4080 SuperPODs across the world this year lit up cities with cutting-edge performance from the cloud. RTX 3080 members were introduced to the Ultimate membership, featuring gaming at 4K resolution and 120 frames per second — or even up to 240 fps with ultra-low latency, thanks to NVIDIA Reflex technology.

Ultimate memberships also bring the benefits of the NVIDIA Ada Lovelace architecture — including DLSS 3 with frame generation for the highest frame rates and visual fidelity, and full ray tracing for the most immersive, cinematic, in-game lighting experiences. Plus, ultrawide resolutions were supported for the first time ever from the cloud.

And members can experience it all without having to upgrade a single piece of hardware. With RTX 4080-class servers fully deployed, gamers can now experience ultra-high fps streaming from GeForce RTX 4080-class power in the cloud and see how an Ultimate membership raises the bar on cloud gaming.

To celebrate, the GeForce NOW team will be showing off Ultimate at QuakeCon with a special GeForce NOW Ultimate challenge. Members can register now to be first in line for a free one-day upgrade to an Ultimate membership and see how their skills improve with 240 fps gaming when the challenge launches next week. Top scorers at QuakeCon can win various prizes, as can those participating in the challenge from home. Keep an eye on GeForce NOW’s Twitter and Facebook accounts for more details.

It’s Party Time

The best things to pair with an Ultimate membership are the best games in the cloud. Members have been enjoying early access to Baldur’s Gate 3 from Larian Studios, the role-playing game set in the world of Dungeons and Dragons that raised the bar for the RPG genre.

Roll a nat 20 when streaming from the cloud.

Now, the full PC game launches and is streamable from GeForce NOW today. Choose from a wide selection of D&D races and classes, or play as an origin character with a handcrafted background. Adventure, loot, battle and romance while journeying through the Forgotten Realms and beyond. The game features a turn-based combat system, a dialogue system with choices and consequences, and a rich story that adapts to player actions and decisions.

Stream it across devices, whether solo or with others in online co-op mode. Those playing from the cloud will be able to enjoy it without worrying about download times or system requirements.

The Ultimate Shooters

Several titles from Bethesda’s well-known franchises — DOOM, Quake and Wolfenstein — will join the cloud this month for a mix of modern and classic first-person shooter games to enjoy across nearly all devices.

Feel the heat with the DOOM franchise, recognizable through its fast-paced epic gameplay and iconic heavy-metal soundtrack. Players take on the role of the DOOM Slayer to fight hordes of invading demons.

In addition, the Quake series features single- and multiplayer campaigns with gritty gameplay and epic music scores, letting members enjoy both sides of the legendary series.

The first Bethesda titles to heat up the cloud.

The modern Wolfenstein games feature intense first-person combat against oversized Nazi robots, hulking super soldiers and elite shock troops. Discover an unfamiliar world ruled by a familiar enemy — one that’s changed and twisted history as you know it.

Experience all of these iconic franchises with an Ultimate or Priority membership. Priority members get faster access to GeForce RTX servers in the cloud over free members, along with up to six-hour gaming sessions. Ultimate members can raze their enemies in epic 4K and ultrawide resolution, with up to eight-hour gaming sessions.

Ready, Set, Play!

Game on!

GeForce NOW and SteelSeries are rewarding gamers throughout August as part of SteelSeries’ Game On sweepstakes.

Each week, gamers will have a chance to win three-day GeForce NOW Ultimate and Priority codes bundled with popular titles supported in the cloud — RuneScape, Genshin Impact, Brawlhalla and Dying Light 2 — as well as in-game goodies.

Check GFN Thursday each week to see what the reward drop will be, and head over to the SteelSeries Games site for more details on how to enter. Plus, save 20% on premium SteelSeries products this month with code “NVIDIAGAMEON” — they’re perfect to pair with GeForce NOW cloud gaming.

Members can look forward to the 10 new games joining this week:

  • F1 Manager 2023 (New release on Steam, July 31)
  • Bloons TD 6 (Free on Epic Games Store, Aug. 3)
  • Bloons TD Battles 2 (Steam)
  • Brick Rigs (Steam)
  • Demonologist (Steam)
  • Empires of the Undergrowth (Steam)
  • Stardeus (Steam)
  • The Talos Principle (Steam)
  • Teenage Mutant Ninja Turtles: Shredder’s Revenge (Steam)
  • Yet Another Zombie Survivors (Steam)

And here’s what the rest of August looks like:

  • WrestleQuest (New release on Steam, Aug. 7)
  • I Am Future (New release on Steam, Aug. 8)
  • Atlas Fallen (New release on Steam, Aug. 10)
  • Sengoku Dynasty (New release on Steam, Aug. 10)
  • Tales & Tactics (New release on Steam, Aug. 10)
  • Moving Out 2 (New release on Steam, Aug. 15)
  • Hammerwatch II (New release on Steam, Aug. 15)
  • Desynced (New release on Steam, Aug. 15)
  • Wayfinder (New release on Steam, Aug. 15)
  • The Cosmic Wheel Sisterhood (New release on Steam, Aug. 16)
  • Gord (New release on Steam, Aug. 17)
  • Book of Hours (New release on Steam, Aug. 17)
  • Shadow Gambit: The Cursed Crew (New release on Steam, Aug. 17)
  • The Texas Chain Saw Massacre (New release on Steam, Aug. 18)
  • Bomb Rush Cyberfunk (New release on Steam, Aug. 18)
  • Jumplight Odyssey (New release on Steam, Aug. 21)
  • Blasphemous 2 (New release on Steam, Aug. 24)
  • RIDE 5 (New release on Steam, Aug. 24)
  • Sea of Stars (New release on Steam, Aug. 29)
  • Trine 5: A Clockwork Conspiracy (New release on Steam, Aug. 31)
  • Deceit 2 (New release on Steam, Aug. 31)
  • Inkbound (Steam)
  • LEGO Brawls (Epic Games Store)
  • Regiments (Steam)
  • Session (Epic Games Store)
  • Smalland: Survive the Wilds (Epic Games Store)
  • Superhot (Epic Games Store)
  • Terra Invicta (Epic Games Store)
  • Wall World (Steam)
  • Wild West Dynasty (Epic Games Store)
  • WRECKFEST (Epic Games Store)
  • Xenonauts 2 (Epic Games Store)

A Jammin’ July

On top of the 14 games announced in July, an extra four titles joined the cloud last month:

  • Let’s School (New release on Steam, July 26)
  • Grand Emprise: Time Travel Survival (New release on Steam, July 27)
  • Dragon’s Dogma: Dark Arisen (Steam)
  • OCTOPATH TRAVELER (Epic Games Store)

What are you looking forward to streaming this month? Let us know your answer on Twitter or in the comments below.

Read More

Cuddly 3D Creature Comes to Life in Father-Son Collaboration This Week ‘In the NVIDIA Studio’


Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows.

Principal NVIDIA artist and 3D expert Michael Johnson creates highly detailed art that’s both technically impressive and emotionally resonant. It’s evident in his latest piece, Father-Son Collaboration, which draws on inspiration from the vivid imagination of his son and is highlighted this week In the NVIDIA Studio.

“I love how art can bring joy and great memories to others — great work makes me feel special to be a human and an artist,” said Johnson. “Art can flip people’s perspectives and make them feel something completely different.”

Young minds inspire generations of artists.

“The story behind this piece is that I simply wanted to inspire my son and teach him how things can be perceived — how people can be inspired by others’ art,” said Johnson, who could tell that his son — a doodler himself — often considered his own artwork not good enough.

“I wanted to show him what I saw in his art and how it inspired me,” Johnson said.

Through this project, Johnson also aimed to demonstrate the NVIDIA Studio-powered workflows of art studios and concept artists across the world.

This creature is living its best life.

NVIDIA RTX GPU technology plays a pivotal role in accelerating Johnson’s creativity. “As an artist, I care about quick feedback and stability,” he said. “My NVIDIA RTX A6000 graphics card speeds up the rendering process so I can quickly iterate.”

For Father-Son Collaboration, Johnson first opened Autodesk Maya to model the creature’s basic 3D shapes. His GPU-accelerated viewport enabled fast, interactive 3D modeling.


Next, he imported models into ZBrush for further sculpting, freestyling and details. “After I had my final sculpt down, I took the model into Rizom-Lab RizomUV software to lay out the UVs,” Johnson said. UV mapping is the process of projecting a 3D model’s surface onto a 2D image for texture mapping, making the model easier to texture and shade later in the creative workflow.
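To picture what UV mapping does, here is a minimal, purely illustrative Python sketch of the simplest possible approach — a planar projection that flattens vertices onto one plane (real unwrapping tools such as RizomUV use far more sophisticated, seam-aware algorithms; the function and mesh below are made up for illustration):

```python
# Minimal sketch of planar UV projection: drop each 3D vertex onto the
# XZ plane and normalize into the [0, 1] x [0, 1] texture space.
def planar_uvs(vertices):
    xs = [x for x, _, _ in vertices]
    zs = [z for _, _, z in vertices]
    min_x, max_x = min(xs), max(xs)
    min_z, max_z = min(zs), max(zs)

    def norm(value, lo, hi):
        # Map value from [lo, hi] into [0, 1]; a degenerate range collapses to 0.
        return 0.0 if hi == lo else (value - lo) / (hi - lo)

    return [(norm(x, min_x, max_x), norm(z, min_z, max_z))
            for x, _, z in vertices]

# A simple quad: each corner lands on a corner of the UV square.
quad = [(0, 0, 0), (2, 0, 0), (2, 1, 4), (0, 1, 4)]
print(planar_uvs(quad))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Once every vertex has a 2D coordinate like this, a texture image can be painted and wrapped back onto the model — which is what the later Substance 3D Painter step relies on.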


Johnson then used Adobe Substance 3D Painter to apply standard and custom textures and shaders on the character.

“Substance 3D Painter is really great because it displays the final look of the textures without bringing it into an external renderer,” said Johnson.

His GPU unlocked RTX-accelerated light and ambient occlusion baking, optimizing assets in mere seconds.


With the textures complete, Johnson imported his models back into Autodesk Maya for hair, grooming, lighting and rendering. For the hair and fur, the artist used XGen, Autodesk Maya’s built-in instancing tool. Autodesk Maya also offers third-party support of GPU-accelerated renderers such as Chaos V-Ray, OTOY OctaneRender and Maxon Redshift.

“Redshift is great — and having a great GPU makes renders really quick,” Johnson added. Redshift’s RTX-accelerated final-frame rendering with AI-powered OptiX denoising exported files with plenty of time to spare.

Johnson put the final touches on Father-Son Collaboration in Adobe Photoshop. With access to over 30 GPU-accelerated features, including blur gallery, object selection and perspective warp, he applied the background and added minor touch-ups to complete the piece.


The joy, awe and wonderment he’d hoped to evoke in his son came to fruition when Johnson finally shared the piece.

From a son’s concept to a father’s creation.

“Art is one of the rare things in life that really has no end goal — as it’s really about the process, rather than the result,” Johnson said. “Every day, you learn something new, grow and see things in different ways.”

Principal NVIDIA artist and 3D expert Michael Johnson.

Check out Johnson’s portfolio on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Learn about the latest with OpenUSD and Omniverse at SIGGRAPH, running August 6-10. Take advantage of showfloor experiences like hands-on labs, special events and demo booths — and don’t miss NVIDIA founder and CEO Jensen Huang’s keynote address on Tuesday, Aug. 8, at 8 a.m. PT. 

Read More

NVIDIA Helps Forge Forum to Set OpenUSD Standard for 3D Worlds


NVIDIA joined Pixar, Adobe, Apple and Autodesk today to found the Alliance for OpenUSD, a major leap toward unlocking the next era of 3D graphics, design and simulation.

The group will standardize and extend OpenUSD, the open-source Universal Scene Description framework that’s the foundation of interoperable 3D applications and projects ranging from visual effects to industrial digital twins.

Several leading companies in the 3D ecosystem already signed on as the alliance’s first general members — Cesium, Epic Games, Foundry, Hexagon, IKEA, SideFX and Unity.

Standardizing OpenUSD will accelerate its adoption, creating a foundational technology that will help today’s 2D internet evolve into a 3D web. Many companies are already working with NVIDIA to pioneer this future.

From Skyscrapers to Sports Cars

OpenUSD is the foundation of NVIDIA Omniverse, a development platform for connecting and building 3D tools and applications. Omniverse is helping companies like Heavy.AI, Kroger and Siemens build and test physically accurate simulations of factories, retail locations, skyscrapers, sports cars and more.

For IKEA, OpenUSD represents “a nonproprietary standard format to author and store 3D content to connect our value chain even closer, and develop home furnishing solutions to a lower price,” Martin Enthed, an innovation manager at IKEA, said in a press release the alliance issued today.

“By joining the alliance, we’re demonstrating our dedication to the advantages that OpenUSD provides our clients when linking with cloud-based platforms, including Nexus, Hexagon’s manufacturing platform, HxDR, Hexagon’s digital reality platform, and NVIDIA Omniverse to build innovative solutions in their industries,” said Burkhard Boeckem, CTO of Hexagon.

The Origins of OpenUSD

Pixar started work on USD in 2012 as a 3D foundation for its feature films, offering interoperability across data and workflows. The company made this powerful, multifaceted technology open source four years later, so anyone can use OpenUSD and contribute to its development.

A breakdown of a scene from Pixar’s “Coco” contrasted with the final image. USD was instrumental in creating the film’s complex world. © Disney/Pixar

OpenUSD supports the requirements of building virtual worlds — like geometry, cameras, lights and materials. It also includes features necessary for scaling to large, complex datasets, and it’s tremendously extensible, enabling the technology to be adapted to workflows beyond visual effects.

OpenUSD enables real-time collaboration.
Diagram of OpenUSD demonstrating its power as a technology for large-scale industrial workflows.

One unique capability of OpenUSD is its layering system, which lets users collaborate in real time without stepping on each other’s toes. For example, one artist can model a scene while others create the lighting for it.
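As an illustration of how layers compose, an OpenUSD stage’s root layer can list sublayers, with earlier entries in the list holding stronger opinions. A minimal hypothetical `.usda` sketch (the file names are made up for this example):

```usda
#usda 1.0
(
    subLayers = [
        @lighting.usda@,
        @modeling.usda@
    ]
)

def Xform "Creature"
{
}
```

Because `lighting.usda` appears earlier in the list, its opinions override matching opinions in `modeling.usda` — so a lighting artist and a modeler can each work in their own file while both compose nondestructively into the same stage.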

Forging a Shared Standard

As its first priority, the alliance will develop a specification describing the core functionality of OpenUSD. That will give tool builders a recipe they can implement, encouraging adoption of the open standard across the widest possible array of use cases.

The alliance will operate as part of the Joint Development Foundation (JDF), a branch of the Linux Foundation. The JDF provides a path to turn written specifications into industry standards suitable for adoption by globally respected bodies like the International Organization for Standardization (ISO).

From OpenUSD to Omniverse

NVIDIA has a deep commitment to OpenUSD and working with ecosystem partners to accelerate the framework’s evolution and adoption across industries.

At last year’s SIGGRAPH, NVIDIA detailed a multiyear roadmap of contributions it’s making to enable OpenUSD use in architecture, engineering, manufacturing and more. As part of the alliance, NVIDIA will present an update on these plans at this year’s conference.

Help Build the 3D Future

Collaboration is key to the alliance and to the evolution of OpenUSD.

To get involved or learn more, attend NVIDIA’s keynote, OpenUSD day, hands-on labs and other showfloor activities at SIGGRAPH, running Aug. 6-10.

The Alliance for OpenUSD also will host a keynote panel session at the Academy Software Foundation’s Open Source Days 2023.

For a deeper dive on OpenUSD:

Read More