Get up to Speed: Five Reasons Not to Miss NVIDIA CEO Jensen Huang’s GTC Keynote Sept. 20

Think fast. Enterprise AI, new gaming technology, the metaverse and the 3D internet, and advanced AI technologies tailored to just about every industry are all coming your way.

NVIDIA founder and CEO Jensen Huang’s keynote at NVIDIA GTC on Tuesday, Sept. 20, is the best way to get ahead of all these trends.

NVIDIA’s virtual technology conference, which takes place Sept. 19-22, sits at the intersection of business, technology, science and the arts in a way no other event can.

This GTC will focus on neural graphics — which bring together AI and visual computing to create stunning new possibilities — the metaverse, an update on large language models, and the changes coming to every industry with the latest generation of recommender systems.

The free online gathering features speakers from every corner of industry, academia and research.

Speakers include Johnson & Johnson CTO Rowena Yao; Boeing Vice President Linda Hapgood; Polestar COO Dennis Nobelius; Deutsche Bank CTO Bernd Leukert; UN Assistant Secretary-General Ahunna Eziakonwa; UC San Diego distinguished professor Henrik Christensen; and hundreds more.

For those who want to get hands-on, GTC features developer sessions for both newbies and veteran developers.

Two-hour training labs are included for those who sign up for a free conference pass. Those who want to dig deeper can sign up for one of 21 full-day virtual hands-on workshops at a special price of $149, and for group purchases of more than five seats, we are offering a special of $99 per seat.

Finally, GTC offers networking opportunities that bring together people working on the most challenging problems of our time from all over the planet.

Register free and start loading up your calendar with content today.

The post Get up to Speed: Five Reasons Not to Miss NVIDIA CEO Jensen Huang’s GTC Keynote Sept. 20 appeared first on NVIDIA Blog.


AI on the Stars: Hyperrealistic Avatars Propel Startup to ‘America’s Got Talent’ Finals

More than 6 million pairs of eyes will be on real-time AI avatar technology in this week’s finale of America’s Got Talent — currently the second-most popular primetime TV show in the U.S.

Metaphysic, a member of the NVIDIA Inception global network of technology startups, is one of 11 acts competing for $1 million and a headline slot in AGT’s Las Vegas show in tonight’s final on NBC. It’s the first AI act to reach the AGT finals.

Called “the best act of the series so far” and “one of the most unique things we’ve ever seen on this show” by notoriously tough judge Simon Cowell, the team’s performances involve a demonstration of photorealistic AI avatars, animated in real time by singers on stage.

In Metaphysic’s semifinals act, three singers — Daniel Emmet, Patrick Dailey and John Riesen — lent their voices to AI avatars of Cowell, fellow judge Howie Mandel and host Terry Crews, performing the opera piece “Nessun Dorma.” For the finale, the team plans to “bring back one of the greatest rock and roll icons of all time,” but it’s keeping the audience guessing.

The AGT winner will be announced on Wednesday, Sept. 14.

“Metaphysic’s history-making run on America’s Got Talent has allowed us to showcase the application of AI on one of the most-watched stages in the world,” said the startup’s co-founder and CEO Tom Graham, who appears on the show alongside co-founder Chris Umé.

“America’s Got Talent,” “Auditions,” Episode 1702. Pictured (L to R): Daniel Emmet, Tom Graham and Chris Umé presenting Metaphysic’s audition. (Photo by Trae Patton/NBC, courtesy of Metaphysic.)

“While overall awareness of synthetic media has grown in recent years, Metaphysic’s AGT performances provide a front-row seat into how this technology could impact the future of everything, from the internet to entertainment to education,” he said.

Capturing Imaginations While Raising AI Awareness

Founded in 2021, London-based Metaphysic is developing AI technologies to help creators build virtual identities and synthetic content that is hyperrealistic, moving beyond the so-called uncanny valley.

The team initially went viral last year for DeepTomCruise, a TikTok channel featuring videos where actor Miles Fisher animated an AI avatar of Tom Cruise. The posts garnered around 100 million views and “provided many people with their first introduction to the incredible capabilities of synthetic media,” Graham said.

By bringing its AI avatars to the AGT stage, the company has been able to reach millions more viewers — with sophisticated camera rigs and performers on stage demonstrating how the technology works live and in real time.

AI, GPU Acceleration Behind the Curtain

Metaphysic’s AI avatar software pipeline includes variants of the popular StyleGAN model developed by NVIDIA Research. The team, which uses the TensorFlow deep learning framework, relies on NVIDIA CUDA software to accelerate its work on NVIDIA GPUs.

“Without NVIDIA hardware and software libraries, we wouldn’t be able to pull off these hyperreal results to the level we have,” said Jo Plaete, director of product innovation at Metaphysic. “The computation provided by our NVIDIA hardware platforms allows us to train larger and more complex models at a speed that allows us to iterate on them quickly, which results in those most perfectly tuned results.”

For both AI model development and inference during live performances, Metaphysic uses NVIDIA DGX systems as well as other workstations and data center configurations with NVIDIA GPUs — including NVIDIA A100 Tensor Core GPUs.

“Excellent hardware support has helped us troubleshoot things really fast when in need,” said Plaete. “And having access to the research and engineering teams helps us get a deeper understanding of the tools and how we can leverage them in our pipelines.”

Following AGT, Metaphysic plans to pursue several collaborations in the entertainment industry. The company has also launched a consumer-facing platform, called Every Anyone, that enables users to create their own hyperrealistic AI avatars.

Discover the latest in AI and metaverse technology by registering free for NVIDIA GTC, running online Sept. 19-22. Metaphysic will be part of the panel “AI for VCs: NVIDIA Inception Global Startup Showcase.”

Header photo by Chris Haston/NBC, courtesy of Metaphysic



Concept Designer Ben Mauro Delivers Epic 3D Trailer ‘Huxley’ This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

The gripping sci-fi comic Huxley was brought to life in an action-packed 3D trailer full of excitement and intrigue this week In the NVIDIA Studio.

3D artist, concept designer and storyteller Ben Mauro has contributed to some of the world’s biggest entertainment franchises. He’s worked on movies like Elysium, Valerian and Metal Gear Solid, as well as video games such as Halo Infinite and Call of Duty: Black Ops III.

Mauro has met many inspirational artists throughout his storied career, and he collaborated with a few of them to bring Huxley to life. He called the 3D trailer a year’s worth of work, worth every minute spent — following his decade-long process of creating the comic itself.

“Huxley” introduces a vibrant, futuristic world.

In Mauro’s fantastical, fictional world, two post-apocalyptic scavengers stumble upon a forgotten treasure map in the form of an ancient sentient robot, finding themselves amidst a mystery of galactic scale.

In designing Huxley the comic, Mauro worked old-school magic with a pad and pencil, sketching characters and environments before importing visuals into Adobe Photoshop. His NVIDIA GeForce RTX 3090 GPU provided fast performance and AI features to speed up his creative workflow.

Early concept art of “Huxley.” “What has become of me?” it thought.

The artist used Photoshop’s “Artboards” to quickly view reference artwork for inspiration, as well as “Image Size” to preserve critical details — both features accelerated by his GPU. To finish up the comic, Mauro turned to Blender software to create mockups and block out scenes, intending to later convert the 2D work back into 3D.

Camera shots were matched in Blender.

With 3D trailer production in progress, matte painter and environment artist Steve Cormann used Mauro’s Blender models as a convenient starting point, virtually a one-to-one match to the desired 3D outcome.

Advanced modeling in ZBrush.

Cormann, who specializes in Autodesk 3ds Max software, applied advanced modeling techniques in building the scene. 3ds Max has a GPU-accelerated viewport that enables fast, interactive 3D modeling. It also lets artists choose their preferred 3D renderer — in Cormann’s case, Maxon’s Redshift, where combining GPU acceleration and AI-powered OptiX denoising resulted in lightning-fast final-frame rendering.

Applying textures in Adobe Substance 3D Painter.

This proved useful as Cormann exported scenes into Adobe Substance 3D Painter to apply various textures and colors. RTX-accelerated light- and ambient-occlusion features baked and optimized assets within the scenes in mere seconds, giving Cormann the option to experiment with different visual aesthetics quickly and easily.

All of the hero characters were textured from scratch by artist Antonio Esparza and team.

Enter more of Mauro’s collaborators: lead character artist Antonio Esparza and his team, who spent significant time in 3ds Max to refine individual scenes and generate the staggering number of hero characters. This included uniquely texturing each of the characters and props. Esparza said his GeForce RTX 2080 SUPER GPU allowed him to modify characters and export renders dramatically faster than his previous hardware.

Esparza joked that before his hardware upgrade, “Most of the last hours of the day, it was me here, you know, like, waiting.” Director Sava Živković would say to Esparza, “Turn the lights off, Antonio, we don’t want to see that progress bar.”

Meanwhile, Živković turned his focus to lighting in 3ds Max. His trusty GeForce RTX 2080 Ti GPU enabled RTX-accelerated AI denoising with Maxon’s Redshift, resulting in photorealistic visuals while remaining highly interactive. This let the director tweak and modify scenes freely and easily.

City scenes were brought to life using Anima, a simple crowd-simulation software with off-the-shelf character assets.

With renders and textures in a good place, rigging and modeling artist Lucas Salmon began building meshes and rigging in 3ds Max to prepare for animation. Motion capture work was then outsourced to the well-regarded Belgrade-based studio, Take One. With 54 Vicon cameras and one of the biggest capture stages in Europe, it’s no surprise the animation quality in Huxley is world class.

Visual effects were added in Adobe After Effects.

Živković then deployed Adobe After Effects to composite the piece. Roughly 90% of the visual effects (VFX) were accomplished with built-in tools, stock footage and various plugins. Key 3D VFX such as ship smoke trails were simulated in Blender and then added in comp. The ability to move between multiple apps quickly is a testament to the power of the RTX GPU, Živković said.

“I love the RTX 3090 GPU for the extra VRAM, especially for increasingly bigger scenes where I want everything to look really nice and have quality texture sizes,” he said.

Photorealistic details create an immersive experience for the trailer’s viewers.

Satisfied with the trailer, Mauro reflected on artistry. “As creatives, if we don’t see the film, game, or universe we want to experience in our entertainment, we’re in the position to create it with our hard-earned skills. I feel this is our duty as artists and creators to leave behind more imagined worlds than existed before we got there, to inspire the world and the next generation of artists/creators to push things even further than we did,” he said.

Concept designer and storyteller Ben Mauro.

Access Mauro’s impressive portfolio on his website.

“Huxley” the movie is in development.

Huxley is an entire world rich in history and intrigue, currently being developed into a feature film and TV series.

Onwards and Upwards

Many of the techniques Mauro deployed can be learned by viewing free Studio Session tutorials on the NVIDIA Studio YouTube channel.

Learn core foundational warm-up exercises to inspire and ignite creative thinking, discover how to design sci-fi objects such as props, and transform 2D sketches into 3D models.

Also, in the spirit of learning, the NVIDIA Studio team has posed a challenge for the community to show off personal growth. Participate in the #CreatorsJourney challenge for a chance to be showcased on NVIDIA Studio social media channels.

Entering is easy. Post an older piece of artwork alongside a more recent one to showcase your growth as an artist. Follow and tag NVIDIA Studio on Instagram, Twitter or Facebook, and use the #CreatorsJourney tag to join.

Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.



NVIDIA Hopper Sweeps AI Inference Benchmarks in MLPerf Debut

In their debut on the MLPerf industry-standard AI benchmarks, NVIDIA H100 Tensor Core GPUs set world records in inference on all workloads, delivering up to 4.5x more performance than previous-generation GPUs.

The results demonstrate that Hopper is the premium choice for users who demand utmost performance on advanced AI models.

Additionally, NVIDIA A100 Tensor Core GPUs and the NVIDIA Jetson AGX Orin module for AI-powered robotics continued to deliver overall leadership inference performance across all MLPerf tests: image and speech recognition, natural language processing and recommender systems.

The H100, aka Hopper, raised the bar in per-accelerator performance across all six neural networks in the round. It demonstrated leadership in both throughput and speed in separate server and offline scenarios.

Hopper performance on MLPerf AI inference tests
NVIDIA H100 GPUs set new high watermarks on all workloads in the data center category.

The NVIDIA Hopper architecture delivered up to 4.5x more performance than NVIDIA Ampere architecture GPUs, which continue to provide overall leadership in MLPerf results.
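A headline figure like “up to 4.5x more performance” comes from normalizing each submission’s throughput to a per-accelerator number before comparing. The sketch below illustrates the arithmetic only; the throughput values are hypothetical, not actual MLPerf submissions.

```python
# Illustrative only: how a per-accelerator speedup figure like "4.5x" is
# derived from raw throughput numbers. Sample values are hypothetical.

def per_accelerator_speedup(new_throughput, new_gpus, old_throughput, old_gpus):
    """Normalize each submission to samples/sec per GPU, then take the ratio."""
    return (new_throughput / new_gpus) / (old_throughput / old_gpus)

# Hypothetical offline-scenario throughputs (samples/sec):
speedup = per_accelerator_speedup(
    new_throughput=36_000, new_gpus=1,   # e.g., a single newer GPU
    old_throughput=64_000, old_gpus=8,   # e.g., an 8-GPU previous-gen system
)
print(f"{speedup:.1f}x")  # 4.5x
```

Normalizing per accelerator is what makes single-GPU and multi-GPU submissions comparable in the per-accelerator rankings.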

Thanks in part to its Transformer Engine, Hopper excelled on the popular BERT model for natural language processing. It’s among the largest and most performance-hungry of the MLPerf AI models.

These inference benchmarks mark the first public demonstration of H100 GPUs, which will be available later this year. The H100 GPUs will participate in future MLPerf rounds for training.

A100 GPUs Show Leadership

NVIDIA A100 GPUs, available today from major cloud service providers and systems manufacturers, continued to show overall leadership in mainstream performance on AI inference in the latest tests.

A100 GPUs won more tests than any submission in data center and edge computing categories and scenarios. In June, the A100 also delivered overall leadership in MLPerf training benchmarks, demonstrating its abilities across the AI workflow.

Since their July 2020 debut on MLPerf, A100 GPUs have advanced their performance by 6x, thanks to continuous improvements in NVIDIA AI software.

NVIDIA AI is the only platform to run all MLPerf inference workloads and scenarios in data center and edge computing.

Users Need Versatile Performance

The ability of NVIDIA GPUs to deliver leadership performance on all major AI models makes users the real winners. Their real-world applications typically employ many neural networks of different kinds.

For example, an AI application may need to understand a user’s spoken request, classify an image, make a recommendation and then deliver a response as a spoken message in a human-sounding voice. Each step requires a different type of AI model.

The MLPerf benchmarks cover these and other popular AI workloads and scenarios — computer vision, natural language processing, recommendation systems, speech recognition and more. The tests ensure users will get performance that’s dependable and flexible to deploy.

Users rely on MLPerf results to make informed buying decisions, because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Amazon, Arm, Baidu, Google, Harvard, Intel, Meta, Microsoft, Stanford and the University of Toronto.

Orin Leads at the Edge

In edge computing, NVIDIA Orin ran every MLPerf benchmark, winning more tests than any other low-power system-on-a-chip. And it showed up to a 50% gain in energy efficiency compared to its debut on MLPerf in April.

In the previous round, Orin ran up to 5x faster than the prior-generation Jetson AGX Xavier module, while delivering an average of 2x better energy efficiency.

Orin leads MLPerf in edge inference
Orin delivered up to 50% gains in energy efficiency for AI inference at the edge.

Orin integrates into a single chip an NVIDIA Ampere architecture GPU and a cluster of powerful Arm CPU cores. It’s available today in the NVIDIA Jetson AGX Orin developer kit and production modules for robotics and autonomous systems, and supports the full NVIDIA AI software stack, including platforms for autonomous vehicles (NVIDIA Hyperion), medical devices (Clara Holoscan) and robotics (Isaac).

Broad NVIDIA AI Ecosystem

The MLPerf results show NVIDIA AI is backed by the industry’s broadest ecosystem in machine learning.

More than 70 submissions in this round ran on the NVIDIA platform. For example, Microsoft Azure submitted results running NVIDIA AI on its cloud services.

In addition, 19 NVIDIA-Certified Systems appeared in this round from 10 systems makers, including ASUS, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro.

Their work shows users can get great performance with NVIDIA AI both in the cloud and in servers running in their own data centers.

NVIDIA partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI platforms and vendors. Results in the latest round demonstrate that the performance they deliver to users today will grow with the NVIDIA platform.

All the software used for these tests is available from the MLPerf repository, so anyone can get these world-class results. Optimizations are continuously folded into containers available on NGC, NVIDIA’s catalog for GPU-accelerated software. That’s where you’ll also find NVIDIA TensorRT, used by every submission in this round to optimize AI inference.



GeForce NOW Supports Over 1,400 Games Streaming Instantly

This GFN Thursday marks a milestone: With the addition of six new titles this week, more than 1,400 games are now available to stream from the GeForce NOW library.

Plus, GeForce NOW members streaming to supported Smart TVs from Samsung and LG can get into their games faster with an improved user interface.

Your Games, Your Way

With more than 1,400 games streaming instantly on GeForce NOW, there’s always something new to play.

1,400 Games on GeForce NOW
A wide selection of games is ready to stream from the cloud.

Enjoy stunning stories like Mass Effect Legendary Edition or Life is Strange: True Colors, streaming to PCs and even Macs in 4K resolution with an RTX 3080 membership. Group up with friends in Lost Ark or betray them for fun in Among Us. Squad up for victory in Apex Legends, Rocket League and Counter-Strike: Global Offensive — and don’t worry about lagging behind, thanks to ultra-low latency.

For those craving something spooky, have a drop-dead good time ghost hunting in Phasmophobia or struggle to survive and slay in Dead by Daylight. Games like these sound scary good in 5.1 and 7.1 surround sound for Priority and 3080 members.

Build out your library, starting with over 100 free-to-play titles like League of Legends and Rumbleverse. RTX 3080 and Priority members can also experience real-time ray tracing in games like Dying Light 2, Loopmancer and Cyberpunk 2077, which launched a new 1.6 update this week, bringing even more content to Night City.

Take the action on the go with mobile devices. Fortnite on GeForce NOW with touch controls on mobile is available to all members, streaming through the Safari web browser on iOS and the GeForce NOW Android app. Or tap your way through Teyvat in Genshin Impact, streaming to mobile devices with touch controls.

With new games arriving on the cloud every week, the choices are endless.

Stream on TVs

GeForce NOW members streaming to Samsung and LG TVs can now quickly discover and easily launch top games through an improved UI.

Stream GeForce NOW on Samsung TVs
Turn on the TV and get right into gaming from the “Featured on GeForce NOW” row.

Samsung has integrated a “Featured on GeForce NOW” row in the Samsung Gaming Hub, streaming on select 2022 4K TVs. The list is curated and updated regularly — showcasing new, popular and recently released games. GeForce NOW is also integrated into other rows, like “Popular Games,” which Samsung also curates weekly. Pick out a game from these menus and easily launch the GeForce NOW app.

Stream GeForce NOW on LG TVs
Discover new titles directly from your home screen.

LG updated its UI with a home-screen “Gaming Shelf.” This addition brings GeForce NOW titles right onto the home screen, adding a new layer of game discoverability for members streaming to supported 2022 and 2021 LG TVs. Members with supported TVs can download the GeForce NOW app and check out the new UI today.

Revolutionize the Weekend

Steelrising on GeForce NOW
Lead the revolution in “Steelrising” with beautiful, cinematic graphics turning RTX ON.

Charge into the weekend with six new titles streaming from the cloud:

What are you planning to play this weekend? Let us know on Twitter or in the comments below.



Model Teachers: Startups Make Schools Smarter With Machine Learning

Like two valedictorians, SimInsights and Photomath tell stories worth hearing about how AI is advancing education.

SimInsights in Irvine, Calif., uses NVIDIA conversational AI to make virtual and augmented reality classes lifelike for college students and employee training.

Photomath — founded in Zagreb, Croatia, and based in San Mateo, Calif. — created an app using computer vision and natural language processing to help students and their parents brush up on everything from arithmetic to calculus.

Both companies are a part of NVIDIA Inception, a free, global program that nurtures cutting-edge startups.

Surfing Sims in California

Rajesh Jha has loved simulations since he developed a physics simulation engine for mechanical parts in college, more than 25 years ago. “So, I put sim in the name when I started my own company in 2009,” he said.

SimInsights originally developed web and mobile training simulations. When AR and VR platforms became available, Jha secured a grant to develop HyperSkill. Now the company’s main product, it’s a cloud-based, AI-powered 3D simulation authoring and analytics tool that makes training immersive.

The software helped UCLA’s medical center build a virtual clinic to train students. But users complained about the low accuracy of its rules-based conversational AI, so Jha took data from the first class and trained a deep neural network using NVIDIA Riva, GPU-accelerated software for building speech AI applications.

Riva Revs Up Speech AI

“There was a quick uptick in the quality, and they say it’s the most realistic training they’ve used,” said Jha.

Now, UCLA wants to apply the technology to train thousands of nurses on dealing with infectious diseases.

“There’s a huge role for conversational AI in education and training because it personalizes the experience,” he said. “And a lot of research shows if you can do that, people learn more and retain it longer.”

Access to New Technology

Because SimInsights is an NVIDIA Inception member, it got early access to Riva and NVIDIA TAO, a toolkit that accelerates evaluating and training AI models with transfer learning. They’ve become standard parts of the company’s workflow.

As for Riva, “it’s a powerful piece of software, and our team really appreciates working with NVIDIA to brainstorm our next steps,” Jha said.

Specifically, SimInsights aims to develop larger conversational AI models with more functions, such as question answering so students can point to objects in a scene and ask about them.

“As Riva gives us more capabilities, we’ll incorporate them into HyperSkill to make digital learning as good as working with an expert — it will take a while, but this is the way to get there,” he said.

Accelerating Math in Croatia

In Zagreb, Damir Sabol got stuck trying to help his eldest son understand a math problem in his homework. It sparked the idea for Photomath, an app that’s been downloaded more than 300 million times since its 2015 release.

The app detects an equation in a smartphone picture, then shows step-by-step solutions to it in formats that support different learning styles.

“At peak times, we get thousands of requests a second, so we need to be really fast,” said Ivan Jurin, who leads the startup’s AI projects.

Some teachers have students open the app as an alternative to working on the blackboard. It’s the kind of anecdote that makes Jurin’s day.

“We want to make education more accessible,” he said. “The free version of Photomath can help people who lack resources understand math almost as well as someone who can afford a tutor.”

A Large Hybrid Model

Under the hood, one large neural network does most of the work, detecting and parsing equations. It’s a mix of a convolutional network and a transformer model that packs about 100 million parameters.
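A back-of-envelope calculation shows how a hybrid like this reaches that scale: a few stacked transformer blocks dominate the count. The layer shapes below are hypothetical, chosen only to illustrate the arithmetic, not Photomath’s actual architecture.

```python
# Rough parameter counts for a hybrid CNN + transformer model.
# Shapes are hypothetical; the point is that a dozen transformer
# blocks quickly reaches the ~100M-parameter scale.

def conv2d_params(in_ch, out_ch, k):
    # weights (k*k*in_ch per filter) plus one bias per output channel
    return out_ch * (k * k * in_ch + 1)

def transformer_block_params(d_model, d_ff):
    # attention: Q, K, V and output projections (d_model x d_model, with bias)
    attn = 4 * (d_model * d_model + d_model)
    # feed-forward: two linear layers d_model -> d_ff -> d_model
    ff = d_model * d_ff + d_ff + d_ff * d_model + d_model
    return attn + ff

total = conv2d_params(3, 64, 7) + 12 * transformer_block_params(768, 3072)
print(f"{total / 1e6:.1f}M parameters")  # 85.0M parameters
```

Even before counting embeddings or the full convolutional stem, twelve blocks at this width already account for tens of millions of parameters.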

It’s trained on local servers with NVIDIA RTX A6000 GPUs. For a cost-sensitive startup, “training in the cloud didn’t motivate us to experiment with larger datasets and more complex models, but with local servers we can queue up experiments as we see fit,” said Vedran Vekić, a senior machine learning engineer at the company.

Once trained, the service runs in the cloud on NVIDIA T4 Tensor Core GPUs, which he described as “very cost effective.”

NVIDIA Software Speeds Inference

The startup is migrating to a full stack of NVIDIA AI software to accelerate inference. It includes NVIDIA Triton Inference Server for maximum throughput, the TensorRT software development kit to minimize latency and NVIDIA DALI, a library for processing images fast.
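One reason servers like Triton achieve high throughput is dynamic batching: pooling concurrent requests so the GPU runs one large inference instead of many small ones. The pure-Python sketch below illustrates only the queueing idea, not the Triton API.

```python
# Sketch of the dynamic-batching idea behind inference servers:
# greedily drain a queue of pending requests into GPU-sized batches.
from collections import deque

def drain_batches(queue, max_batch_size):
    """Drain a request queue into batches of at most max_batch_size."""
    batches = []
    while queue:
        batch = [queue.popleft() for _ in range(min(max_batch_size, len(queue)))]
        batches.append(batch)
    return batches

requests = deque(range(10))           # ten pending inference requests
batches = drain_batches(requests, 4)
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

In a real server the batcher also waits a short, configurable time for more requests to arrive, trading a little latency for much better GPU utilization.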

“We were using the open-source TorchServe, but it wasn’t as efficient as we hoped,” Vekić said. NVIDIA software “gets 100% GPU utilization, so we’re using it on our smaller models and converting our large model to it, too.”

It’s a technical challenge that NVIDIA experts can help address, one of the benefits of being in Inception.

SimInsights and Photomath are among hundreds of startups — out of NVIDIA Inception’s total 10,000+ members — that are making education smarter with machine learning.

To learn more, check out these GTC sessions on NVIDIA Riva, NVIDIA TAO, and NVIDIA Triton and TensorRT.



Ridiculously Realistic Renders Rule This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Viral creator turned NVIDIA 3D artist Lorenzo Drago takes viewers on a jaw-dropping journey through Toyama, Japan’s Etchū-Daimon Station this week In the NVIDIA Studio.

Drago’s photorealistic recreation of the train station has garnered over 2 million views in under four months, with audiences marveling at the remarkably accurate detail.

“Reality inspires me the most,” said Drago. “Mundane, everyday things always tell a story — they have nuances that are challenging to capture in fantasy.”

 

Drago started by camera matching in the fSpy open-source software. This process computed the approximate focal length, orientation and position of the camera in 3D space, based on the defined control points chosen from his Etchū-Daimon reference image.
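Underlying that camera-matching step is the pinhole model: once the field of view implied by the control points is known, the focal length follows from f = (w/2) / tan(fov/2). The values below are hypothetical, not taken from Drago’s fSpy project.

```python
# The pinhole relationship behind camera matching: focal length in pixels
# from image width and horizontal field of view. Sample values are
# hypothetical, for illustration only.
import math

def focal_length_px(image_width_px, horizontal_fov_deg):
    return (image_width_px / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)

f = focal_length_px(1920, 90)  # a 1920px-wide frame with a 90-degree FOV
print(round(f))  # 960
```

fSpy solves the harder inverse problem — recovering orientation and position as well as focal length from user-marked vanishing lines — but this relationship is the geometric core.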

The artist then moved to Blender software to begin the initial blockout, a 3D rough-draft level built with simple 3D shapes without details or polished art assets. The goal of the blockout was to prototype, test and adjust the foundational shapes of the level.

Drago tests and adjusts foundational shapes of the level in Blender during the blockout phase of the project.

From there, Drago measured the height of a staircase and extrapolated those proportions to the rest of the 3D scene, ensuring it fit the grid size. The scene could then be built modularly, one model at a time. He modeled with lightning speed using NVIDIA RTX-accelerated OptiX ray tracing in the Blender viewport.

Incredibly, the entire scene is a combination of custom-textured assets. Drago’s texturing technique elevated the sense of realism by using tileable textures and trim sheets, which are textures that combine separate details into a single sheet. Mixing these techniques proved effective for creating original, more detailed textures, as well as for keeping good pixel density across the scene. Textures never exceeded 2048×2048 pixels in size.
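The 2048×2048 cap is a memory budget as much as an aesthetic choice: an uncompressed texture costs width × height × bytes per pixel. A quick sketch of the arithmetic, assuming 8-bit RGBA (4 bytes per pixel):

```python
# Memory cost of an uncompressed texture: width * height * bytes_per_pixel.
# Assumes 8-bit RGBA (4 bytes/pixel); a full mipmap chain adds roughly 1/3 more.

def texture_bytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel

mib = texture_bytes(2048, 2048) / (1024 * 1024)
print(f"{mib:.0f} MiB")  # 16 MiB per uncompressed 2048x2048 RGBA texture
```

At 16 MiB each before compression, a scene with dozens of unique 4K textures would quickly exhaust VRAM, which is why tiling and trim sheets pay off.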

 

Drago created his textures in Adobe Substance 3D Painter, taking advantage of NVIDIA Iray rendering for faster, interactive rendering. RTX acceleration enabled him to quickly bake ambient occlusion and other maps used in texturing, before exporting and applying the textures to the models inside Unreal Engine 5.

Final frame renders came quickly with Drago’s GeForce RTX 2080 SUPER GPU doing the heavy lifting. Drago stressed the necessity of his graphics card.

“For a 3D artist, probably the majority of the work, excluding the planning phases, requires GPU acceleration,” he said. “Being able to work on scenes and objects with materials and lighting rendered in real time saves a lot of time and headaches compared to wireframe or unshaded modes.”

 

Drago moved on to importing and assembling his textured models in Unreal Engine 5. Elated with the assembly, he began the lighting process. Unreal Engine 5’s Lumen technology enabled lighting iterations in real time, without Drago having to wait for baking or render times.

Unreal Engine’s virtual-reality framework allowed Drago to set up a virtual camera with motion tracking. This gave the animation its signature third-person vantage point, enabling the artist to move around his creative space as if he were holding a smartphone.

The GPU-accelerated NVDEC decoder in Premiere Pro enabled smooth playback and scrubbing of high-resolution video for Drago.

With the renders and animation in place, Drago exported the scene to Adobe Premiere Pro software, where he added sound effects. He also sharpened image details in the animation, one of the many GPU-accelerated features in the software. Drago then deployed GPU-accelerated encoding, NVENC, to speed up the exporting of final files.

Subtle modifications allowed Drago to create ultra-realistic renders. “Mimicking a contemporary smartphone camera — with a limited dynamic range, glare and sharpening artifacts — was a good way of selling the realism,” he said.

“RTX features, like DLSS and ray tracing, are amazing, both from a developer’s point of view and a gamer’s,” Drago stated.

NVIDIA 3D artist Lorenzo Drago.

Drago recently imported Etchū-Daimon Station into NVIDIA Omniverse, a 3D design platform for collaborative editing, which replaces linear pipelines with live-sync creation. Drago noted the massive potential within the Omniverse Create App, calling it “a powerful tool capable of achieving extremely high-fidelity results.”

NVIDIA GTC, a global AI conference running online Sept. 19-22, will feature Omniverse sessions with industry experts to demonstrate how the platform can elevate creative workflows. Take advantage of these free resources and register today.

Follow Lorenzo Drago and view his portfolio on ArtStation.

Continue Your Creative Journey

Drago is a self-taught 3D artist, proof that resilience and dedication can lead to incredible, thought-provoking and inspirational creative work.

In the spirit of learning, the NVIDIA Studio team is posing a challenge for the community to show off personal growth. Participate in the #CreatorsJourney challenge for a chance to be showcased on NVIDIA Studio social media channels.

Entering is easy. Post an older piece of artwork alongside a more recent one to showcase your growth as an artist. Follow and tag NVIDIA Studio on Instagram, Twitter or Facebook, and use the #CreatorsJourney tag to join.

There’s more than one way to create incredibly photorealistic visuals. Check out the Detailed World Building tutorial by material artist Javier Perez, as well as the three-part series Create Impressive 360 Panoramic Concept Art by concept design artist Vladimir Somov. The series showcases a complete workflow covering modeling, world building and texturing, and post-processing.

Access free tutorials by industry-leading artists on the Studio YouTube channel. Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.

The post Ridiculously Realistic Renders Rule This Week ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.

Read More

NVIDIA GTC Dives Into the Industrial Metaverse, Digital Twins

This month’s NVIDIA GTC provides the best opportunity yet to learn how leading companies and their designers, planners and operators are using the industrial metaverse to create physically accurate, perfectly synchronized, AI-enabled digital twins.

The global conference, which runs online Sept. 19-22, will focus in part on how NVIDIA Omniverse Enterprise enables companies to design products, processes and facilities before bringing them to life in the real world — as well as simulate current and future operations.

Here are some of the experts, from fields such as retail, healthcare and manufacturing, who will discuss the use of AI-enabled digital twins:

Other sessions feature Guido Quaroni, senior director of engineering at Adobe; Matt Sivertson, vice president and chief architect for media and entertainment at Autodesk; and Steve May, vice president and chief technology officer at Pixar.

Plus, learn how to build a digital twin with these introductory learning sessions:

For hands-on, instructor-led training workshops, check out sessions from the NVIDIA Deep Learning Institute. GTC offers a full day of learning on Monday, Sept. 19.

Register free for GTC and watch NVIDIA founder and CEO Jensen Huang’s keynote on Tuesday, Sept. 20, at 8 a.m. PT to hear about the latest technology breakthroughs.

Feature image courtesy of Amazon Robotics.

The post NVIDIA GTC Dives Into the Industrial Metaverse, Digital Twins appeared first on NVIDIA Blog.

Read More

GFN Thursday Slides Into September With 22 New Games

We’d wake you up when September ends, but then you’d miss out on a whole new set of games coming to GeForce NOW.

Gear up for 22 games joining the GeForce NOW library, with 19 day-and-date releases including action role-playing game Steelrising. Playing them all will take some serious strategy.

And build the perfect Minifigure Fighter in LEGO Brawls, one of 10 new additions streaming this week.

Finally, did you hear? The 2.0.44 update, rolling out now and continuing over the next week, brings new audio modes to the PC and Mac apps. Priority members can experience support for 5.1 surround sound, and GeForce NOW RTX 3080 members can enjoy both 5.1 and 7.1 surround sound.

Vive les Video Games

The revolution is streaming from the cloud. GeForce NOW brings 22 new titles to nearly all devices in September. Steel yourself for the challenging action-RPG Steelrising, launching later this month with RTX ON.

Steelrising on GeForce NOW
It’s the French Revolution, but full of violent legions of automatons.

Play as Aegis, a mechanical masterpiece, and save France from the madness of King Louis XVI and his army of mechanical soldiers. String together dodges, parries, jumps and devastating attacks to fight through Paris. Encounter allies and enemies among historical figures like Marie Antoinette, Lafayette, Robespierre and more.

Lead the revolution across low-end PCs, Macs and mobile phones. Experience Steelrising’s beautiful, cinematic graphics by turning RTX ON, and take cloud gaming to the next level by upgrading to the RTX 3080 membership, which streams at 4K resolution in the PC and Mac native apps.

Check out the full list of games coming in September:

  • TRAIL OUT (New release on Steam, Sept. 7)
  • Steelrising (New release on Steam and Epic Games Store, Sept. 8)
  • Broken Pieces (New release on Steam, Sept. 9)
  • Isonzo (New release on Steam and Epic Games Store, Sept. 13)
  • Little Orpheus (New release on Steam and Epic Games Store, Sept. 13)
  • Q.U.B.E. 10th Anniversary (New release on Steam, Sept. 14)
  • Metal: Hellsinger (New release on Steam, Sept. 15)
  • Stones Keeper (New release on Steam, Sept. 15)
  • SBK 22 (New release on Steam, Sept. 15)
  • Construction Simulator (New release on Steam, Sept. 20)
  • Soulstice (New release on Steam, Sept. 20)
  • The Legend of Heroes: Trails from Zero (New release on Steam and Epic Games Store, Sept. 27)
  • Brewmaster: Beer Brewing Simulator (New release on Steam, Sept. 29)
  • Jagged Alliance: Rage! (Steam)
  • Weable (Steam)
  • Animal Shelter (Steam)
  • River City Saga: Three Kingdoms (Steam)
  • Ground Branch (Steam)

Have a Brawl This Week

The September gaming fun starts with 10 new games streaming this week, including tomorrow’s release of LEGO Brawls, streaming on GeForce NOW for PC, macOS, Chrome OS and web browsers.

LEGO Brawls on GeForce NOW
Build the ultimate brawler and put your battle skills to the test streaming on the cloud.

Dream up the ultimate LEGO Minifigure brawlers and bash your way into the first team-action brawler set in the LEGO universe. Design heroes with unique styles, strategies and personalities — and level them up for unlockable content. Team up and brawl 4v4, party with friends or play in a battle-royale-style game mode to beat the competition. With ultra-low latency, there’s no need to worry about lagging behind.

Catch the complete list of games streaming this week: 

Additional August Arrivals

On top of the 38 games announced last month, an extra four came to the cloud in August: 

One game announced last month, Mondealy (Steam), didn’t make it due to a shift in the release date.

With all of these sweet new games to play, we want to know which snack powers up your gaming sessions. Let us know on Twitter or in the comments below.

The post GFN Thursday Slides Into September With 22 New Games appeared first on NVIDIA Blog.

Read More

Fraunhofer Research Leads Way Into Future of Robotics

Joseph Fraunhofer was a 19th-century pioneer in optics who brought together scientific research with industrial applications. Fast forward to today and Germany’s Fraunhofer Society — Europe’s largest R&D organization — is setting its sights on the applied research of key technologies, from AI to cybersecurity to medicine.

Its Fraunhofer IML unit is aiming to push the boundaries of logistics and robotics. The German researchers are harnessing NVIDIA Isaac Sim to make advances in robot design through simulation.

Like many organizations — including BMW, Amazon and Siemens — Fraunhofer IML relies on NVIDIA Omniverse, using it to make gains in applied research in logistics for fulfillment and manufacturing.

Fraunhofer’s newest innovation, dubbed O3dyn, uses NVIDIA simulation and robotics technologies to create an indoor-outdoor autonomous mobile robot (AMR).

Its goal is to enable the jump from automated guided vehicles to fast-moving AMRs that aren’t yet available on the market.

This leap in automation promises to dramatically accelerate logistics operations.

“We’re looking at how we can go as fast and as safely as possible in logistics scenarios,” said Julian Eber, a robotics and AI researcher at Fraunhofer IML.

From MP3s to AMRs

Fraunhofer IML is based in Dortmund, in western Germany. Its parent organization has more than 30,000 employees and is involved in hundreds of research projects. In the 1990s, Fraunhofer researchers developed the MP3 file format, which sparked the digital music revolution.

Seeking to make the traditional automated guided vehicle as obsolete as the compact disc, Fraunhofer in 2013 launched a breakthrough robot now widely used in BMW assembly plants and elsewhere.

This robot, known as the STR, is a workhorse for industrial manufacturing. It’s used for moving goods for the production lines. Fraunhofer IML’s AI work benefits the STR and other updates to this robotics platform, such as the O3dyn.

Fraunhofer IML is aiming to create AMRs that deliver a new state of the art. The O3dyn relies on the NVIDIA Jetson edge AI and robotics platform for a multitude of camera and sensor inputs to help navigate.

Advancing speed and agility, the robot can travel up to 30 miles per hour, and its AI-assisted wheels move in any direction to maneuver through tight spaces.

“The omnidirectional dynamics is very unique, and there’s nothing like this that we know of in the market,” said Sören Kerner, head of AI and autonomous systems at Fraunhofer IML.
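The article doesn’t describe O3dyn’s drive math. As a hedged illustration of how omnidirectional movement is commonly achieved, here is the textbook inverse kinematics for a four-wheeled mecanum platform; the wheel layout, dimensions and parameter names are assumptions, not Fraunhofer’s actual design.

```python
def mecanum_wheel_speeds(vx: float, vy: float, omega: float,
                         half_length: float = 0.5, half_width: float = 0.4,
                         wheel_radius: float = 0.1) -> list[float]:
    """Standard mecanum inverse kinematics: body velocity -> wheel angular speeds.

    vx: forward m/s, vy: leftward m/s, omega: yaw rad/s.
    Wheel order: front-left, front-right, rear-left, rear-right.
    """
    k = half_length + half_width  # lever arm coupling rotation into wheel speed
    return [
        (vx - vy - k * omega) / wheel_radius,  # front-left
        (vx + vy + k * omega) / wheel_radius,  # front-right
        (vx + vy - k * omega) / wheel_radius,  # rear-left
        (vx - vy + k * omega) / wheel_radius,  # rear-right
    ]

# Pure sideways translation: wheels alternate direction with equal magnitude.
print(mecanum_wheel_speeds(0.0, 1.0, 0.0))  # [-10.0, 10.0, 10.0, -10.0]
```

Because each wheel contributes independently to vx, vy and rotation, any combination of the three can be commanded at once, which is what makes the platform omnidirectional.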

Fraunhofer IML gave a sneak peek at its latest development on this pallet-moving robot at NVIDIA GTC.

Bridging Sim to Real

Using Isaac Sim, Fraunhofer IML’s latest research strives to develop and validate these AMRs in simulation by closing the sim-to-real gap. The researchers rely on Isaac Sim for virtual development of its highly dynamic autonomous mobile robot by exercising the robot in photorealistic, physically accurate 3D worlds.

This enables Fraunhofer to import the robot’s more than 5,400 computer-aided design parts into the virtual environment, then rig them with physically accurate specifications using Omniverse PhysX.

The result is that the virtual robot version can move as swiftly in simulation as the physical robot in the real world. Harnessing the virtual environment allows Fraunhofer to accelerate development, safely increase accuracy for real-world deployment and scale up faster.

Minimizing the sim-to-real gap turns simulation into a digital reality for robots, a concept Fraunhofer refers to as simulation-based AI.
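One widely used technique for closing the sim-to-real gap is domain randomization: varying physical parameters across simulation runs so that behavior learned in simulation transfers to the real robot. The article doesn’t detail Fraunhofer’s approach; this generic sketch uses made-up parameter names and ranges.

```python
import random

# Hypothetical ranges -- illustrative only, not Fraunhofer's actual values.
PARAM_RANGES = {
    "friction": (0.4, 1.2),       # wheel-floor friction coefficient
    "payload_kg": (0.0, 300.0),   # mass of the carried pallet
    "sensor_noise": (0.0, 0.05),  # additive sensor noise std-dev
}

def randomize_sim_params(rng: random.Random) -> dict[str, float]:
    """Sample one simulation configuration uniformly from each range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(42)
for episode in range(3):
    params = randomize_sim_params(rng)
    # A real pipeline would configure the simulator with these values
    # and run one training episode before sampling again.
    print({k: round(v, 3) for k, v in params.items()})
```

Training across many such randomized worlds encourages policies that are robust to the inevitable mismatch between any single simulation and physical reality.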

To make faster gains, Fraunhofer is releasing the AMR simulation model into open source so developers can make improvements.

“This is important for the future of logistics,” said Kerner. “We want to have as many people as possible work on the localization, navigation and AI of these kinds of dynamic robots in simulation.”

Learn more by watching Fraunhofer’s GTC session: “Towards a Digital Reality in Logistics Automation: Optimization of Sim-to-Real.”

Register for the upcoming GTC, running Sept. 19-22, and explore the robotics-related sessions.
The post Fraunhofer Research Leads Way Into Future of Robotics appeared first on NVIDIA Blog.

Read More