NVIDIA AI Makes Performance Capture Possible With Any Camera

NVIDIA AI tools are enabling deep learning-powered performance capture for creators at every level: visual effects and animation studios, creative professionals and even enthusiasts with nothing more than a camera.

With NVIDIA Vid2Vid Cameo, creators can harness AI to capture their facial movements and expressions from any standard 2D video taken with a professional camera or smartphone. The performance can be applied in real time to animate an avatar, character or painting.

And with 3D body-pose estimation software, creators can capture full-body movements like walking, dancing and performing martial arts — bringing virtual characters to life with AI.

For individuals without 3D experience, these tools make it easy to animate creative projects, even using smartphone footage. Professionals can take it a step further, combining the pose estimation and Vid2Vid Cameo software to transfer their own movements to virtual characters for live streams or animation projects.

And creative studios can harness AI-powered performance capture for concept design or previsualization — to quickly convey an idea of how certain movements look on a digital character.

NVIDIA Demonstrates Performance Capture With Vid2Vid Cameo

NVIDIA Vid2Vid Cameo, available through a demo on the NVIDIA AI Playground, needs just two elements to generate a talking-head video: a still image of the avatar or painting to be animated, plus footage of the original performer speaking, singing or moving their head.

Based on generative adversarial networks, or GANs, the model captures a performer’s facial motion in real time and transfers it to the virtual character. Trained on 180,000 videos, the network learned to identify 20 key points that model facial motion — encoding the location of the eyes, mouth, nose, eyebrows and more.

These points are extracted from the video stream of the performer and applied to the avatar or digital character. See how it works in the demo below, which transfers a performance of Edgar Allan Poe’s “Sonnet — to Science” to a portrait of the writer by artist Gary Kelley.
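
While NVIDIA hasn’t released Vid2Vid Cameo’s implementation, the general recipe is straightforward to sketch: detect a compact set of keypoints in each driving frame, then condition a generator on the still source image plus those keypoints. The tiny networks below are hypothetical stand-ins that only illustrate the data flow, not NVIDIA’s model:

```python
# Hypothetical sketch of keypoint-driven face animation (NOT NVIDIA's model).
import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    """Regresses 20 (x, y) facial keypoints from a 3x256x256 frame."""
    def __init__(self, num_points=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_points * 2)

    def forward(self, frame):
        return self.head(self.features(frame)).view(-1, 20, 2)

class Generator(nn.Module):
    """Re-renders the source image conditioned on driving keypoints."""
    def __init__(self, num_points=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_points * 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, source, keypoints):
        b, _, h, w = source.shape
        # Broadcast the flattened keypoints over the image plane.
        kp = keypoints.view(b, -1, 1, 1).expand(b, keypoints.shape[1] * 2, h, w)
        return self.net(torch.cat([source, kp], dim=1))

detector, generator = KeypointDetector(), Generator()
source = torch.rand(1, 3, 256, 256)          # still image of the avatar
driving_video = torch.rand(8, 3, 256, 256)   # frames of the performer

with torch.no_grad():
    for frame in driving_video:
        kp = detector(frame.unsqueeze(0))    # 20 keypoints per frame
        animated = generator(source, kp)     # avatar mimics the performer
```

In a real system, adversarial and reconstruction losses train the generator to produce photoreal frames; the untrained modules above simply show how one source image and a stream of keypoints combine into animation.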

Visual Platforms Integrate Vid2Vid Cameo, Pose Estimation by NVIDIA

While Vid2Vid Cameo captures detailed facial expressions, pose estimation AI tracks movement of the whole body — a key capability for creators working with virtual characters that perform complex motions or move around a digital scene.

Pose Tracker is a convolutional neural network model available as an Extension in the NVIDIA Omniverse 3D design collaboration and world simulation platform. It allows users to upload footage or stream live video as a motion source to animate a character in real time. Creators can download NVIDIA Omniverse for free and get started with step-by-step tutorials.

Companies that have integrated NVIDIA AI for performance capture into their products include:

  • Derivative, maker of TouchDesigner, a node-based real-time visual development platform, has implemented Vid2Vid Cameo as a way to provide easy-to-use facial tracking.
  • Notch, a company offering a real-time graphics tool for 3D, visual effects and live-events visuals, uses body-pose estimation AI from NVIDIA to help artists simplify stage setups. Instead of relying on custom hardware-tracking systems, Notch users can work with standard camera equipment to control 3D character animation in real time.
  • Pixotope, a leading virtual production company, uses NVIDIA AI-powered real-time talent tracking to drive interactive elements for live productions. The Norway-based company shared its work enabling interaction between real and virtual elements on screen at the most recent NVIDIA GTC.

Learn more about NVIDIA’s latest advances in AI, digital humans and virtual worlds at SIGGRAPH, the world’s largest gathering of computer graphics experts, running through Thursday, Aug. 11.

As Far as the AI Can See: ILM Uses Omniverse DeepSearch to Create the Perfect Sky

For cutting-edge visual effects and virtual production, creative teams and studios benefit from digital sets and environments that can be updated in real time.

A crucial element in any virtual production environment is a sky dome, often used to provide realistic lighting for virtual environments and in-camera visual effects. Legendary studio Industrial Light & Magic (ILM) is tapping into the power of AI to take its skies to new heights with NVIDIA AI-enabled DeepSearch and Omniverse Enterprise.

Capturing photorealistic details of a sky can be tricky. At SIGGRAPH today, ILM showcased how its team used the NVIDIA DeepSearch tool and natural-language queries to rapidly search through a massive asset library and create a captivating sky dome.

The video shows how Omniverse Enterprise can provide filmmakers with the ultimate flexibility to develop the ideal look and lighting to further their stories. This helps artists save time, enhance productivity and accelerate creativity for virtual production.

After narrowing down their search results, the ILM team auditions the remaining sky domes in virtual reality to assess whether an asset will be a perfect match for the shot. By using VR, ILM can approximate what the skies will look like on a virtual production set.

The Sky’s the Limit With AI

An extensive library with thousands of references and 3D assets offers advantages, but it also presents some challenges without an efficient way to search through all the data.

Typically, users set up folders or tag items with keywords, which can be incredibly time consuming. This is especially true for a studio like ILM, which has over 40 years’ worth of material in its reference library, including photography, matte paintings, backdrops and other materials that have been captured over the decades.

With hundreds of thousands of untagged pieces of content, it’s impractical for the ILM team to manually search through them on a production schedule.

Omniverse DeepSearch, however, lets ILM search intuitively through untagged assets using text or a 2D image. DeepSearch uses AI to categorize and find images automatically — this results in massive time savings for the creative team, removing the need to manually tag each asset.
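
NVIDIA hasn’t detailed DeepSearch’s internals publicly, but the underlying technique, embedding text and images in a shared space and ranking by similarity, is well established. Here’s a minimal sketch using the open-source CLIP model from Hugging Face; the model name is real, while the asset names and query are purely illustrative:

```python
# Minimal text-to-image asset search sketch (illustrative; not DeepSearch).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-ins for an untagged asset library; a real index would load files.
names = ["dusk_sky", "gray_overcast", "green_field"]
images = [Image.new("RGB", (224, 224), c) for c in ("navy", "gray", "green")]

inputs = processor(text=["overcast sky at dusk"], images=images,
                   return_tensors="pt", padding=True)
scores = model(**inputs).logits_per_text[0]   # query vs. every image
best = scores.argmax().item()
print(f"best match: {names[best]}")
```

Because every asset is indexed by its embedding rather than by hand-written keywords, a query like “overcast sky at dusk” can surface decades-old reference photos with no manual tagging at all.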

All images courtesy of Industrial Light & Magic.

“With Omniverse DeepSearch, we have the ability to search through data in real time, which is key for production,” said Landis Fields, real time principal creative at ILM. “And being able to search through assets with natural language allows for our creative teams to easily find what they’re looking for, helping them achieve the final look and feel of a scene much more efficiently than before.”

DeepSearch also works on USD files, so the ILM team can review search results and bring images into the 3D space in Omniverse Enterprise. The artists can then interact with the 3D environment using a VR headset.

With NVIDIA DeepSearch and Omniverse Enterprise, ILM has the potential to accelerate creative pipelines, lower costs and enhance production workflows to create captivating content for virtual productions.

Join NVIDIA at SIGGRAPH to learn more about the latest Omniverse announcements, watch the company’s special address on demand and see the global premiere of NVIDIA’s documentary, The Art of Collaboration: NVIDIA, Omniverse, and GTC, on Wednesday, Aug. 10, at 10 a.m. PT.

New NVIDIA Neural Graphics SDKs Make Metaverse Content Creation Available to All

The creation of 3D objects for building scenes for games, virtual worlds including the metaverse, product design or visual effects is traditionally a meticulous process, where skilled artists balance detail and photorealism against deadlines and budget pressures.

It takes a long time to make something that looks and acts as it would in the physical world. And the problem gets harder when multiple objects and characters need to interact in a virtual world. Simulating physics becomes just as important as simulating light. A robot in a virtual factory, for example, needs to have not only the same look, but also the same weight capacity and braking capability as its physical counterpart.

It’s hard. But the opportunities are huge, affecting trillion-dollar industries as varied as transportation, healthcare, telecommunications and entertainment, in addition to product design. Ultimately, more content will be created in the virtual world than in the physical one.

To simplify and shorten this process, NVIDIA today released new research and a broad suite of tools that apply the power of neural graphics to the creation and animation of 3D objects and worlds.

These SDKs — including NeuralVDB, a groundbreaking update to the industry-standard OpenVDB, and Kaolin Wisp, a PyTorch library establishing a framework for neural fields research — ease the creative process for designers while making it easy for millions of users who aren’t design professionals to create 3D content.

Neural graphics is a new field intertwining AI and graphics to create an accelerated graphics pipeline that learns from data. Integrating AI enhances results, helps automate design choices and provides new, yet-to-be-imagined opportunities for artists and creators. Neural graphics will redefine how virtual worlds are created, simulated and experienced by users.

These SDKs and research contribute to each stage of the content creation pipeline, including:

3D Content Creation

  • Kaolin Wisp – an addition to Kaolin, a PyTorch library enabling faster 3D deep learning research by reducing the time needed to test and implement new techniques from weeks to days. Kaolin Wisp is a research-oriented library establishing a common suite of tools and a framework to accelerate new research in neural fields (see the sketch after this list).
  • Instant Neural Graphics Primitives – a new approach to capturing the shape of real-world objects, and the inspiration behind NVIDIA Instant NeRF, an inverse rendering model that turns a collection of still images into a digital 3D scene. This technique and associated GitHub code accelerate the process by up to 1,000x.
  • 3D MoMa – a new inverse rendering pipeline that allows users to quickly import a 2D object into a graphics engine to create a 3D object that can be modified with realistic materials, lighting and physics.
  • GauGAN360 – the next evolution of NVIDIA GauGAN, an AI model that turns rough doodles into photorealistic masterpieces. GauGAN360 generates 8K, 360-degree panoramas that can be ported into Omniverse scenes.
  • Omniverse Avatar Cloud Engine (ACE) – a new collection of cloud APIs, microservices and tools to create, customize and deploy digital human applications. ACE is built on NVIDIA’s Unified Compute Framework, allowing developers to seamlessly integrate core NVIDIA AI technologies into their avatar applications.
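
To make “neural fields” concrete: the core object behind Kaolin Wisp and Instant NeRF is a small network that maps a spatial coordinate to quantities such as density and color. Below is a minimal, self-contained PyTorch sketch with a sinusoidal positional encoding; it illustrates the idea only, and real systems add hash-grid encodings, volume rendering and far more:

```python
# Minimal neural field: an MLP mapping 3D points to (density, RGB).
# A toy illustration of the concept, not Kaolin Wisp's API.
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Sin/cos features at several frequencies so the MLP can fit detail."""
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi
    angles = x[..., None] * freqs                    # (N, 3, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                 # (N, 3 * 2 * num_freqs)

class NeuralField(nn.Module):
    def __init__(self, num_freqs=6, hidden=64):
        super().__init__()
        self.num_freqs = num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * num_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                    # density + RGB
        )

    def forward(self, points):                       # points: (N, 3)
        out = self.mlp(positional_encoding(points, self.num_freqs))
        return torch.relu(out[:, :1]), torch.sigmoid(out[:, 1:])

field = NeuralField()
density, color = field(torch.rand(1024, 3))          # query 1,024 points
```

Training such a field against the pixels of photographs is the inverse rendering problem that Instant NeRF accelerates.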

Physics and Animation

  • NeuralVDB – a groundbreaking improvement on OpenVDB, the current industry standard for volumetric data storage. Using machine learning, NeuralVDB introduces compact neural representations, dramatically reducing memory footprint to allow for higher-resolution 3D data.
  • Omniverse Audio2Face – an AI technology that generates expressive facial animation from a single audio source. It’s useful for interactive real-time applications and as a traditional facial animation authoring tool.
  • ASE: Animation Skills Embedding – an approach enabling physically simulated characters to act in a more responsive and life-like manner in unfamiliar situations. It uses deep learning to teach characters how to respond to new tasks and actions.
  • TAO Toolkit – a framework that enables users to create an accurate, high-performance pose estimation model, which uses computer vision to evaluate what a person might be doing in a scene much more quickly than current methods.

Experience

  • Image Features Eye Tracking – a research model linking the quality of pixel rendering to a user’s reaction time. By predicting the best combination of rendering quality, display properties and viewing conditions for the least latency, it will allow for better performance in fast-paced, interactive computer graphics applications such as competitive gaming.
  • Holographic Glasses for Virtual Reality – a collaboration with Stanford University on a new VR glasses design that delivers full-color 3D holographic images in a groundbreaking 2.5-mm-thick optical stack.

Join NVIDIA at SIGGRAPH to see more of the latest research and technology breakthroughs in graphics, AI and virtual worlds. Check out the latest innovations from NVIDIA Research, and access the full suite of NVIDIA’s SDKs, tools and libraries.

Upping the Standard: NVIDIA Introduces NeuralVDB, Bringing AI and GPU Optimization to Award-Winning OpenVDB

NVIDIA today announced NeuralVDB, which brings the power of AI to OpenVDB, the industry-standard library for simulating and rendering sparse volumetric data, such as water, fire, smoke and clouds.

Building on the past decade’s development of OpenVDB, the introduction at SIGGRAPH of NeuralVDB is a game-changer for professionals working in areas like scientific computing and visualization, medical imaging, rocket science and visual effects. By reducing memory footprint by up to 100x, it allows creators, developers and researchers to interact with extremely large and complex datasets in real time.

Over the past decade, OpenVDB has earned Academy Awards as a core technology used throughout the visual-effects industry. It has since grown beyond entertainment to industrial and scientific use cases where sparse volumetric data is prevalent, such as industrial design and robotics.

Last year, NVIDIA introduced NanoVDB, which added GPU support to OpenVDB. This delivered an order-of-magnitude speedup, enabling faster performance and easier development — and opening the door to real-time simulation and rendering.

NeuralVDB builds on the GPU acceleration of NanoVDB by adding machine learning to introduce compact neural representations that dramatically reduce its memory footprint. This allows 3D data to be represented at even higher resolution and at a much larger scale than OpenVDB. The result is that users can easily handle massive volumetric datasets on devices like individual workstations and even laptops.

NeuralVDB offers a significant efficiency improvement over OpenVDB by compressing a volume’s memory footprint up to 100x compared to NanoVDB. This allows users to transmit and share large, complex volumetric datasets much more efficiently.

To accelerate training up to 2x, NeuralVDB allows the weights of a frame to be used for the subsequent one. NeuralVDB also enables users to achieve temporal coherency, or smooth encoding, by using the network results from the previous frame.
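
The warm-starting idea is simple to sketch: fit a compact network to frame t, then initialize frame t+1’s network from those weights so it converges faster and encodes smoothly across frames. The following is a hypothetical sketch; the toy target volume, network size and step counts are illustrative, not NeuralVDB’s:

```python
# Sketch of warm-starting per-frame neural volume fitting (hypothetical).
import copy
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))           # (x, y, z) -> density

def fit_frame(net, target_volume, steps=200):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        pts = torch.rand(4096, 3)                    # random query points
        loss = (net(pts) - target_volume(pts)).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

def volume_at(t):
    # Stand-in for a simulation: a density field drifting slowly over time.
    return lambda p: torch.sin(10 * p.sum(-1, keepdim=True) + 0.1 * t)

nets, net = [], make_net()
for t in range(5):
    # Each frame starts from the previous frame's weights, so training
    # converges faster and consecutive encodings stay temporally coherent.
    net = fit_frame(copy.deepcopy(net), volume_at(t))
    nets.append(net)                                 # one network per frame
```

Because consecutive frames of smoke or water differ only slightly, the warm-started network begins close to the answer, which is where both the training speedup and the smooth encoding come from.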

Hitting this trifecta of dramatically reducing memory requirements, accelerating training and enabling temporal coherency allows NeuralVDB to unlock new possibilities for scientific and industrial use cases, including massive, complex volume datasets for AI-enabled medical imaging, large-scale digital twin simulations and more.

Learn more about NeuralVDB.

Watch the NVIDIA special address at SIGGRAPH on demand, and join NVIDIA at the conference through Thursday, Aug. 11, to see more of the latest technology breakthroughs in graphics, AI and virtual worlds.

How to Start a Career in AI

How do I start a career as a deep learning engineer? What are some of the key tools and frameworks used in AI? How do I learn more about ethics in AI?

Everyone has questions, but the most common questions in AI always return to this: how do I get involved?

Cutting through the hype, a group of AI professionals gathered at NVIDIA’s GTC conference in the spring to share fundamental principles for building a career in AI and the best places to start.

Each panelist, in a conversation with NVIDIA’s Louis Stewart, head of strategic initiatives for the developer ecosystem, came to the industry from very different places.

Watch the session on demand.

But the speakers — Katie Kallot, NVIDIA’s former head of global developer relations and emerging areas; David Ajoku, founder of startup aware.ai; Sheila Beladinejad, CEO of Canada Tech; and Teemu Roos, professor at the University of Helsinki — returned again and again to four basic principles.

1) Start With Networking and Mentorship

The best way to start, Ajoku explained, is to find people who are where you want to be in five years.

And don’t just look for them online — on Twitter and LinkedIn. Look for opportunities in your community and at professional events to connect with others who are going where you want to be.

“You want to find people you admire, find people who walk the path you want to be on over the next five years,” Ajoku said. “It doesn’t just come to you; you have to go get it.”

At the same time, be generous about sharing what you know with others. “You want to find people who will teach, and in teaching, you will learn,” he added.

But the best place to start is knowing that reaching out is okay.

“When I started my career in computer science, I didn’t even know I should be seeking a mentor,” Beladinejad said, echoing remarks from the other panelists.

“I learned not to be shy, to ask for support and seek help whenever you get stuck on something — always have the confidence to approach your professors and classmates,” she added.

2) Get Experience

Kallot explained that the best way to learn is by doing.

She got a degree in political science and learned about technology — including how to code — while working in the industry.

She started out as a sales and marketing analyst, then leaped to a product manager role.

“I had to learn everything about AI in three months, and at the same time I had to learn to use the product, I had to learn to code,” she said.

The best experience, explained Roos, is to surround yourself with people on the same learning journey, whether they’re learning online or in person.

“Don’t do it alone. If you can, grab your friends, grab your colleagues, maybe start a study group and create a curriculum,” he said. “Meet once a week, twice a week — it’s much more fun that way.”

3) Develop Soft Skills

You’ll also need the communications skills to explain what you’re learning, and doing, in AI as you progress.

“Practice talking about technical topics to non-technical audiences,” Stewart said.

Ajoku recommended learning and practicing public speaking; he took an acting class at Carnegie Mellon University. Similarly, Roos took an improv comedy class.

Others on the panel learned to perform, publicly, through dance and sports.

“The more you’re cross-trained, the more comfortable you’re going to be and the better you’re going to be able to express yourself in any environment,” Stewart said.

4) Define Your Why 

The most important element, however, comes from within, the panelists said.

They urged listeners to find a reason, something that drives them to stay motivated on their journey.

For some, it’s environmental issues. Others are driven by a desire to make technology more accessible. Or to help make the industry more inclusive, panelists said.

“It’s helpful for anyone if you have a topic that you’re passionate about,” Beladinejad said. “That would help keep you going, keep your motivation up.”

Whatever you do, “do it with passion,” Stewart said. “Do it with purpose.”

Burning Questions

Throughout the conversation, thousands of virtual attendees submitted more than 350 questions about how to get started in their AI careers.

Among them:

What’s the best way to learn about deep learning? 

The NVIDIA Deep Learning Institute offers a huge variety of hands-on courses.

Even more resources for new and experienced developers alike are available through the NVIDIA Developer program, which includes resources for those pursuing higher education and research.

Massive open online courses — or MOOCs — have made learning about technical subjects more accessible than ever. One panelist suggested looking for classes taught by Stanford Professor Andrew Ng on Coursera.

“There are many MOOC courses out there, YouTube videos and books — I highly recommend finding a study buddy as well,” another wrote.

“Join technical and professional networks … get some experience through volunteering, participating in a Kaggle competition, etc.”

What are some of the most prevalent tools and frameworks used in machine learning and AI in industry? Which ones are crucial to landing a first job or internship in the field?

The best way to figure out which technologies you want to start with, one panelist suggested, is to think about what you want to do.

Another suggested, however, that learning Python isn’t a bad place to begin.

“A lot of today’s AI tools are based on Python,” they wrote. “You can’t go wrong by mastering Python.”

“The technology is evolving rapidly, so many of today’s AI developers are constantly learning new things. Having software fundamentals like data structures and common languages like Python and C++ will help set you up to ‘learn on the job,’” another added.

What’s the best way to start getting experience in the field? Do personal projects count as experience? 

Student clubs, online developer communities, volunteering and personal projects are all a great way to gain hands-on experience, panelists wrote.

And definitely include personal projects on your resume, another added.

Is there an age limit for getting involved in AI? 

Age isn’t at all a barrier, whether you’re just starting out or transitioning from another field, panelists wrote.

Build a portfolio for yourself so you can better demonstrate your skills and abilities — that’s what should count.

Employers should be able to easily recognize your potential and skills.

I want to build a tech startup with some form of AI as the engine driving the solution to solve an as-yet-to-be-determined problem. What pointers do you have for entrepreneurs? 

Entrepreneurs should apply to be a part of NVIDIA Inception.

The program provides free benefits, such as technical support, go-to-market support, preferred pricing on hardware and access to its VC alliance for funding.

Which programming language is best for AI?

Python is widely used in deep learning, machine learning and data science. The programming language is at the center of a thriving ecosystem of deep learning frameworks and developer tools. It’s predominantly used for training complex models and for real-time inference for web-based services.

C/C++ is a popular choice for self-driving cars, where it’s used to deploy models for real-time inference.

Those getting started, though, will want to make sure they’re familiar with a broad array of tools, not just Python.

The NVIDIA Deep Learning Institute’s beginner self-paced courses can be one of the best ways to get oriented.

Learn More at GTC

At NVIDIA GTC, a global AI conference running Sept. 19-22, hear firsthand from professionals about how they got started in their careers.

Register for free now — and check out the sessions How to Be a Deep Learning Engineer and 5 Paths to a Career in AI.

Learn the AI essentials fast: check out NVIDIA’s “getting started” resources to explore the fundamentals of today’s hottest technologies on our learning series page.

NVIDIA Instant NeRF Wins Best Paper at SIGGRAPH, Inspires Creative Wave Amid Tens of Thousands of Downloads

3D content creators are clamoring for NVIDIA Instant NeRF, an inverse rendering tool that turns a set of static images into a realistic 3D scene.

Since its debut earlier this year, tens of thousands of developers around the world have downloaded the source code and used it to render spectacular scenes, sharing eye-catching results on social media.

The research behind Instant NeRF is being honored as a best paper at SIGGRAPH — which runs Aug. 8-11 in Vancouver and online — for its contribution to the future of computer graphics research. One of just five papers selected for this award, it’s among 17 papers and workshops with NVIDIA authors that are being presented at the conference, covering topics spanning neural rendering, 3D simulation, holography and more.

NVIDIA recently held an Instant NeRF sweepstakes, asking developers to share 3D scenes created with the software for a chance to win a high-end NVIDIA GPU. Hundreds participated, posting 3D scenes of landmarks like Stonehenge, their backyards and even their pets.

Among the creators using Instant NeRF are:

Through the Looking Glass: Karen X. Cheng and James Perlman

San Francisco-based creative director Karen X. Cheng is working with software engineer James Perlman to render 3D scenes that test the boundaries of what Instant NeRF can create.

The duo has used Instant NeRF to create scenes that explore reflections within a mirror (shown above) and handle complex environments with multiple people — like a group enjoying ramen at a restaurant.

“The algorithm itself is groundbreaking — the fact that you can render a physical scene with higher fidelity than normal photogrammetry techniques is just astounding,” Perlman said. “It’s incredible how accurately you can reconstruct lighting, color differences or other tiny details.”

“It even makes mistakes look artistic,” said Cheng. “We really lean into that, and play with training a scene less sometimes, experimenting with 1,000, or 5,000 or 50,000 iterations. Sometimes I’ll prefer the ones trained less because the edges are softer and you get an oil-painting effect.”

Using prior tools, it would take them three or four days to train a “decent-quality” scene. With Instant NeRF, the pair can churn out about 20 a day, using an NVIDIA RTX A6000 GPU to render, train and preview their 3D scenes.

With rapid rendering comes faster iteration.

“Being able to render quickly is very necessary for the creative process. We’d meet up and shoot 15 or 20 different versions, run them overnight and then see what’s working,” said Cheng. “Everything we’ve published has been shot and reshot a dozen times, which is only possible when you can run several scenes a day.”

Preserving Moments in Time: Hugues Bruyère

Hugues Bruyère, partner and chief of innovation at Dpt., a Montreal-based creative studio, uses Instant NeRF daily.

“3D captures have always been of strong interest to me because I can go back to those volumetric reconstructions and move in them, adding an extra dimension of meaning to them,” he said.

Bruyère rendered 3D scenes with Instant NeRF using data he’d previously captured for traditional photogrammetry with mirrorless digital cameras, smartphones, 360 cameras and drones. He uses an NVIDIA GeForce RTX 3090 GPU to render his Instant NeRF scenes.

Bruyère believes Instant NeRF could be a powerful tool to help preserve and share cultural artifacts through online libraries, museums, virtual-reality experiences and heritage-conservation projects.

“The aspect of capturing itself is being democratized, as camera and software solutions become cheaper,” he said. “In a few months or years, people will be able to capture objects, places, moments and memories and have them volumetrically rendered in real time, shareable and preserved forever.”

Using pictures taken with a smartphone, Bruyère created an Instant NeRF render of an ancient marble statue of Zeus from an exhibition at Toronto’s Royal Ontario Museum.

Stepping Into Remote Scenes: Jonathan Stephens

Jonathan Stephens, chief evangelist for spatial computing company EveryPoint, has been exploring Instant NeRF for both creative and practical applications.

EveryPoint reconstructs 3D scenes such as stockpiles, railyards and quarries to help businesses manage their resources. With Instant NeRF, Stephens can capture a scene more completely, allowing clients to freely explore a scene. He uses an NVIDIA GeForce RTX 3080 GPU to run scenes rendered with Instant NeRF.

“What I really like about Instant NeRF is that you quickly know if your render is working,” Stephens said. “With a large photogrammetry set, you could be waiting hours or days. Here, I can test out a bunch of different datasets and know within minutes.”

Visit the NVIDIA Technical Blog for a tutorial from Stephens on getting started with NVIDIA Instant NeRF.

Stephens has also experimented with making NeRFs using footage from lightweight devices like smart glasses. Instant NeRF could turn his low-resolution, bumpy footage from walking down the street into a smooth 3D scene.

Find NVIDIA at SIGGRAPH

Tune in for a special address by NVIDIA CEO Jensen Huang and other senior leaders on Tuesday, Aug. 9, at 9 a.m. PT to hear about the research and technology behind AI-powered virtual worlds.

NVIDIA is also presenting a score of in-person and virtual sessions for SIGGRAPH attendees, including:

Learn how to create with Instant NeRF in the hands-on demo, NVIDIA Instant NeRF — Getting Started With Neural Radiance Fields. Instant NeRF will also be part of SIGGRAPH’s “Real-Time Live” showcase — where in-person attendees can vote for a winning project.

For more interactive sessions, the NVIDIA Deep Learning Institute is offering free hands-on training with NVIDIA Omniverse and other 3D graphics technologies for in-person conference attendees.

And peek behind the scenes of NVIDIA GTC in the documentary premiere, The Art of Collaboration: NVIDIA, Omniverse, and GTC, taking place Aug. 10 at 10 a.m. PT, to learn how NVIDIA’s creative, engineering and research teams used the company’s technology to deliver the visual effects in the latest GTC keynote address.

Find out more about NVIDIA at SIGGRAPH, and see a full schedule of events and sessions in this show guide.

Dive Into AI, Avatars and the Metaverse With NVIDIA at SIGGRAPH

Innovative technologies in AI, virtual worlds and digital humans are shaping the future of design and content creation across every industry. Experience the latest advances from NVIDIA in all these areas at SIGGRAPH, the world’s largest gathering of computer graphics experts, running Aug. 8-11.

At the conference, creators, developers, engineers, researchers and students will see all the new tech and research that enables them to elevate immersive storytelling, build realistic avatars and create stunning 3D virtual worlds.

NVIDIA’s special address on Tuesday, Aug. 9, at 9 a.m. PT will feature founder and CEO Jensen Huang, along with other senior leaders. Join to get an exclusive look at some of our most exciting work, from award-winning research to new AI-powered tools and solutions.

Discover the emergence of the metaverse, and see how users can build 3D content and connect photorealistic virtual worlds with NVIDIA Omniverse, a computing platform for 3D design collaboration and true-to-reality world simulation. See the advanced solutions that are powering these 3D worlds, and how they expand the realm of artistic expression and creativity.

NVIDIA is also presenting over 20 in-person sessions at SIGGRAPH, including hands-on labs and research presentations. Explore the session topics below to build your calendar for the event:

Building 3D Virtual Worlds

See how users can create assets and build virtual worlds for the metaverse using the power and versatility of Universal Scene Description (USD) in a dedicated presentation.

Powering the Metaverse

Find out how to accelerate complex 3D workflows and content creation for the metaverse. Discover groundbreaking ways to visualize, simulate and code with advanced solutions like NVIDIA Omniverse in sessions including:

  • Real-Time Collaboration in Ray-Traced VR. Discover the recent leaps in hardware architecture and graphics software that have made ray tracing at virtual-reality frame rates possible at this session on Monday, Aug. 8, at 5 p.m. PT.
  • Material Workflows in Omniverse. Learn how to improve graphics workflows with arbitrary material shading systems supported in Omniverse at this talk on Thursday, Aug. 11, at 9 a.m. PT.

Exploring Neural Graphics Research

Learn more about neural graphics — the unification of AI and graphics — which will make metaverse content creation available to everyone. From 3D assets to animation, see how AI integration can enhance results, automate design choices and unlock new opportunities for creativity in the metaverse.

Accelerating Workflows Across Industries

Get insights on the latest technologies transforming industries, from cloud production to extended reality. Discover how leading film studios, cutting-edge startups and other graphics companies are building and supporting their technologies with NVIDIA solutions.

SIGGRAPH registration is required to attend the in-person events. Sessions will also be available the following day to watch on demand from our site.

Many NVIDIA partners will attend SIGGRAPH, showcasing demos and presenting on topics such as AI and virtual worlds. Download this event map to learn more.

And tune into the global premiere of The Art of Collaboration: NVIDIA, Omniverse and GTC on Wednesday, Aug. 10, at 10 a.m. PT. The documentary shares the story of the engineers, artists and researchers who pushed the limits of NVIDIA GPUs, AI and Omniverse to deliver the stunning GTC keynote last spring.

Join NVIDIA at SIGGRAPH to learn more, and watch NVIDIA’s special address to hear the latest on graphics, AI and virtual worlds.

What Is Direct and Indirect Lighting?

Imagine hiking to a lake on a summer day — sitting under a shady tree and watching the water gleam under the sun.

In this scene, the differences between light and shadow are examples of direct and indirect lighting.

The sun shines onto the lake and the trees, making the water look like it’s shimmering and the leaves appear bright green. That’s direct lighting. And though the trees cast shadows, sunlight still bounces off the ground and other trees, casting light on the shady area around you. That’s indirect lighting.

For computer graphics to immerse viewers in photorealistic environments, it’s important to accurately simulate the behavior of light to achieve the proper balance of direct and indirect lighting.

What Is Direct and Indirect Lighting?

Light shining onto an object is called direct lighting.

It determines the color and quantity of light that reaches a surface from a light source, but ignores all light that may arrive at the surface from any other sources, such as after reflection or refraction. Direct lighting also determines the amount of light that’s absorbed and reflected by the surface itself.

Direct lighting from the sun and sky.

Light bouncing off a surface and illuminating other objects is called indirect lighting. It arrives at surfaces from everything except light sources. In other words, indirect lighting determines the color and quantity of all other light that arrives at a surface. Most commonly, indirect light is reflected from one surface onto other surfaces.

Indirect lighting is generally more difficult and expensive to compute than direct lighting, because there is a substantially larger number of paths that light can take between the light emitter and the observer.
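
The rendering equation makes this split precise. The radiance leaving a point x in direction ω_o is the emitted light plus an integral of incoming light over the hemisphere, and the incoming radiance L_i separates into a direct part (arriving straight from light sources) and an indirect part (everything that has bounced at least once):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (n \cdot \omega_i)\, d\omega_i,
\qquad L_i = L_{\mathrm{direct}} + L_{\mathrm{indirect}}
```

Direct lighting evaluates the integral only for directions that point at light sources; indirect lighting has to account for every other direction, which is why it costs so much more to compute.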

Direct and indirect lighting in the same setting.

What Is Global Illumination?

Global illumination is the process of computing the color and quantity of all light — both direct and indirect — that is on visible surfaces in a scene.

Accurately simulating all types of indirect light is extremely difficult, especially if the scene includes complex materials such as glass, water and shiny metals — or if the scene has scattering in clouds, smoke, fog or other elements known as volumetric media.

As a result, real-time graphics solutions for global illumination are typically limited to computing a subset of the indirect light — commonly for surfaces with diffuse (aka matte) materials.

How Are Direct and Indirect Lighting Computed? 

Many algorithms can be used for computing direct lighting, all of which have strengths and weaknesses. For example, if the scene has a single light and no shadows, direct illumination is trivial to compute, but it won’t look very realistic. On the other hand, when a scene has multiple light sources, processing them all for each surface can become expensive.

To tackle these issues, optimized algorithms and shading techniques were developed, such as deferred or clustered shading. These algorithms reduce the number of surface and light interactions to be computed.

Shadows can be added through a number of techniques, including shadow maps, stencil shadow volumes and ray tracing.

Shadow mapping has two steps. First, the scene is rendered from the light’s point of view into a special texture called the shadow map. Then, the shadow map is used to test whether surfaces visible on the screen are also visible from the light’s point of view. Shadow maps come with many limitations and artifacts, and quickly become expensive as the number of lights in the scene increases.
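
The second step, the depth comparison, is compact enough to sketch. Assuming a surface point has already been projected into the light’s view (an x, y position in the map plus its depth from the light), the test looks roughly like this toy NumPy version; the bias value and map contents are illustrative, not engine code:

```python
# Toy shadow-map lookup: is a surface point lit or in shadow?
import numpy as np

def in_shadow(shadow_map, light_space_point, bias=0.005):
    """light_space_point: (x, y, depth); x and y already mapped to [0, 1]."""
    h, w = shadow_map.shape
    u = min(int(light_space_point[0] * w), w - 1)
    v = min(int(light_space_point[1] * h), h - 1)
    # The light "saw" depth shadow_map[v, u] along this direction. If our
    # point is farther from the light (plus a small bias that prevents
    # surfaces from shadowing themselves), something occludes it.
    return light_space_point[2] - bias > shadow_map[v, u]

shadow_map = np.full((512, 512), 1.0)    # step 1 would render real depths
shadow_map[100:200, 100:200] = 0.3       # an occluder close to the light
print(in_shadow(shadow_map, (0.25, 0.25, 0.8)))   # True: behind the occluder
```

The bias term guards against surfaces shadowing themselves due to limited depth precision, one of the many shadow-map artifacts mentioned above.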

Stencil shadows in ‘Doom 3’ (2004). Image source: Wikipedia.

Stencil shadow volumes are based on extruding scene geometry away from the light, and rendering that extruded geometry into the stencil buffer. The contents of the stencil buffer are then used to determine if a given surface on the screen is in shadow or not. Stencil shadows are always sharp, unnaturally so, but they don’t suffer from common shadow map problems.

Until the introduction of NVIDIA RTX technology, ray tracing was too costly to use when computing shadows. Ray tracing is a method of rendering in graphics that simulates the physical behavior of light. Tracing the rays from a surface on the screen to a light allows for the computation of shadows, but this becomes challenging when the light is an area source rather than a single point. And ray-traced shadows can quickly get expensive if there are many lights in the scene.

More efficient sampling methods were developed to reduce the number of rays required to compute soft shadows from multiple lights. One example is an algorithm called ReSTIR, which computes direct lighting and ray-traced shadows from millions of lights at interactive frame rates.

Direct illumination and ray-traced shadows created with ReSTIR, compared to a previous algorithm.

What Is Path Tracing?

For indirect lighting and global illumination, even more methods exist. The most straightforward is called path tracing, where random light paths are simulated for each visible surface. Some of these paths reach lights and contribute to the finished scene, while others do not.

Path tracing is the most accurate method capable of producing results that fully represent lighting in a scene, matching the accuracy of mathematical models for materials and lights. Path tracing can be very expensive to compute, but it’s considered the “holy grail” of real-time graphics.
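
A toy experiment shows both the idea and the cost. Consider a scene simple enough to solve by hand: from any point on a diffuse surface with albedo rho, a bounced ray hits the surface again with probability q and otherwise escapes to a uniform sky of radiance 1. The exact answer is L = rho(1 - q) / (1 - rho·q), and the random-walk estimator below (a self-contained sketch, not production code) converges to it:

```python
# Toy path-tracing estimator for a hand-solvable scene: each bounce keeps
# fraction RHO of the light; a ray re-hits the surface with probability Q,
# otherwise it escapes to a sky of radiance 1.
import random

RHO, Q, MAX_DEPTH = 0.8, 0.5, 64

def trace_path():
    throughput, radiance = 1.0, 0.0
    for _ in range(MAX_DEPTH):
        throughput *= RHO                # diffuse bounce absorbs (1 - RHO)
        if random.random() >= Q:         # the ray escapes to the bright sky
            radiance += throughput       # this path reached a light source
            break
    return radiance                      # truncated paths contribute nothing

samples = [trace_path() for _ in range(200_000)]
estimate = sum(samples) / len(samples)
exact = RHO * (1 - Q) / (1 - RHO * Q)
print(f"path traced: {estimate:.4f}  exact: {exact:.4f}")
```

Even this trivial scene needs many thousands of random paths for a stable answer, and a real renderer needs comparable effort for every pixel of every frame, which is why path tracing stayed out of reach for real-time graphics for so long.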

Comparison of path tracing with a less complete ray-tracing algorithm and rasterization.

How Does Direct and Indirect Lighting Affect Graphics?

Light map applied to a scene. Image courtesy of Reddit.

Direct lighting provides the basic appearance of realism, and indirect lighting makes scenes look richer and more natural.

One way indirect lighting has been used in many video games is through omnipresent ambient light. This type of light can be constant, or vary spatially over light probes arranged in a grid pattern. It can also be rendered into a texture that is wrapped around static objects in a scene — this method is known as a “light map.”

In most cases, ambient light is shadowed by a function of geometry around the surface called ambient occlusion, which helps increase the image realism.

Direct lighting only vs. global illumination in a forest scene.

Examples of Direct Lighting, Indirect Lighting and Global Illumination

Direct and indirect lighting has been present, in some form, in almost every 3D game since the 1990s. Below are some milestones of how lighting has been implemented in popular titles:

  • 1993: Doom showcased one of the first examples of dynamic lighting. The game could vary the light intensity per sector, which made textures lighter or darker, and was used to simulate dim and bright areas or flickering lights.
Map sectors with varying light intensities in Doom.
  • 1995: Quake introduced light maps, which were pre-computed for each level in the game. The light maps could modulate the ambient light intensity.
  • 1997: Quake II added color to the light maps, as well as dynamic lighting from projectiles and explosions.
  • 2001: Silent Hill 2 showcased per-pixel lighting and shadow mapping. Shrek used deferred lighting and stencil shadows.
  • 2007: Crysis showed dynamic screen-space ambient occlusion, which uses pixel depth to give a sense of changes in lighting.
Crysis (2007). Image courtesy of MobyGames.com.
  • 2008: Quake Wars: Ray Traced became the first game tech demo to use ray-traced reflections.
  • 2011: Crysis 2 became the first game to include screen-space reflections, which is a popular technique for reusing screen-space data to calculate reflections.
  • 2016: Rise of the Tomb Raider became the first game to use voxel-based ambient occlusion.
  • 2018: Battlefield V became the first commercial game to use ray-traced reflections.
  • 2019: Q2VKPT became the first game to implement path tracing, which was later refined in Quake II RTX.
  • 2020: Minecraft with RTX used full path tracing, accelerated by RTX hardware.
Minecraft with RTX.

What’s Next for Lighting in Real-Time Graphics?

Real-time graphics are moving toward a more complete simulation of light in scenes with increasing complexity.

ReSTIR dramatically expands artists’ ability to use many lights in games. Its newer variant, ReSTIR GI, applies the same ideas toward global illumination, enabling path tracing with more bounces and fewer approximations. It can also render less noisy images faster. And more algorithms are being developed to make path tracing faster and more accessible.

Using a complete simulation of lighting effects with ray tracing also means that the rendered images can contain some noise. Clearing that noise, or “denoising,” is another area of active research.

More technologies are being developed to help games effectively denoise lighting in complex, highly detailed scenes with lots of motion at real-time frame rates. This challenge is being approached from two ends: advanced sampling algorithms that generate less noise and advanced denoisers that can handle increasingly difficult situations.

Denoising with NRD in Cyberpunk 2077.

Check out NVIDIA’s solutions for direct lighting and indirect lighting, and access NVIDIA resources for game development.

Learn more about graphics with NVIDIA at SIGGRAPH ‘22 and watch NVIDIA’s special address, presented by NVIDIA’s CEO and other senior leaders, to hear the latest graphics announcements.

Rush Into August This GFN Thursday With 38 New Games on GeForce NOW

It’s the first GFN Thursday of the month and you know the drill — GeForce NOW is bringing a big batch of games to the cloud.

Get ready for 38 exciting titles like Saints Row and Rumbleverse arriving on the GeForce NOW library in August. Members can kick off the month streaming 13 new games today, including Retreat to Enen with RTX ON.

Arriving in August

This month is packed full of new games streaming across GeForce NOW-supported devices. Gamers have 38 new titles to look forward to, including exciting new releases like Saints Row and Rumbleverse, which can be played on Macs only through the power of the GeForce cloud.

Saints Row on GeForce NOW
It feels so good to be bad. Play like a boss streaming ‘Saints Row’ this month on GeForce NOW.

Members will be able to visit the Weird Wild West of Santo Ileso, a vibrant city rife with crime in Deep Silver’s explosive franchise reboot of Saints Row. Embark on criminal ventures as the future Boss, form the Saints with allies Neenah, Kevin and Eli, take down competing gangs, and build your criminal empire to become truly Self Made.

Gamers will also be able to throw down in Rumbleverse, a new, free-to-play, 40-person Brawler Royale where anyone can be a champion. Customize your fighter by mixing and matching unique items and launch your way into the battlefield, streaming at full PC quality to mobile devices.

RTX 3080 members will also be able to play these and the other 1,300+ titles in the GeForce NOW library streaming in 4K resolution at 60 frames per second, or 1440p at 120 FPS on PC and Mac native apps.

Catch the full list of games coming to the cloud later this August.

Play New Games Today

Great gaming in August starts with 13 new games now ready to stream.

Retreat to Enen
Undertake a rite of passage to find your place in a world that narrowly avoided the extinction of humanity.

RTX 3080 and Priority members can play titles like Retreat to Enen with RTX ON support for beautiful, cinematic graphics. RTX 3080 members also get perks of ultra-low latency and maximized eight-hour gaming sessions to enjoy all of the new gaming goodness.

Catch all of the games ready to play today.

Say Bye to July

In addition to the 13 games announced in July, an extra 13 joined over the month.

And a few games announced last month didn’t make it, due to shifts in their release dates:

  • Grimstar: Welcome to the Savage Planet (Steam)
  • Panzer Arena: Prologue (Steam)
  • Turbo Sloths (Steam)

With all of these new games on the way, it’s a good time to take a look back and enjoy the games that have been bringing the heat over the summer. Let us know your favorites on Twitter or in the comments below.

NVIDIA Jetson AGX Orin 32GB Production Modules Now Available; Partner Ecosystem Appliances and Servers Arrive

Bringing new AI and robotics applications and products to market, or supporting existing ones, can be challenging for developers and enterprises.

The NVIDIA Jetson AGX Orin 32GB production module — available now — is here to help.

Nearly three dozen technology providers in the NVIDIA Partner Network worldwide are offering commercially available products powered by the new module, which provides up to a 6x performance leap over the previous generation.

With a wide range of offerings from Jetson partners, developers can build and deploy feature-packed Orin-powered systems sporting cameras, sensors, software and connectivity suited for edge AI, robotics, AIoT and embedded applications.

Production-ready systems with options for peripherals enable customers to tackle challenges in industries from manufacturing, retail and construction to agriculture, logistics, healthcare, smart cities, last-mile delivery and more.

Helping Build More Capable AI-Driven Products Faster 

Traditionally, developers and engineers have been limited in their ability to handle multiple concurrent data streams for complex application environments. They’ve faced strict latency requirements, energy-efficiency constraints, and issues with high-bandwidth wireless connectivity. And they need to be able to easily manage over-the-air software updates.

They’ve also been forced to include multiple chips in their designs to harness the compute resources needed to process diverse, ever-growing amounts of data.

NVIDIA Jetson AGX Orin overcomes all of these challenges.

The Jetson AGX Orin developer kit, capable of up to 275 trillion operations per second, supports multiple concurrent AI application pipelines with an NVIDIA Ampere architecture GPU, next-generation deep learning and vision accelerators, high-speed I/O, and fast memory bandwidth.

With Jetson AGX Orin, customers can develop solutions using the largest and most complex AI models to solve problems such as natural language understanding, 3D perception and multi-sensor fusion.

The four Jetson Orin-based production modules, announced at GTC, offer customers a full range of server-class AI performance. The Jetson AGX Orin 32GB module is available to purchase now, while the 64GB version will be available in November. Two Orin NX production modules are coming later this year.

The production systems are supported by the NVIDIA Jetson software stack, which has enabled thousands of enterprises and millions of developers to build and deploy fully accelerated AI solutions on Jetson.

On top of JetPack SDK, which includes the NVIDIA CUDA-X accelerated stack, Jetson Orin supports multiple NVIDIA platforms and frameworks: Isaac for robotics, DeepStream for computer vision, Riva for natural language understanding, TAO Toolkit to accelerate model development with pretrained models, and Metropolis, an application framework, set of developer tools and partner ecosystem that brings visual data and AI together to improve operational efficiency and safety across industries.

Customers are bringing their next-generation edge AI and robotics applications to market much faster by first emulating any Jetson Orin-based production module on the Jetson AGX Orin developer kit.

Expanding Developer Community and Jetson Partner Ecosystem 

More than 1 million developers and over 6,000 companies are building commercial products on the NVIDIA Jetson edge AI and robotics computing platform to create and deploy autonomous machines and edge AI applications.

With over 150 members, the growing Jetson partner ecosystem offers a vast range of support, including from companies specializing in AI software, hardware and application design services, cameras, sensors and peripherals, developer tools and development systems.

Some 32 partners offer commercially available products, powered by the new Jetson AGX Orin module, that are packed with options to help support cutting-edge applications and accelerate time to market.

Developers looking for carrier boards and full hardware systems will find a range of options from AAEON, Auvidea, Connect Tech, MiiVii, Plink-AI, Realtimes and TZTEK to serve their needs.

Over 350 camera and sensor options are available from Allied Vision, Appropho, Basler AG, e-Con Systems, Framos, Leopard Imaging, LIPS, Robosense, Shenzhen Sensing, Stereolabs, Thundersoft, Unicorecomm and Velodyne. These can support challenging indoor/outdoor lighting conditions, as well as capabilities like lidars for mapping, localization and navigation in robotics and autonomous machines.

For comprehensive software support like device management, operating systems (Yocto & Realtime OS), AI software and toolkits, developers can look to Allxon, Cogniteam, Concurrent Realtime, Deci AI, DriveU, Novauto, RidgeRun, and Sequitur Labs.

And for connectivity options, including WiFi 6/6E, LTE and 5G, developers can check out the product offerings from Telit, Quectel, Infineon and Silex.

The new NVIDIA Jetson AGX Orin 32GB production module is available in the Jetson store from retail and distribution partners worldwide.  
