Latest ‘I AM AI’ Video Features Four-Legged Robots, Smart Cell Analysis, Tumor-Tracking Tech and More

“I am a visionary,” says an AI, kicking off the latest installment of NVIDIA’s I AM AI video series.

Launched in 2017, I AM AI has become the iconic opening for GTC keynote addresses by NVIDIA founder and CEO Jensen Huang. Each video, with its AI-created narration and soundtrack, documents the newest advances in artificial intelligence and their impact on the world.

The latest, which debuted at GTC last week, showcases how NVIDIA technologies enable AI to take on complex tasks in the world’s most challenging environments, from farms and traffic intersections to museums and research labs.

Here’s a sampling of the groundbreaking AI innovations featured in the video.

Accuray Radiotherapy System Treats Lung Tumors 

Lung tumors can move as much as two inches with every breath — making it difficult to protect healthy lung tissue while targeting the tumor for treatment.

Bay Area-based radiation therapy company Accuray offers Radixact, an AI-powered system that uses motion-tracking capabilities to follow a tumor’s movement and deliver treatment with sub-millimeter accuracy.

The system’s respiratory motion synchronization feature, which works in real time, matches treatment to the natural rhythm of patients’ breathing cycles, allowing them to breathe normally during treatment.

Radixact, which can take precise imagery of the tumor from any angle, is powered by NVIDIA RTX GPUs.

ANYmal Robots Learn to Walk on Their Own

The Robotic Systems Lab, at ETH Zurich, in collaboration with Swiss-Mile, is embracing the future of robotic mobility.

The Swiss research lab fitted the four-legged robot ANYmal with wheels so that it can learn to stand, walk and drive — all on its own and in a matter of minutes.

Built on the NVIDIA Jetson edge AI platform and trained with Isaac Gym, the robot’s combination of legs and wheels enables it to carry tools and overcome obstacles like steps or stairs. Its AI-powered cameras and processing of laser scanning data allow it to perceive and create maps of its environment — indoors or outdoors.

The robot can help with delivery services, search-and-rescue missions, industrial inspection and more.

Sanctuary AI Robots Give a Helping Hand

Canadian startup Sanctuary AI aims “to create the world’s first human-like intelligence in general-purpose robots to help people work more safely, efficiently and sustainably.”

Built using NVIDIA Isaac Sim, Sanctuary AI’s general-purpose robots are highly dexterous — that is, great with their hands. They use their human-like fingers for a myriad of complex, precision tasks like opening Ziploc bags, handling pills or using almost any hand tool designed for a person.

The robots’ built-in cognitive architecture enables them to observe, assess and act on any task humans might need help with. Sanctuary AI aims to one day see its technology help with construction on the moon.

Sanctuary AI is a member of NVIDIA Inception, a program designed to nurture cutting-edge startups. Every member receives a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, opportunities to connect with investors, awareness support and technology assistance.

Scopio Accelerates Blood Cell Analysis

Another NVIDIA Inception member, Scopio, uses NVIDIA RTX GPUs to perform real-time, super-resolution analysis of blood, searching for hidden threats in every cell.

The company is transforming cell morphology with its microscopy scanning devices and Full-Field Peripheral Blood Smear application, which for the first time gives hematology labs and clinicians access to full-field scans of blood, with all cells imaged at 100x resolution.

The application runs Scopio’s machine learning algorithms to detect, classify and quantify blood cells — and help flag abnormalities, which are automatically documented in a digital report. This enhances workflow efficiency for labs and clinicians by more than 60 percent.

To learn more about the latest AI innovations, watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address in replay:


Teens Develop Handwriting-Recognition AI for Detecting Parkinson’s Disease

When Tanish Tyagi published his first research paper a year ago on deep learning to detect dementia, it started a family-driven pursuit.

Great-grandparents in his family had suffered from Parkinson’s, a genetic disease that affects more than 10 million people worldwide. So the now 16-year-old turned to that next, together with his sister, Riya, 14.

The siblings, from Short Hills, New Jersey, published a research paper in the fall about using machine learning to detect Parkinson’s disease by focusing on micrographia, a handwriting disorder that’s a marker for Parkinson’s.

They aim to make a model widely accessible so that early detection is possible for people around the world with limited access to clinics.

“Can we make some real change, can we not only impact our own family, but also see what’s out there and explore what we can do about something that might be a part of our lives in the future?” said Riya.

The Tyagis, who did the research over their summer break, attend prestigious U.S. boarding school Phillips Exeter Academy, alma mater to Mark Zuckerberg, Nobel Prize winners and one U.S. president.

When they aren’t busy with school or extracurricular research, they might be found pitching their STEM skills-focused board game (pictured above), available to purchase through Kickstarter.

Spotting Micrographia for Signs

Tanish decided to pursue research on Parkinson’s in February 2021, when he was just 15. He had recently learned about micrographia, a handwriting disorder that is a common symptom of Parkinson’s.

Micrographia shows up as abnormally small handwriting and reflects tremors, involuntary muscle contractions and slowed movement in the hands.

Not long after, Tanish heard a talk on Parkinson’s by Penn State University researchers Ming Wang and Lijun Zhang. He sought their guidance on using micrographia for detection, and they agreed to supervise the project. Wang is also working with labs at Massachusetts General Hospital in connection with this research.

“Tanish and Riya’s work aims to enhance prediction of micrographia by performing secondary analysis of public handwriting images and adopting state-of-the-art machine learning methods. The findings could help patients receive early diagnosis and treatment for better healthcare outcomes,” said Dr. Zhang, an associate professor at the Institute for Personalized Medicine at Penn State University.

In their paper, the Tyagis used NVIDIA GPU-driven machine learning for feature extraction of micrographia characteristics. Their dataset included open-source images of drawing exams from 53 healthy people and 105 Parkinson’s patients. They extracted several features from these images that allowed them to analyze tremors in writing.

“These are features that we had identified from different papers, and that we saw others had had success with,” said Riya.

With a larger and more balanced dataset, their high prediction accuracy of about 93 percent could get even better, said Tanish.
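For readers curious what this feature-extraction step might look like in practice, here is a hypothetical sketch (not the Tyagis’ published code) that measures a few micrographia-style cues, such as letter size, size variability and ink density, from a scanned handwriting image using OpenCV:

```python
# Hypothetical sketch of micrographia-style feature extraction from a scanned
# handwriting or drawing-exam image -- illustrative only, not the published method.
import cv2
import numpy as np

def extract_features(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Binarize so ink pixels become white (255) on a black background.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Contours roughly correspond to individual strokes or letters.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    heights = [cv2.boundingRect(c)[3] for c in contours if cv2.contourArea(c) > 5]

    return {
        "mean_stroke_height": float(np.mean(heights)),  # small text is a micrographia marker
        "height_variability": float(np.std(heights)),   # irregular letter sizing
        "ink_density": float(binary.mean() / 255.0),    # proportion of inked pixels
    }
```

Features like these, computed per image, become the inputs to a classifier trained on the healthy and Parkinson’s examples.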

Developing a CNN for Diagnosis

Tanish had previously used his lab’s NVIDIA GeForce RTX 3080 GPU on a natural language processing project for dementia research. But neither sibling had much experience with computer vision before they began the Parkinson’s project.

Currently, the two are working on a convolutional neural network with transfer learning to put together a model that could be helpful for real-time diagnosis, said Riya.

“We’re working on processing the image from a user by feeding it into the model and then returning comprehensive results so that the user can really understand the diagnosis that the model is making,” Tanish said.
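As a rough illustration of that transfer-learning approach, and not the Tyagis’ actual model, the sketch below freezes an ImageNet-pretrained backbone and trains only a new two-class head to separate healthy from Parkinson’s handwriting images:

```python
# A minimal transfer-learning sketch: reuse a pretrained CNN's visual features and
# retrain only the final layer for a binary healthy-vs-Parkinson's prediction.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                     # freeze the learned visual features

model.fc = nn.Linear(model.fc.in_features, 2)       # new head: healthy vs. Parkinson's

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone keeps training practical on a small dataset, which is one reason transfer learning suits projects with limited data.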

But first the Tyagis said they would like to increase the size of their dataset to improve the model’s accuracy. Their aim is to develop the model further and build a website. They want Parkinson’s detection to be so easy that people can fill out a handwriting assessment form and submit it for detection.

“It could be deployed to the general public and used in clinical settings, and that would be just amazing,” said Tanish.


What Is a Transformer Model?

If you want to ride the next big wave in AI, grab a transformer.

They’re not the shape-shifting toy robots on TV or the trash-can-sized tubs on telephone poles.

So, What’s a Transformer Model?

A transformer model is a neural network that learns context and thus meaning by tracking relationships in sequential data like the words in this sentence.

Transformer models apply an evolving set of mathematical techniques, called attention or self-attention, to detect subtle ways even distant data elements in a series influence and depend on each other.

First described in a 2017 paper from Google, transformers are among the newest and most powerful classes of models invented to date. They’re driving a wave of advances in machine learning some have dubbed transformer AI.

Stanford researchers called transformers “foundation models” in an August 2021 paper because they see them driving a paradigm shift in AI. The “sheer scale and scope of foundation models over the last few years have stretched our imagination of what is possible,” they wrote.

What Can Transformer Models Do?

Transformers are translating text and speech in near real-time, opening meetings and classrooms to diverse and hearing-impaired attendees.

They’re helping researchers understand the chains of genes in DNA and amino acids in proteins in ways that can speed drug design.

Transformers, sometimes called foundation models, are already being used with many data sources for a host of applications.

Transformers can detect trends and anomalies to prevent fraud, streamline manufacturing, make online recommendations or improve healthcare.

People use transformers every time they search on Google or Microsoft Bing.

The Virtuous Cycle of Transformer AI

Any application using sequential text, image or video data is a candidate for transformer models.

That enables these models to ride a virtuous cycle in transformer AI. Created with large datasets, transformers make accurate predictions that drive their wider use, generating more data that can be used to create even better models.

Stanford researchers say transformers mark the next stage of AI’s development, what some call the era of transformer AI.

“Transformers made self-supervised learning possible, and AI jumped to warp speed,” said NVIDIA founder and CEO Jensen Huang in his keynote address this week at GTC.

Transformers Replace CNNs, RNNs

Transformers are in many cases replacing convolutional and recurrent neural networks (CNNs and RNNs), the most popular types of deep learning models just five years ago.

Indeed, 70 percent of arXiv papers on AI posted in the last two years mention transformers. That’s a radical shift from a 2017 IEEE study that reported RNNs and CNNs were the most popular models for pattern recognition.

No Labels, More Performance

Before transformers arrived, users had to train neural networks with large, labeled datasets that were costly and time-consuming to produce. By finding patterns between elements mathematically, transformers eliminate that need, making available the trillions of images and petabytes of text data on the web and in corporate databases.
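Conceptually, that self-supervision amounts to hiding part of the data and training the model to fill it back in. A toy sketch of the objective, using made-up token IDs rather than a real tokenizer or transformer:

```python
# A toy illustration of a self-supervised objective: mask one token and train a
# model to predict it from context -- the label comes from the data itself.
import torch

tokens = torch.tensor([[5, 27, 3, 91, 14]])      # a tiny, made-up token sequence
mask_position = 2
labels = tokens[:, mask_position].clone()        # the "answer" is drawn from the data

masked = tokens.clone()
masked[:, mask_position] = 0                     # 0 stands in for a [MASK] token

# Any sequence model can be trained this way; here, a tiny embedding + linear scorer.
vocab_size, dim = 100, 16
embed = torch.nn.Embedding(vocab_size, dim)
head = torch.nn.Linear(dim, vocab_size)

logits = head(embed(masked).mean(dim=1))         # predict the missing token
loss = torch.nn.functional.cross_entropy(logits, labels)
loss.backward()
```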

In addition, the math that transformers use lends itself to parallel processing, so these models can run fast.

Transformers now dominate popular performance leaderboards like SuperGLUE, a benchmark developed in 2019 for language-processing systems.

How Transformers Pay Attention

Like most neural networks, transformer models are basically large encoder/decoder blocks that process data.

Small but strategic additions to these blocks (shown in the diagram below) make transformers uniquely powerful.

A look under the hood from a presentation by Aidan Gomez, one of eight co-authors of the 2017 paper that defined transformers.

Transformers use positional encoders to tag data elements coming in and out of the network. Attention units follow these tags, calculating a kind of algebraic map of how each element relates to the others.

Attention queries are typically executed in parallel by calculating a matrix of equations in what’s called multi-headed attention.
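Stripped to its essentials, a single attention head projects the input into queries, keys and values, then uses their dot products to weight how much each element borrows from every other. A minimal sketch, with random matrices standing in for learned projections:

```python
# A minimal sketch of scaled dot-product self-attention: every token scores its
# relationship to every other token, and those scores weight a blended output.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (sequence_length, d_model); w_*: learned projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / (k.shape[-1] ** 0.5)   # how strongly each token attends to the others
    weights = F.softmax(scores, dim=-1)       # the "algebraic map" of relationships
    return weights @ v                        # context-aware representation of each token

d_model = 64
x = torch.randn(10, d_model)                  # 10 tokens, positional encoding already added
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
```

Multi-headed attention simply runs several such heads side by side and combines their outputs.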

With these tools, computers can see the same patterns humans see.

Self-Attention Finds Meaning

For example, in the sentence:

She poured water from the pitcher to the cup until it was full. 

We know “it” refers to the cup, while in the sentence:

She poured water from the pitcher to the cup until it was empty.

We know “it” refers to the pitcher.

“Meaning is a result of relationships between things, and self-attention is a general way of learning relationships,” said Ashish Vaswani, a former senior staff research scientist at Google Brain who led work on the seminal 2017 paper.

“Machine translation was a good vehicle to validate self-attention because you needed short- and long-distance relationships among words,” said Vaswani.

“Now we see self-attention is a powerful, flexible tool for learning,” he added.

How Transformers Got Their Name

Attention is so key to transformers that the Google researchers almost used the term as the name for their 2017 model. Almost.

“Attention Net didn’t sound very exciting,” said Vaswani, who started working with neural nets in 2011.

Jakob Uszkoreit, a senior software engineer on the team, came up with the name Transformer.

“I argued we were transforming representations, but that was just playing semantics,” Vaswani said.

The Birth of Transformers

In the paper for the 2017 NeurIPS conference, the Google team described their transformer and the accuracy records it set for machine translation.

Thanks to a basket of techniques, they trained their model in just 3.5 days on eight NVIDIA GPUs, a small fraction of the time and cost of training prior models. They trained it on datasets with up to a billion pairs of words.

“It was an intense three-month sprint to the paper submission date,” recalled Aidan Gomez, a Google intern in 2017 who contributed to the work.

“The night we were submitting, Ashish and I pulled an all-nighter at Google,” he said. “I caught a couple hours sleep in one of the small conference rooms, and I woke up just in time for the submission when someone coming in early to work opened the door and hit my head.”

It was a wakeup call in more ways than one.

“Ashish told me that night he was convinced this was going to be a huge deal, something game changing. I wasn’t convinced, I thought it would be a modest gain on a benchmark, but it turned out he was very right,” said Gomez, now CEO of Cohere, a startup providing a language processing service based on transformers.

A Moment for Machine Learning

Vaswani recalls the excitement of seeing the results surpass similar work published by a Facebook team using CNNs.

“I could see this would likely be an important moment in machine learning,” he said.

A year later, another Google team tried processing text sequences both forward and backward with a transformer. That helped capture more relationships among words, improving the model’s ability to understand the meaning of a sentence.

Their Bidirectional Encoder Representations from Transformers (BERT) model set 11 new records and became part of the algorithm behind Google search.

Within weeks, researchers around the world were adapting BERT for use cases across many languages and industries “because text is one of the most common data types companies have,” said Anders Arpteg, a 20-year veteran of machine learning research.

Putting Transformers to Work

Soon transformer models were being adapted for science and healthcare.

DeepMind, in London, advanced the understanding of proteins, the building blocks of life, using a transformer called AlphaFold2, described in a recent Nature article. It processed amino acid chains like text strings to set a new high-water mark for describing how proteins fold, work that could speed drug discovery.

AstraZeneca and NVIDIA developed MegaMolBART, a transformer tailored for drug discovery. It’s a version of the pharmaceutical company’s MolBART transformer, trained on a large, unlabeled database of chemical compounds using the NVIDIA Megatron framework for building large-scale transformer models.

Reading Molecules, Medical Records

“Just as AI language models can learn the relationships between words in a sentence, our aim is that neural networks trained on molecular structure data will be able to learn the relationships between atoms in real-world molecules,” said Ola Engkvist, head of molecular AI, discovery sciences and R&D at AstraZeneca, when the work was announced last year.

Separately, the University of Florida’s academic health center collaborated with NVIDIA researchers to create GatorTron. The transformer model aims to extract insights from massive volumes of clinical data to accelerate medical research.

Transformers Grow Up

Along the way, researchers found larger transformers performed better.

For example, researchers from the Rostlab at the Technical University of Munich, which helped pioneer work at the intersection of AI and biology, used natural-language processing to understand proteins. In 18 months, they graduated from using RNNs with 90 million parameters to transformer models with 567 million parameters.

Rostlab researchers show language models trained without labeled samples picking up the signal of a protein sequence.

The OpenAI lab showed bigger is better with its Generative Pretrained Transformer (GPT). The latest version, GPT-3, has 175 billion parameters, up from 1.5 billion for GPT-2.

With the extra heft, GPT-3 can respond to a user’s query even on tasks it was not specifically trained to handle. It’s already being used by companies including Cisco, IBM and Salesforce.

Tale of a Mega Transformer

NVIDIA and Microsoft hit a high watermark in November, announcing the Megatron-Turing Natural Language Generation model (MT-NLG) with 530 billion parameters. It debuted along with a new framework, NVIDIA NeMo Megatron, that aims to let any business create its own billion- or trillion-parameter transformers to power custom chatbots, personal assistants and other AI applications that understand language.

MT-NLG had its public debut as the brain for TJ, the Toy Jensen avatar that gave part of the keynote at NVIDIA’s November 2021 GTC.

“When we saw TJ answer questions — the power of our work demonstrated by our CEO — that was exciting,” said Mostofa Patwary, who led the NVIDIA team that trained the model.

“Megatron helps me answer all those tough questions Jensen throws at me,” TJ said at GTC 2022.

Creating such models is not for the faint of heart. MT-NLG was trained using hundreds of billions of data elements, a process that required thousands of GPUs running for weeks.

“Training large transformer models is expensive and time-consuming, so if you’re not successful the first or second time, projects might be canceled,” said Patwary.

Trillion-Parameter Transformers

Today, many AI engineers are working on trillion-parameter transformers and applications for them.

“We’re constantly exploring how these big models can deliver better applications. We also investigate in what aspects they fail, so we can build even better and bigger ones,” Patwary said.

To provide the computing muscle those models need, our latest accelerator — the NVIDIA H100 Tensor Core GPU — packs a Transformer Engine and supports a new FP8 format. That speeds training while preserving accuracy.

With those and other advances, “transformer model training can be reduced from weeks to days,” said Huang at GTC.

MoE Means More for Transformers

Last year, Google researchers described the Switch Transformer, one of the first trillion-parameter models. It uses AI sparsity, a complex mixture-of-experts (MoE) architecture and other advances to drive performance gains in language processing and up to 7x increases in pre-training speed.

The encoder for the Switch Transformer, the first model to have up to a trillion parameters.

For its part, Microsoft Azure worked with NVIDIA to implement an MoE transformer for its Translator service.
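In simplified form, a mixture-of-experts layer routes each token to one of several expert networks, so only a small slice of the model’s parameters is active for any given token. The toy sketch below shows top-1 routing in the spirit of the Switch Transformer; it is a conceptual illustration, not Google’s or Microsoft’s implementation:

```python
# A toy mixture-of-experts (MoE) layer: a learned router sends each token to a
# single expert, so only a fraction of the parameters runs per token.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(num_experts)])

    def forward(self, tokens):                      # tokens: (num_tokens, d_model)
        gate = self.router(tokens).softmax(dim=-1)  # routing probabilities per token
        choice = gate.argmax(dim=-1)                # top-1 routing, as in Switch Transformer
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            picked = choice == i
            if picked.any():
                # scale by the gate value so routing stays differentiable
                out[picked] = expert(tokens[picked]) * gate[picked, i].unsqueeze(-1)
        return out

moe = TinyMoE()
y = moe(torch.randn(32, 64))   # 32 tokens pass through, each visiting just one expert
```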

Tackling Transformers’ Challenges

Now some researchers aim to develop simpler transformers with fewer parameters that deliver performance similar to the largest models.

“I see promise in retrieval-based models that I’m super excited about because they could bend the curve,” said Gomez, of Cohere, noting the Retro model from DeepMind as an example.

Retrieval-based models learn by submitting queries to a database. “It’s cool because you can be choosy about what you put in that knowledge base,” he said.

In the race for higher performance, transformer models have grown larger.

The ultimate goal is to “make these models learn like humans do from context in the real world with very little data,” said Vaswani, now co-founder of a stealth AI startup.

He imagines future models that do more computation upfront so they need less data and sport better ways users can give them feedback.

“Our goal is to build models that will help people in their everyday lives,” he said of his new venture.

Safe, Responsible Models

Other researchers are studying ways to eliminate bias or toxicity if models amplify wrong or harmful language. For example, Stanford created the Center for Research on Foundation Models to explore these issues.

“These are important problems that need to be solved for safe deployment of models,” said Shrimai Prabhumoye, a research scientist at NVIDIA who’s among many across the industry working in the area.

“Today, most models look for certain words or phrases, but in real life these issues may come out subtly, so we have to consider the whole context,” added Prabhumoye.

“That’s a primary concern for Cohere, too,” said Gomez. “No one is going to use these models if they hurt people, so it’s table stakes to make the safest and most responsible models.”

Beyond the Horizon

Vaswani imagines a future where self-learning, attention-powered transformers approach the holy grail of AI.

“We have a chance of achieving some of the goals people talked about when they coined the term ‘general artificial intelligence’ and I find that north star very inspiring,” he said.

“We are in a time where simple methods like neural networks are giving us an explosion of new capabilities.”

Transformer training and inference will get significantly accelerated with the NVIDIA H100 GPU.


NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI

When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly — making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering.

NVIDIA applied this approach to a popular new technology called neural radiance fields, or NeRF. The result, dubbed Instant NeRF, is the fastest NeRF technique to date, achieving more than 1,000x speedups in some cases. The model requires just seconds to train on a few dozen still photos  — plus data on the camera angles they were taken from — and can then render the resulting 3D scene within tens of milliseconds.

“If traditional 3D representations like polygonal meshes are akin to vector images, NeRFs are like bitmap images: they densely capture the way light radiates from an object or within a scene,” says David Luebke, vice president for graphics research at NVIDIA. “In that sense, Instant NeRF could be as important to 3D as digital cameras and JPEG compression have been to 2D photography — vastly increasing the speed, ease and reach of 3D capture and sharing.”

Showcased in a session at NVIDIA GTC this week, Instant NeRF could be used to create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps.

In a tribute to the early days of Polaroid images, NVIDIA Research recreated an iconic photo of Andy Warhol taking an instant photo, turning it into a 3D scene using Instant NeRF.

What Is a NeRF? 

NeRFs use neural networks to represent and render realistic 3D scenes based on an input collection of 2D images.

Collecting data to feed a NeRF is a bit like being a red carpet photographer trying to capture a celebrity’s outfit from every angle — the neural network requires a few dozen images taken from multiple positions around the scene, as well as the camera position of each of those shots.

In a scene that includes people or other moving elements, the quicker these shots are captured, the better. If there’s too much motion during the 2D image capture process, the AI-generated 3D scene will be blurry.

From there, a NeRF essentially fills in the blanks, training a small neural network to reconstruct the scene by predicting the color of light radiating in any direction, from any point in 3D space. The technique can even work around occlusions — when objects seen in some images are blocked by obstructions such as pillars in other images.
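At its core, the network a NeRF trains is a small function from a 3D position and viewing direction to a color and a density. The sketch below shows that mapping in simplified form; it is illustrative only, omitting the positional encodings and ray-marching integration a real NeRF uses:

```python
# A simplified sketch of the function a NeRF learns: a small MLP mapping a 3D point
# and a viewing direction to an RGB color and a volume density.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # (x, y, z) position + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB color + density
        )

    def forward(self, position, direction):
        out = self.net(torch.cat([position, direction], dim=-1))
        rgb = torch.sigmoid(out[..., :3])          # colors kept in [0, 1]
        density = torch.relu(out[..., 3:])         # non-negative density
        return rgb, density

model = TinyNeRF()
rgb, density = model(torch.rand(1024, 3), torch.rand(1024, 3))
# Rendering a pixel integrates many such (rgb, density) samples along a camera ray.
```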

Accelerating 1,000x With Instant NeRF

While estimating the depth and appearance of an object based on a partial view is a natural skill for humans, it’s a demanding task for AI.

Creating a 3D scene with traditional methods takes hours or longer, depending on the complexity and resolution of the visualization. Bringing AI into the picture speeds things up. Early NeRF models rendered crisp scenes without artifacts in a few minutes, but still took hours to train.

Instant NeRF, however, cuts rendering time by several orders of magnitude. It relies on a technique developed by NVIDIA called multi-resolution hash grid encoding, which is optimized to run efficiently on NVIDIA GPUs. Using a new input encoding method, researchers can achieve high-quality results using a tiny neural network that runs rapidly.
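The sketch below illustrates the spatial-hashing idea in a simplified form: each resolution level hashes grid coordinates into a small table of learned feature vectors, and the concatenated features become the input encoding for the tiny network. It is a conceptual toy that uses nearest-vertex lookup, not the trilinear interpolation or CUDA-level optimizations of the real implementation:

```python
# A rough, simplified sketch of a multi-resolution hash grid encoding.
import torch
import torch.nn as nn

PRIMES = torch.tensor([1, 2654435761, 805459861])   # large primes for spatial hashing

class HashGridEncoding(nn.Module):
    def __init__(self, levels=4, table_size=2**14, features=2, base_res=16):
        super().__init__()
        self.resolutions = [base_res * 2**i for i in range(levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(torch.randn(table_size, features) * 1e-4) for _ in range(levels)]
        )
        self.table_size = table_size

    def forward(self, xyz):                          # xyz in [0, 1], shape (N, 3)
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            cell = (xyz * res).long()                # nearest grid vertex (real code blends 8 corners)
            scaled = cell * PRIMES
            h = (scaled[:, 0] ^ scaled[:, 1] ^ scaled[:, 2]) % self.table_size
            feats.append(table[h])                   # learned feature vector for that cell
        return torch.cat(feats, dim=-1)              # input encoding for the tiny MLP

enc = HashGridEncoding()
features = enc(torch.rand(1024, 3))                  # (1024, levels * features)
```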

The model was developed using the NVIDIA CUDA Toolkit and the Tiny CUDA Neural Networks library. Since it’s a lightweight neural network, it can be trained and run on a single NVIDIA GPU — running fastest on cards with NVIDIA Tensor Cores.

The technology could be used to train robots and self-driving cars to understand the size and shape of real-world objects by capturing 2D images or video footage of them. It could also be used in architecture and entertainment to rapidly generate digital representations of real environments that creators can modify and build on.

Beyond NeRFs, NVIDIA researchers are exploring how this input encoding technique might be used to accelerate multiple AI challenges including reinforcement learning, language translation and general-purpose deep learning algorithms.

To hear more about the latest NVIDIA research, watch the replay of CEO Jensen Huang’s keynote address at GTC below.


Take Control This GFN Thursday With New Stratus+ Controller From SteelSeries

GeForce NOW gives you the power to game almost anywhere, at GeForce quality. And with the latest controller from SteelSeries, members can stay in control of the action on Android and Chromebook devices.

This GFN Thursday takes a look at the SteelSeries Stratus+, now part of the GeForce NOW Recommended program.

And it wouldn’t be Thursday without new games, so get ready for six additions to the GeForce NOW library, including the latest season of Fortnite and a special in-game event for MapleStory that’s exclusive for GeForce NOW members.

The Power to Play, in the Palm of Your Hand

GeForce NOW transforms mobile phones into powerful gaming computers capable of streaming PC games anywhere. The best mobile gaming sessions are backed by recommended controllers, including the new Stratus+ by SteelSeries.

Take control of how you play with the new SteelSeries Stratus+.

The Stratus+ wireless controller combines precision with comfort, delivering a full console experience on a mobile phone and giving a competitive edge to Android and Chromebook gamers. Gamers can simply connect to any Android mobile or Chromebook device with Bluetooth Low Energy and play with a rechargeable battery that lasts up to 90 hours. Or they can wire in to any Windows PC via USB connection.

The controller works great with GeForce NOW’s RTX 3080 membership. Playing on select 120Hz Android phones, members can stream their favorite PC games at up to 120 frames per second.

SteelSeries’ line of controllers is part of the full lineup of GeForce NOW Recommended products, including optimized routers that are perfect in-home networking upgrades.

Get Your Game On

This week brings the start of Fortnite Chapter 3 Season 2, “Resistance.” Building has been wiped out. To help maintain cover, you now have an overshield and new tactics like sprinting, mantling and more. You can even board an armored battle bus to be a powerful force, or attach a cow catcher to your vehicle for extra ramming power. Join the Seven in the final battle against the IO to free the Zero Point. Don’t forget to grab the Chapter 3 Season 2 Battle Pass to unlock characters like Tsuki 2.0, the familiar foe Gunnar and The Origin.

Adventure and rewards await on this exclusive GeForce NOW quest.

Nexon, maker of popular global MMORPG MapleStory, is launching a special in-game quest — exclusive to GeForce NOW members. Level 30+ Maplers who log in using GeForce NOW will receive a GeForce NOW quest that grants players a Lil Boo Pet, and a GeForce NOW Event Box that can be opened 24 hours after acquiring. But hurry – this quest is only available March 24-April 28.

And GFN Thursday means more games every week. This week’s additions include the open-ended, zombie-infested sandbox Project Zomboid. Play alone or survive with friends thanks to multiplayer support across persistent servers.

Finally, a game that proves you can learn valuable skills by watching TV. Won’t your mother be proud?

Feeling zombie shy? That’s okay, there’s always something new to play on GeForce NOW. Here’s the complete list of six titles coming this week:

Finally, the release timing for Lumote: The Mastermote Chronicles has shifted and will join GeForce NOW at a later date.

With the cloud making new ways to play PC games across your devices possible, we’ve got a question that may get you a bit nostalgic this GFN Thursday. Let us know your answer on Twitter:


Orchestrated to Perfection: NVIDIA Data Center Grooves to Tune of Millionfold Speedups

The hum of a bustling data center is music to an AI developer’s ears — and NVIDIA data centers have found a rhythm of their own, grooving to the swing classic “Sing, Sing, Sing” in this week’s GTC keynote address.

The lighthearted video, created with the NVIDIA Omniverse platform, features Louis Prima’s iconic music track, re-recorded at the legendary Abbey Road Studios. Its drumming, dancing data center isn’t just for kicks — it celebrates the ability of NVIDIA data center solutions to orchestrate unprecedented AI performance.

Cutting-edge AI is tackling the world’s biggest challenges — but to do so, it needs the most advanced data centers, with thousands of hardware and software components working in perfect harmony.

At GTC, NVIDIA is showcasing the latest data center technologies poised to accelerate next-generation applications in business, research and art. To keep up with the growing demand for computing these applications, optimization is needed across the entire computing stack, as well as innovation at the level of distributed algorithms, software and systems.

Performance growth at the bottom of the computing stack, based on Moore’s law, can’t keep pace with the requirements of these applications. Moore’s law, which predicted a 2x growth in computing performance every other year, has yielded to Huang’s law — that GPUs will double AI performance every year.

Advancements across the entire computing stack, from silicon to application-level software, have contributed to an unprecedented million-x speedup in accelerated computing in the last decade. It’s not just about faster GPUs, DPUs and CPUs. Computing based on neural network models, advanced network technologies and distributed software algorithms all contribute to the data center innovation needed to keep pace with the demands of ever-growing AI models.

Through these innovations, the data center has become the single unit of computing. Thousands of servers work seamlessly as one, with NVIDIA Magnum IO software and new breakthroughs like the NVIDIA NVLink Switch System unveiled at GTC combining to link advanced AI infrastructure.

Orchestrated to perfection, an NVIDIA-powered data center will support innovations that are yet to be even imagined.

Developing a Digital Twin of the Data Center

The GTC video performance showcases a digital twin NVIDIA is building of its own data centers — a virtual representation of the physical supercomputer that NVIDIA designers and engineers can use to test new configurations or software builds before releasing updates to the physical system.

In addition to enabling continuous integration and delivery, a digital twin of a data center can be used to optimize operational efficiency, including response time, resource utilization and energy consumption.

Digital twins can help teams predict equipment failures, proactively replace weak links and test improvement measures before applying them. They can even provide a testing ground to fine-tune data centers for specific enterprise users or applications.

Applicable across industries and applications, digital twin technology is already being used as a powerful tool for warehouse optimizations, climate simulations, smart factory development and renewable energy planning.

In NVIDIA’s data center digital twin, viewers can spot flagship technologies including NVIDIA DGX SuperPOD and EGX-based NVIDIA-Certified systems with BlueField DPUs and InfiniBand switches. The performance also features a special appearance by Toy Jensen, an application built with Omniverse Avatar.

The visualization was developed in NVIDIA Omniverse, a platform for real-time world simulation and 3D design collaboration. Omniverse connects science and art by bringing together creators, developers, engineers and AIs across industries to work together in a shared virtual world.

Omniverse digital twins are true to reality, accurately simulating the physics and materials of their real counterparts. The realism allows Omniverse users to test out processes, interactions and new technologies in the digital space before moving to the physical world.

Every factory, neighborhood and city could one day be replicated as a digital twin. With connected sensors powered by edge computing, these sandbox environments can be continuously updated to reflect changes to the corresponding real-world assets or systems. They can help develop next-generation autonomous robots, smart cities and 5G networks.

A digital twin can learn the laws of physics, chemistry, biology and more, storing this information in its computing brain.

Just as kingdoms centuries ago sent explorers to travel the world and return with new knowledge, edge sensors and robots are today’s explorers for digital twin environments. Each sensor brings new observations back to the digital twin’s brain, which consolidates the data, learns from it and updates the autonomous systems within the virtual environment. This collective learning will tune digital twins to perfection.

Hear about the latest innovations in AI, accelerated computing and virtual world simulation at GTC, streaming online through March 24. Register free and learn more about data center acceleration in the session replay, “How to Achieve Millionfold Speedups in Data Center Performance.” Watch NVIDIA founder and CEO Jensen Huang’s keynote address below:


What Is Path Tracing?

Turn on your TV. Fire up your favorite streaming service. Grab a Coke. A demo of the most important visual technology of our time is as close as your living room couch.

Propelled by an explosion in computing power over the past decade and a half, path tracing has swept through visual media.

It brings big effects to the biggest blockbusters, casts subtle light and shadow on the most immersive melodramas and has propelled the art of animation to new levels.

More’s coming.

Path tracing is going real time, unleashing interactive, photorealistic 3D environments filled with dynamic light and shadow, reflections and refractions.

So what is path tracing? The big idea behind it is seductively simple, connecting innovators in the arts and sciences over the span of half a millennium.

What’s the Difference Between Rasterization and Ray Tracing?

First, let’s define some terms, and how they’re used today to create interactive graphics — graphics that can react in real time to input from a user, such as in video games.

The first, rasterization, is a technique that produces an image as seen from a single viewpoint. It’s been at the heart of GPUs from the start. Modern NVIDIA GPUs can generate over 100 billion rasterized pixels per second. That’s made rasterization ideal for real-time graphics, like gaming.

Ray tracing is a more powerful technique than rasterization. Rather than being constrained to finding out what is visible from a single point, it can determine what is visible from many different points, in many different directions. Starting with the NVIDIA Turing architecture, NVIDIA GPUs have provided specialized RTX hardware to accelerate this difficult computation. Today, a single GPU can trace billions of rays per second.

Being able to trace all of those rays makes it possible to simulate how light scatters in the real world much more accurately than is possible with rasterization. However, we still must answer the questions, how will we simulate light and how will we bring that simulation to the GPU?

What’s Ray Tracing? Just Follow the String

To better answer that question, it helps to understand how we got here.

David Luebke, NVIDIA vice president of graphics research, likes to begin the story in the 16th century with Albrecht Dürer — one of the most important figures of the Northern European Renaissance — who used string and weights to replicate a 3D image on a 2D surface.

Dürer made it his life’s work to bring classical and contemporary mathematics together with the arts, achieving breakthroughs in expressiveness and realism.

The string’s the thing: Albrecht Dürer was the first to describe what’s now known as “ray tracing,” a technique for creating accurate representations of 3D objects on 2D surfaces, in Underweysung der Messung (Nuremberg, 1538).

In 1538 with Treatise on Measurement, Dürer was the first to describe the idea of ray tracing. Seeing how Dürer described the idea is the easiest way to get your head around the concept.

Just think about how light illuminates the world we see around us.

Now imagine tracing those rays of light backward from the eye with a piece of string like the one Dürer used, to the objects that light interacts with. That’s ray tracing.
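In code, the core operation is finding where a ray cast backward from the eye first meets the scene. A minimal sketch with a single sphere:

```python
# A minimal sketch of the "string from the eye" idea: cast a ray from the camera
# and find where it hits a sphere. Production ray tracers do this for billions of
# rays against far more complex geometry.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c                 # direction is assumed normalized (a = 1)
    if disc < 0:
        return None                        # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Trace one ray from the eye at the origin, straight down the -z axis,
# toward a sphere sitting 5 units away.
hit = ray_sphere_hit(origin=(0, 0, 0), direction=(0, 0, -1), center=(0, 0, -5), radius=1)
print(hit)  # 4.0 -- the ray strikes the sphere's near surface
```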

Ray Tracing for Computer Graphics

Turner Whitted’s 1979 paper, “An improved illumination model for shaded display,” jump-started a ray-tracing renaissance.

In 1969, more than 400 years after Dürer’s death, IBM’s Arthur Appel showed how the idea of ray tracing could be brought to computer graphics, applying it to computing visibility and shadows.

A decade later, Turner Whitted was the first to show how this idea could capture reflection, shadows and refraction, explaining how the seemingly simple concept could make much more sophisticated computer graphics possible. Progress was rapid in the following few years.

In 1984, Lucasfilm’s Robert Cook, Thomas Porter and Loren Carpenter detailed how ray tracing could incorporate many common filmmaking techniques — including motion blur, depth of field, penumbras, translucency and fuzzy reflections — that were, until then, unattainable in computer graphics.

Jim Kajiya’s 1986 paper, “The Rendering Equation,” not only outlined an elegant, physics-based equation for describing how light moves around in a scene, it outlined an efficient way to put it to work.

Two years later, Caltech professor Jim Kajiya’s crisp, seven-page paper, “The Rendering Equation,” connected computer graphics with physics by way of ray tracing and introduced the path-tracing algorithm, which makes it possible to accurately represent the way light scatters throughout a scene.

What’s Path Tracing?

In developing path tracing, Kajiya turned to an unlikely inspiration: the study of radiative heat transfer, or how heat spreads throughout an environment. Ideas from that field led him to introduce the rendering equation, which describes how light passes through the air and scatters from surfaces.
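In its commonly written modern form, the rendering equation says that the light leaving a point x in direction ω_o is whatever the surface emits plus everything it reflects from incoming directions over the hemisphere Ω:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

Here f_r is the surface’s reflectance (the BRDF), L_i is the incoming light and the (ω_i · n) term accounts for the angle at which that light arrives.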

The rendering equation is concise, but not easy to solve. Computer graphics scenes are complex, with billions of triangles not being unusual today. There’s no way to solve the rendering equation directly, which led to Kajiya’s second crucial innovation.

Kajiya showed that statistical techniques could be used to solve the rendering equation: even if it isn’t solved directly, it’s possible to solve it along the paths of individual rays. If it is solved along the path of enough rays to approximate the lighting in the scene accurately, photorealistic images are possible.

And how is the rendering equation solved along the path of a ray? Ray tracing.

The statistical techniques Kajiya applied are known as Monte Carlo integration and date to the earliest days of computers in the 1940s. Developing improved Monte Carlo algorithms for path tracing remains an open research problem to this day; NVIDIA researchers are at the forefront of this area, regularly publishing new techniques that improve the efficiency of path tracing.
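In practice, the Monte Carlo estimator replaces that integral with an average over N randomly sampled directions, each weighted by the probability of having chosen it:

```latex
L_o(x, \omega_o) \approx L_e(x, \omega_o) + \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_k, \omega_o)\, L_i(x, \omega_k)\, (\omega_k \cdot n)}{p(\omega_k)}
```

Tracing more sample paths, or choosing them more cleverly, is what drives the noise down toward the true solution, and better sampling strategies are exactly what much of that ongoing research targets.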

By putting these two ideas together — a physics-based equation for describing the way light moves around a scene and the use of Monte Carlo simulation to help choose a manageable number of paths back to a light source — Kajiya outlined the fundamental techniques that would become the standard for generating photorealistic computer-generated images.

His approach transformed a field dominated by a variety of disparate rendering techniques into one that — because it mirrored the physics of the way light moved through the real world — could put simple, powerful algorithms to work that could be applied to reproduce a large number of visual effects with stunning levels of realism.

Path Tracing Comes to the Movies

In the years after its introduction in 1986, path tracing was seen as an elegant technique — the most accurate approach known — but it was completely impractical. The images in Kajiya’s original paper were just 256 by 256 pixels, yet they took over 7 hours to render on an expensive mini-computer that was far more powerful than the computers available to most other people.

But with the increase in computing power driven by Moore’s law — which described the exponential increase in computing power driven by advances that allowed chipmakers to double the number of transistors on microprocessors every 18 months — the technique became more and more practical.

Beginning with movies such as 1998’s A Bug’s Life, ray tracing was used to enhance the computer-generated imagery in more and more motion pictures. And in 2006, the first entirely path-traced movie, Monster House, stunned audiences. It was rendered using the Arnold software that was co-developed at Solid Angle SL (since acquired by Autodesk) and Sony Pictures Imageworks.

The film was a hit — grossing more than $140 million worldwide. And it opened eyes about what a new generation of computer animation could do. As more computing power became available, more movies came to rely on the technique, producing images that are often indistinguishable from those captured by a camera.

The problem: it still takes hours to render a single image, and sprawling collections of servers — known as “render farms” — run continuously for months to render all the images needed for a complete movie. Bringing that to real-time graphics would take an extraordinary leap.

What Does This Look Like in Gaming?

For many years, the idea of path tracing in games was impossible to imagine. While many game developers would have agreed that they would want to use path tracing if it had the performance necessary for real-time graphics, the performance was so far off of real time that path tracing seemed unattainable.

Yet as GPUs have continued to become faster and faster, and now with the widespread availability of RTX hardware, real-time path tracing is in sight. Just as movies began incorporating some ray-tracing techniques before shifting to path tracing — games have started by putting ray tracing to work in a limited way.

Right now a growing number of games are partially ray traced. They combine traditional rasterization-based rendering techniques with some ray-tracing effects.

So what does path traced mean in this context? It could mean a mix of techniques. Game developers could rasterize the primary ray, and then path trace the lighting for the scene.

Rasterization is equivalent to casting one set of rays from a single point that stop at the first thing they hit. Ray tracing takes this further, casting rays from many points in any direction. Path tracing simulates the true physics of light, using ray tracing as one component of a larger light-simulation system.

This would mean all lights in a scene are sampled stochastically — using Monte Carlo or other techniques — both for direct illumination, to light objects or characters, and for global illumination, to light rooms or environments with indirect lighting.

To do that, rather than tracing a ray back through one bounce, rays would be traced over multiple bounces, presumably back to their light source, just as Kajiya outlined.
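A production path tracer is a substantial piece of software, but the multi-bounce loop at its heart is compact. The toy, diffuse-only sketch below traces a single ray through a scene of two spheres, one of them emissive; it is illustrative only, omitting cameras, many samples per pixel and the importance-sampling techniques real renderers depend on:

```python
# A tiny diffuse-only path tracer for one ray: bounce it around the scene in
# randomly sampled directions and accumulate the light it picks up along the way.
import numpy as np

rng = np.random.default_rng(0)

def hit_sphere(origin, direction, center, radius):
    """Distance along the ray to the sphere, or None if it misses."""
    oc = origin - center
    b = 2.0 * direction.dot(oc)
    disc = b * b - 4.0 * (oc.dot(oc) - radius * radius)
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def sample_hemisphere(normal):
    """Uniform random direction in the hemisphere around the surface normal."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v if v.dot(normal) > 0 else -v

def trace_path(origin, direction, spheres, max_bounces=4):
    color = np.zeros(3)
    throughput = np.ones(3)                          # light surviving the bounces so far
    for _ in range(max_bounces):
        hits = [(hit_sphere(origin, direction, s["center"], s["radius"]), s) for s in spheres]
        hits = [(t, s) for t, s in hits if t is not None]
        if not hits:
            break                                    # the ray escaped the scene
        t, sphere = min(hits, key=lambda h: h[0])
        point = origin + t * direction
        normal = (point - sphere["center"]) / sphere["radius"]
        color += throughput * sphere["emission"]     # light sources contribute energy
        new_direction = sample_hemisphere(normal)    # Monte Carlo: pick a random bounce
        # Diffuse BRDF (albedo/pi), cosine term and 1/pdf (2*pi) combine to 2*albedo*cos.
        throughput *= sphere["albedo"] * 2.0 * new_direction.dot(normal)
        origin, direction = point, new_direction
    return color

scene = [
    {"center": np.array([0.0, 0.0, -3.0]), "radius": 1.0,
     "albedo": np.array([0.8, 0.2, 0.2]), "emission": np.zeros(3)},   # a red diffuse ball
    {"center": np.array([0.0, 3.0, -3.0]), "radius": 1.0,
     "albedo": np.zeros(3), "emission": np.array([4.0, 4.0, 4.0])},   # an emissive "lamp"
]
print(trace_path(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]), scene))
```

Averaging many such paths per pixel is what turns the noisy individual estimates into the photorealistic images path tracing is known for.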

A few games are doing this already, and the results are stunning.

Microsoft has released a plugin that puts path tracing to work in Minecraft.

Quake II, the classic shooter — often a sandbox for advanced graphics techniques — can also be fully path traced, thanks to a new plugin.

There’s clearly more to be done. And game developers will need to know customers have the computing power they need to experience path-traced gaming.

Gaming is the most challenging visual computing project of all: requiring high visual quality and the speed to interact with fast-twitch gamers.

Expect techniques pioneered here to spill out to every aspect of our digital lives.

What’s Next?

As GPUs continue to grow more powerful, putting path tracing to work is the next logical step.

For example, armed with tools such as Arnold from Autodesk, V-Ray from Chaos Group or Pixar’s Renderman — and powerful GPUs — product designers and architects use ray tracing to generate photorealistic mockups of their products in seconds, letting them collaborate better and skip expensive prototyping.

Ray tracing has proven itself to architects and lighting designers, who are using its capabilities to model how light interacts with their designs.

As GPUs offer ever more computing power, video games are the next frontier for ray tracing and path tracing.

In 2018, NVIDIA announced NVIDIA RTX, a ray-tracing technology that brings real-time, movie-quality rendering to game developers.

NVIDIA RTX, which includes a ray-tracing engine running on NVIDIA Volta and Ampere architecture GPUs, supports ray tracing through a variety of interfaces.

And NVIDIA has partnered with Microsoft to enable full RTX support via Microsoft’s new DirectX Raytracing (DXR) API.

Since then, NVIDIA has continued to develop NVIDIA RTX technology, as more and more developers create games that support real-time ray tracing.

Minecraft even includes support for real-time path tracing, turning the blocky, immersive world into immersive landscapes swathed with light and shadow.

Thanks to increasingly powerful hardware, and a proliferation of software tools and related technologies, more is coming.

As a result, digital experiences — games, virtual worlds and even online collaboration tools — will take on the cinematic qualities of a Hollywood blockbuster.

So don’t get too comfy. What you’re seeing from your living room couch is just a demo of what’s to come in the world all around us.

 


NVIDIA Showcases Novel AI Tools in DRIVE Sim to Advance Autonomous Vehicle Development

Autonomous vehicle development and validation require the ability to replicate real-world scenarios in simulation.

At GTC, NVIDIA founder and CEO Jensen Huang showcased new AI-based tools for NVIDIA DRIVE Sim that accurately reconstruct and modify actual driving scenarios. These tools are enabled by breakthroughs from NVIDIA Research that leverage technologies such as NVIDIA Omniverse platform and NVIDIA DRIVE Map.

Huang demonstrated the methods side-by-side, showing how developers can easily test multiple scenarios in rapid iterations:

Once any scenario is reconstructed in simulation, it can act as the foundation for many different variations — from changing the trajectory of an oncoming vehicle to adding an obstacle in the driving path — giving developers the ability to improve the AI driver.

However, reconstructing real-world driving scenarios and generating realistic simulation data from them is a time- and labor-intensive process. It requires skilled engineers and artists, and even then can be difficult to do.

NVIDIA has implemented two AI-based methods to seamlessly perform this process: virtual reconstruction and neural reconstruction. The first replicates the real-world scenario as a fully synthetic 3D scene, while the second uses neural simulation to augment real-world sensor data.

Both methods are able to expand well beyond recreating a single scenario to generating many new and challenging scenarios. This capability accelerates the continuous AV training, testing and validation pipeline.

Virtual Reconstruction 

In the keynote video above, an entire driving environment and set of scenarios around NVIDIA’s headquarters are reconstructed in 3D using NVIDIA DRIVE Map, Omniverse and DRIVE Sim.

With DRIVE Map, developers have access to a digital twin of a road network in Omniverse. Using tools built on Omniverse, the detailed map is converted into a drivable simulation environment that can be used with NVIDIA DRIVE Sim.

With the reconstructed simulation environment, developers can recreate events, like a close call at an intersection or navigating a construction zone, using camera, lidar and vehicle data from real-world drives.

The platform’s AI helps reconstruct the scenario. First, for each tracked object, an AI looks at camera images and finds the most similar 3D asset available in the DRIVE Sim catalog, in the color that most closely matches the object seen in the video.

Finally, the actual path of the tracked object is recreated; however, there are often gaps because of occlusions. In such cases, an AI-based traffic model is applied to the tracked object to predict what it would have done and fill in the gaps in its trajectory.

Camera and lidar data from real drives are used with AI to reconstruct scenarios.

Virtual reconstruction enables developers to find potentially challenging situations to train and validate the AV system with high-fidelity data generated by physically based sensors and AI behavior models that can create many new scenarios. Data from the scenario can also train the behavior model.

Neural Reconstruction 

The other approach relies on neural simulation rather than synthetically generating the scene, starting with real sensor data then modifying it.

Sensor replay — the process of playing back recorded sensor data to test the AV system’s performance — is a staple of AV development. This process is open loop, meaning the AV stack’s decisions don’t affect the world since all of the data is prerecorded.

A preview of neural reconstruction methods from NVIDIA Research turns this recorded data into a fully reactive and modifiable world — as in the demo, where the originally recorded van driving past the car could be reenacted to swerve right instead. This revolutionary approach allows closed-loop testing and full interaction between the AV stack and the world it’s driving in.

The process starts with recorded driving data. AI identifies the dynamic objects in the scene and removes them to create an exact replica of the 3D environment that can be rendered from new views. Dynamic objects are then reinserted into the 3D scene with realistic AI-based behaviors and physical appearance, accounting for illumination and shadows.

The AV system then drives in this virtual world and the scene reacts accordingly. The scene can be made more complex through augmented reality by inserting other virtual objects, vehicles and pedestrians, which are rendered as if they were part of the real scene and can physically interact with the environment.

Every sensor on the vehicle, including camera and lidar, can be simulated in the scene using AI.

A Virtual World of Possibilities 

These new approaches are driven by NVIDIA’s expertise in rendering, graphics and AI.

As a modular platform, DRIVE Sim supports these capabilities with a foundation of deterministic simulation. It provides the vehicle dynamics, AI-based traffic models, scenario tools and a comprehensive SDK to build any tool needed.

With these two powerful new AI methods, developers can easily move from the real world to the virtual one for faster AV development and deployment.


NVIDIA Inception Introduces New and Updated Benefits for Startup Members to Accelerate Computing

This week at GTC, we’re celebrating – celebrating the amazing and impactful work that developers and startups are doing around the world.

Nowhere is that more apparent than among the members of our global NVIDIA Inception program, designed to nurture cutting-edge startups who are revolutionizing industries. The program is free for startups of all sizes and stages of growth, offering go-to-market support, expertise and technology.

Inception members are doing amazing things on NVIDIA platforms across a multitude of areas, from digital twins and climate science, to healthcare and robotics. Now with over 10,000 members in 110 countries, Inception is a true reflection of the global startup ecosystem.

And we’re continuing momentum by offering new benefits to help startups accelerate even more.

Expanded Benefits

Inception members are now eligible for discounts across the NVIDIA Enterprise Software Suite, including NVIDIA AI Enterprise (NVAIE), Omniverse Enterprise and Riva Enterprise. NVAIE is a cloud-native software suite that is optimized, certified and supported by NVIDIA to streamline AI development and deployment. NVIDIA Omniverse Enterprise positions startups to build high-quality 3D tools or to simplify and accelerate complex 3D workflows. NVIDIA Riva Enterprise helps easily develop real-time applications like virtual assistants, transcription services and chatbots.

These discounts provide Inception members greater access to NVIDIA software tools to build computing applications in alignment with their own solutions.

Another new benefit for Inception members is access to special leasing for NVIDIA DGX systems. Available now for members in the U.S., this offers an enhanced opportunity for startups to leverage DGX to deliver leading solutions for enterprise AI infrastructure at scale.

Inception members continue to receive credits and exclusive discounts for technical self-paced courses and instructor-led workshops through the NVIDIA Deep Learning Institute. Upcoming DLI workshops include “Building Conversational AI Applications” and “Applications of AI for Predictive Maintenance,” and courses include “Building Real-Time Video AI Applications” and “Deploying a Model for Inference at Production Scale.”

A Growing Ecosystem

NVIDIA Inception is home for startups to do all types of interesting work, and welcomes developers in every field, area and industry.

Within the program, healthcare is a leading field, with over 1,600 healthcare startups. This is followed closely by over 1,500 IT services startups, more than 825 media and entertainment (M&E) startups and upwards of 800 video analytics startups. More than 660 robotics startups are members of Inception, paving the way for the next wave of AI through digital and physical robots.

An indicator of Inception’s growing popularity is the increase in startups doing work in emerging areas, such as NVIDIA Omniverse, a development platform for 3D design collaboration and real-time, physically accurate simulation, as well as climate science and more. Several Inception startups are already developing on the Omniverse platform.

Inception member Charisma is leveraging Omniverse to build digital humans for virtual worlds, games and education. The company enters interactive dialogue into the Omniverse Audio2Face app, tapping into NVIDIA V100 Tensor Core GPUs in the cloud.

Another Inception member, RIOS, helps enterprises automate factories, warehouses and supply chain operations by deploying AI-powered end-to-end robotic workcells. The company is harnessing Isaac Sim on Omniverse, which it also uses for customer deployments.

And RADiCAL is developing computer vision technology focused on detecting and reconstructing 3D human motion from 2D content. The startup is already developing on Omniverse to accelerate its work.

In the field of climate science, many Inception members are also doing revolutionary work to push the boundaries of what’s possible.

Inception member TrueOcean is running NVIDIA DGX A100 systems to develop AI algorithms for quantifying carbon dioxide capture within seagrass meadows and for understanding subsea geology. Seagrass meadows can absorb and store carbon in the oxygen-depleted seabed, where it decomposes much more slowly than on land.

In alignment with NVIDIA’s own plans to build the world’s most powerful AI supercomputer for predicting climate change, Inception member Blackshark provides a semantic, photorealistic 3D digital twin of Earth as a plugin for Unreal Engine, relying on Omniverse as one of its platforms for building large virtual geographic environments.

If you’re a startup doing disruptive and exciting development, join NVIDIA Inception today.

Check out GTC sessions on Omniverse and climate change from NVIDIA Inception members. Registration is free. And watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address, which features a new I AM AI video with Inception members HeartDub and PRENAV.

The post NVIDIA Inception Introduces New and Updated Benefits for Startup Members to Accelerate Computing appeared first on NVIDIA Blog.

NVIDIA Omniverse Upgrade Delivers Extraordinary Benefits to 3D Content Creators

At GTC, NVIDIA announced significant updates for millions of creators using the NVIDIA Omniverse real-time 3D design collaboration platform.

The announcements kicked off with updates to the Omniverse apps Create, Machinima and Showroom, with an imminent View release. Powered by GeForce RTX and NVIDIA RTX GPUs, they dramatically accelerate 3D creative workflows.

New Omniverse Connections are expanding the ecosystem and are now available in beta: Unreal Engine 5 Omniverse Connector and the Adobe Substance 3D Material Extension, with the Adobe Substance 3D Painter Omniverse Connector very close behind.

Maxon’s Cinema 4D now has Universal Scene Description (USD) support. Unlocking Cinema 4D workflows via OmniDrive brings deeper integration and flexibility to the Omniverse ecosystem.

Leveraging the Hydra render delegate feature, artists can now use Pixar HDStorm, Chaos V-Ray, Maxon Redshift and OTOY Octane Hydra render delegates within the viewport of all Omniverse apps, with Blender Cycles coming soon.

Whether refining 3D scenes or exporting final projects, artists can switch between the lightning-fast Omniverse RTX Renderer and their preferred renderer, giving them ultimate freedom to create however they like.

The Junk Shop by Alex Treviño. Original Concept by Anaïs Maamar. Note Hydra render delegates displayed in the renderer toggle menu.
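The renderer toggle shown above surfaces the same Hydra machinery exposed by Pixar’s open-source USD libraries. As a rough, non-Omniverse illustration, the sketch below enumerates and selects a render delegate with the UsdImagingGL module; which delegates appear depends entirely on what is installed alongside USD.

```python
# Enumerate and select Hydra render delegates with Pixar's UsdImagingGL engine.
from pxr import UsdImagingGL

engine = UsdImagingGL.Engine()

# List the render delegates registered with this USD build
# (e.g. HdStormRendererPlugin when Storm is installed).
plugins = engine.GetRendererPlugins()
for plugin_id in plugins:
    print(plugin_id)

# Ask the engine to switch delegates; returns False if the plugin cannot load.
if plugins:
    engine.SetRendererPlugin(plugins[0])
```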

These updates and more are available today in the Omniverse launcher, free to download, alongside the March NVIDIA Studio Driver release.

To celebrate the Machinima app update, we’re kicking off the #MadeInMachinima contest, in which artists can remix iconic characters from Squad, Mount & Blade II: Bannerlord and Mechwarrior 5 into a cinematic short in Omniverse Machinima to win NVIDIA Studio laptops. The submission window opens on March 29 and runs through June 27. Visit the contest landing page for details.

Can’t Wait to Create

Omniverse Create allows users to interactively assemble full-fidelity scenes by connecting to their favorite creative apps. Artists can add lighting, simulate physically accurate scenes and choose to render with Omniverse’s advanced RTX Renderer or their favorite Hydra render delegate.

Create version 2022.1 includes USD support for NURBS curves, a type of curve modeling useful for hair, particles and more. Scenes can now be rendered in passes with arbitrary output variables, or AOVs, delivering more control to artists during the compositing stage.
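For a sense of what that NURBS support consumes, here is a minimal sketch that authors a single cubic NURBS curve with Pixar’s UsdGeom schema; the file name and values are placeholders, not assets shipped with Create.

```python
# Author one cubic NURBS curve in a new USD layer.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("hair_strand.usda")
curves = UsdGeom.NurbsCurves.Define(stage, "/Strand")

curves.CreateCurveVertexCountsAttr([4])                       # one curve, four control points
curves.CreatePointsAttr([Gf.Vec3f(0, 0, 0), Gf.Vec3f(1, 2, 0),
                         Gf.Vec3f(2, 2, 0), Gf.Vec3f(3, 0, 0)])
curves.CreateOrderAttr([4])                                   # cubic (order = degree + 1)
curves.CreateKnotsAttr([0, 0, 0, 0, 1, 1, 1, 1])              # clamped knot vector
curves.CreateWidthsAttr([0.05])                               # constant strand width

stage.GetRootLayer().Save()
```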

Animation curve editing is now possible with the addition of a graph editor. The feature will feel familiar to animators who work in creative apps such as Autodesk Maya and Blender, letting them iterate more simply, quickly and intuitively.

The new ActionGraph feature unlocks keyboard shortcuts and user-interface buttons to trigger complex events simultaneously.

Apply different colors and textures with ease in Omniverse Create.

NVIDIA PhysX 5.0 updates provide soft and deformable body support for objects such as fabric, jelly and balloons, adding further realism to scenes with no animation necessary.

VMaterials 2.0, a curated collection of MDL materials and lights, now offers over 900 physically accurate, real-world materials that artists can apply to their scenes with just a double click, no shader writing necessary.
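For context on what applying an MDL material involves at the USD level, the sketch below authors and binds one with Pixar’s UsdShade schema; the MDL module path and material name are placeholders rather than a specific VMaterials entry.

```python
# Author a simple MDL-based material in USD and bind it to a prim.
from pxr import Usd, UsdShade, Sdf

stage = Usd.Stage.CreateNew("materials_demo.usda")
mesh = stage.DefinePrim("/World/Table", "Mesh")              # stand-in geometry

material = UsdShade.Material.Define(stage, "/World/Looks/OakWood")
shader = UsdShade.Shader.Define(stage, "/World/Looks/OakWood/Shader")

# Point the shader at an MDL module and a material definition inside it
# (placeholder names; a real scene would reference an installed .mdl file).
shader.CreateImplementationSourceAttr(UsdShade.Tokens.sourceAsset)
shader.SetSourceAsset(Sdf.AssetPath("OmniPBR.mdl"), "mdl")
shader.SetSourceAssetSubIdentifier("OmniPBR", "mdl")

# Expose the shader as the material's MDL surface and bind it to the mesh.
material.CreateSurfaceOutput("mdl").ConnectToSource(shader.ConnectableAPI(), "out")
UsdShade.MaterialBindingAPI(mesh).Bind(material)

stage.GetRootLayer().Save()
```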

Several new Create features are also available in beta:

  • AnimGraph, based on OmniGraph, brings characters to life with a new graph editor for simple, no-code, realistic animations.
  • New animation retargeting allows artists to map animations from one character to another, automating complex animation tasks such as joint mapping, reference pose matching and previewing. When used with AnimGraph, it can automate character rigging, saving artists countless hours of manual, tedious work.
  • Users can drag and drop assets they own, or click on others to purchase directly from the asset’s product page. Nearly 1 million assets from TurboSquid by Shutterstock, Sketchfab and Reallusion ActorCore are directly searchable in the Omniverse asset browser.

This otherworldly set of features is Create-ing infectious excitement for 3D workflows.

Machinima Magic

Omniverse Machinima 2022.1 beta provides tools for artists to remix, recreate and redefine animated video game storytelling through immersive visualization, collaborative design and photorealistic rendering.

The integration of NVIDIA Maxine’s body pose estimation feature gives users the ability to track and capture motion in real time using a single camera — without requiring a MoCap suit — with live conversion from a 2D camera capture to a 3D model.

Prerecorded videos can now be converted to animations with a new easy-to-use interface.

The retargeting feature applies these captured animations to custom-built skeletons, providing an easy way to animate a character with a webcam. No fancy, expensive device necessary, just a webcam.
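Conceptually, the retargeting step maps each tracked joint onto the corresponding joint of the custom skeleton, frame by frame. The sketch below illustrates that idea with entirely hypothetical joint names and interfaces; it is not the Machinima or Maxine API.

```python
# Hypothetical sketch: applying webcam-captured joint rotations to a custom rig.

# Mapping from tracked joint names to the target skeleton's joint names (assumed).
JOINT_MAP = {
    "left_shoulder":  "L_Shoulder_JNT",
    "left_elbow":     "L_Elbow_JNT",
    "right_shoulder": "R_Shoulder_JNT",
    "right_elbow":    "R_Elbow_JNT",
}

def retarget_frame(tracked_rotations, rig):
    """Copy each tracked joint's rotation onto the mapped joint of the target rig."""
    for source_joint, target_joint in JOINT_MAP.items():
        if source_joint in tracked_rotations:
            rig.set_rotation(target_joint, tracked_rotations[source_joint])

def retarget_stream(pose_stream, rig):
    """Apply retargeting frame by frame as pose estimates arrive from the camera."""
    for tracked_rotations in pose_stream:   # e.g. dicts of joint name -> quaternion
        retarget_frame(tracked_rotations, rig)
```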

Sequencer functionality updates include a new user interface for easier navigation; new tools including splitting, looping, hold and scale; more drag-and-drop functionality to simplify pipelines; and a new audio graph display.

Stitching and building cinematics is now as intuitive as editing video projects.

Step Into the Showroom

Omniverse Showroom 2022.1 includes seven new scenes that invite even the newest users to get started and embrace the incredible possibilities and technology within the platform.

Artists can engage with tech demos showcasing PhysX rigid- and soft-body dynamics, Flow for combustible fluid, smoke and fire, and Blast for destruction and fractures.

Enjoy the View

Omniverse View 2022.1 will enable non-technical project reviewers to collaboratively and interactively review 3D design projects in stunning photorealism, with several astonishing new features.

Markup gives artists the ability to add 2D feedback based on their viewpoint, including shapes and scribbles, for 3D feedback in the cloud.

Turntable places an interactive scene on a virtual table that can be rotated to see how realistic lighting conditions affect the scene in real time, which is advantageous for high-end movie production and architectural review.

Teleport and Waypoints allow artists to easily jump around their scenes and preset fully interactive views of Omniverse scenes for sharing.

Omniverse Ecosystem Expansion Continues

New beta Omniverse Connectors and extensions add variety and versatility to 3D creative workflows.

Now available, an Omniverse Connector for Unreal Engine 5 allows live-sync workflows.

The Adobe Substance 3D Material extension is now available, with a beta Substance 3D Painter Omniverse Connector coming soon, enabling artists to achieve more seamless, live-sync texture and material workflows.

Maxon’s Cinema 4D now supports USD and is compatible with OmniDrive, unlocking Omniverse workflows for visualization specialists.

Finally, a new CAD importer enables product designers to convert 26 popular CAD formats into Omniverse USD scenes.

More Machinima Magic — With Prizes

The #MadeInMachinima contest asks participants to build scenes and assets — composed of characters from Squad, Mount & Blade II: Bannerlord and Mechwarrior 5 — using Omniverse Machinima.

Rooster Teeth, the legendary studio behind the Halo machinima series Red vs. Blue, produced this magnificent cinematic short in Machinima. Take a look to see what’s possible.

Machinima expertise, while welcome, is not required; this contest is for creators of all levels. Three talented winners will get an NVIDIA Studio laptop, powerful and purpose-built with vivid color displays and blazing-fast memory and storage, to boost future Omniverse sessions.

Machinima will be prominently featured at the Game Developers Conference, where game artists, producers, developers and designers come together to exchange ideas, educate and inspire. At the show, we also launched Omniverse for Developers, providing a more collaborative environment for the creation of virtual worlds.

NVIDIA offers GDC sessions to assist content creators, covering virtual worlds and AI, real-time ray tracing, and developer tools. Check out the complete list.

Launch or download Omniverse today.

The post NVIDIA Omniverse Upgrade Delivers Extraordinary Benefits to 3D Content Creators appeared first on NVIDIA Blog.
