Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

With deep learning, amputees can now control their prosthetics by simply thinking through the motion.

Jules Anh Tuan Nguyen spoke with NVIDIA AI Podcast host Noah Kravitz about his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Using neural decoders and deep learning, this system allows humans to control just about anything digital with their thoughts, including playing video games and a piano.

Nguyen is a postdoctoral researcher in the biomedical engineering department at the University of Minnesota. His work with his team is detailed in a paper titled “A Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control.”

Key Points From This Episode:

  • Nguyen and his team created an AI-based system that uses electrodes implanted in the arm to translate electrical signals from the nerves into commands for the appropriate arm, hand and finger movements, with all of the processing built into the arm itself.
  • The two main objectives of the system are to make the neural interface wireless and to optimize the AI engine and neural decoder to consume less power — enough for a person to use it for at least eight hours a day before having to recharge it.

Tweetables:

“To make the amputee move and feel just like a real hand, we have to establish a neural connection for the amputee to move their finger and feel it just like a missing hand.” — Jules Anh Tuan Nguyen [7:24]

“The idea behind it can extend to many things. You can control virtual reality. You can control a robot, a drone — the possibility is endless. With this nerve interface and AI neural decoder, suddenly you can manipulate things with your mind.” — Jules Anh Tuan Nguyen [22:07]

You Might Also Like:

AI for Hobbyists: DIYers Use Deep Learning to Shoo Cats, Harass Ants

Robots recklessly driving cheap electric kiddie cars. Autonomous machines shining lasers at ants — and spraying water at bewildered cats — for the amusement of cackling grandchildren. Listen in to hear NVIDIA engineer Bob Bond and Make: Magazine Executive Editor Mike Senese explain how they’re entertaining with deep learning.

A USB Port for Your Body? Startup Uses AI to Connect Medical Devices to Nervous System

Think of it as a USB port for your body. Emil Hewage is the co-founder and CEO at Cambridge Bio-Augmentation Systems, a neural engineering startup. The U.K. startup is building interfaces that use AI to help plug medical devices into our nervous systems.

Behind the Scenes at NeurIPS With NVIDIA and CalTech’s Anima Anandkumar

Anima Anandkumar, NVIDIA’s director of machine learning research and Bren professor at CalTech’s CMS Department, talks about NeurIPS and discusses the transition from supervised to unsupervised and self-supervised learning, which she views as the key to next-generation AI.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic appeared first on The Official NVIDIA Blog.


All AI Do Is Win: NVIDIA Research Nabs ‘Best in Show’ with Digital Avatars at SIGGRAPH

In a turducken of a demo, NVIDIA researchers stuffed four AI models into a serving of digital avatar technology for SIGGRAPH 2021’s Real-Time Live showcase — winning the Best in Show award.

The showcase, one of the most anticipated events at the world’s largest computer graphics conference, held virtually this year, celebrates cutting-edge real-time projects spanning game technology, augmented reality and scientific visualization. It featured a lineup of jury-reviewed interactive projects, with presenters hailing from Unity Technologies, Rensselaer Polytechnic Institute, the NYU Future Reality Lab and more.

Broadcasting live from our Silicon Valley headquarters, the NVIDIA Research team presented a collection of AI models that can create lifelike virtual characters for projects such as bandwidth-efficient video conferencing and storytelling.

The demo featured tools to generate digital avatars from a single photo, animate avatars with natural 3D facial motion and convert text to speech.

“Making digital avatars is a notoriously difficult, tedious and expensive process,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, in the presentation. But with AI tools, “there is an easy way to create digital avatars for real people as well as cartoon characters. It can be used for video conferencing, storytelling, virtual assistants and many other applications.”

AI Aces the Interview

In the demo, two NVIDIA research scientists played the part of an interviewer and a prospective hire speaking over video conference. Over the course of the call, the interviewee showed off the capabilities of AI-driven digital avatar technology to communicate with the interviewer.

The researcher playing the part of interviewee relied on an NVIDIA RTX laptop throughout, while the other used a desktop workstation powered by RTX A6000 GPUs. The entire pipeline can also be run on GPUs in the cloud.

While sitting in a campus coffee shop, wearing a baseball cap and a face mask, the interviewee used the Vid2Vid Cameo model to appear clean-shaven in a collared shirt on the video call (seen in the image above). The AI model creates realistic digital avatars from a single photo of the subject — no 3D scan or specialized training images required.

“The digital avatar creation is instantaneous, so I can quickly create a different avatar by using a different photo,” he said, demonstrating the capability with another two images of himself.

Instead of transmitting a video stream, the researcher’s system sent only his voice — which was then fed into the NVIDIA Omniverse Audio2Face app. Audio2Face generates natural motion of the head, eyes and lips to match audio input in real time on a 3D head model. This facial animation went into Vid2Vid Cameo to synthesize natural-looking motion with the presenter’s digital avatar.
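The data flow described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names and bodies are placeholders standing in for the real Audio2Face and Vid2Vid Cameo components, whose actual SDK APIs are not shown here.

```python
# Hypothetical sketch of the avatar pipeline's data flow. Function names
# and bodies are placeholders, NOT the real NVIDIA SDK APIs; the real
# components are Audio2Face and Vid2Vid Cameo running on RTX GPUs.

def capture_audio(seconds: int, rate: int = 16_000) -> list[float]:
    # Placeholder for microphone capture; only this audio crosses the network.
    return [0.0] * (seconds * rate)

def audio2face(audio: list[float], fps: int = 30) -> list[dict]:
    # Placeholder for Audio2Face: maps audio to per-frame 3D facial motion
    # (head, eye and lip parameters) on a 3D head model, in real time.
    n_frames = fps * len(audio) // 16_000
    return [{"jaw_open": 0.0, "head_yaw": 0.0} for _ in range(n_frames)]

def vid2vid_cameo(reference_photo: str, motion: list[dict]) -> list[str]:
    # Placeholder for Vid2Vid Cameo: animates a single reference photo with
    # the facial motion, synthesizing one output frame per motion frame.
    return [f"frame_{i:04d}" for i, _ in enumerate(motion)]

audio = capture_audio(seconds=2)           # audio only, no video stream
motion = audio2face(audio)                 # 3D facial animation
frames = vid2vid_cameo("me.jpg", motion)   # photorealistic avatar frames
print(len(frames))                         # 60 frames for 2 s at 30 fps
```

The key point of the design is visible in the signatures: the network carries only the audio, and everything visual is synthesized on the receiving end from a single still photo.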

The technology isn't limited to photorealistic digital avatars: the researcher also fed his speech through Audio2Face and Vid2Vid Cameo to voice an animated character. Using NVIDIA StyleGAN, he explained, developers can create infinite digital avatars modeled after cartoon characters or paintings.

The models, optimized to run on NVIDIA RTX GPUs, easily deliver video at 30 frames per second. The pipeline is also highly bandwidth efficient, since the presenter sends only audio data over the network instead of a high-resolution video feed.

Taking it a step further, the researcher showed that when his coffee shop surroundings got too loud, the RAD-TTS model could convert typed messages into his voice — replacing the audio fed into Audio2Face. The breakthrough text-to-speech, deep learning-based tool can synthesize lifelike speech from arbitrary text inputs in milliseconds.

RAD-TTS can synthesize a variety of voices, helping developers bring book characters to life or even rap songs like “The Real Slim Shady” by Eminem, as the research team showed in the demo’s finale.

SIGGRAPH continues through Aug. 13. Check out the full lineup of NVIDIA events at the conference and catch the premiere of our documentary, “Connecting in the Metaverse: The Making of the GTC Keynote,” on Aug. 11.


Three’s Company: NVIDIA Studio 3D Showcase at SIGGRAPH Spotlights NVIDIA Omniverse Update, New NVIDIA RTX A2000 Desktop GPU, August Studio Driver

The future of 3D graphics is on display at the SIGGRAPH 2021 virtual conference, where NVIDIA Studio is leading the way, showcasing exclusive benefits that NVIDIA RTX technologies bring to creators working with 3D workflows.

It starts with NVIDIA Omniverse, an immersive and connected shared virtual world where artists create one-of-a-kind digital scenes, perfect 3D models, design beautiful buildings and more with endless creative possibilities. The Omniverse platform continues to expand, gaining Blender USD support, a new Adobe Substance 3D plugin, and a new extension, GANverse3D — designed to make 3D modeling easier with AI.

Omniverse is currently in open beta and free for NVIDIA RTX and GeForce RTX GPU users. With today’s launch of the NVIDIA RTX A2000 GPU, millions more 3D artists and content creators will have the opportunity to explore the platform’s capabilities.

The latest creative app updates, along with Omniverse and RTX A2000 GPUs, gain improved levels of support in the August NVIDIA Studio Driver, available for download today.

Omniverse Expands the 3D Metaverse at SIGGRAPH

NVIDIA announced that Blender, the world’s leading open-source 3D animation application, will include support for Pixar’s Universal Scene Description (USD) in the Blender 3.0 release, enabling artists to use the application with Omniverse production pipelines.

The open-source 3D file framework gives software partners and artists multiple ways to extend and connect to Omniverse through USD adoption, building a plugin, or an Omniverse Connector, extension or app.

NVIDIA also unveiled an experimental Blender 3.0 alpha USD branch that includes more advanced USD and material support, which will be available soon for Blender users everywhere.

In addition, NVIDIA and Adobe are collaborating on a new Substance 3D plugin that will enable Substance Material support in Omniverse.

With the plugin, materials created in Adobe Substance 3D or imported from the Substance 3D Asset Library can be adjusted directly in Omniverse. 3D artists will save valuable time when making changes as they don’t need to export and reupload assets from Substance 3D Designer and Substance 3D Sampler.

We’re also releasing a new Omniverse extension, GANverse3D – Image2Car, which makes 3D modeling easier with AI. It’s the first of a collection of extensions that will comprise the Omniverse AI Toy Box.

GANverse3D was built on a generative adversarial network trained on 2D photos, synthesizing multiple views of thousands of objects to predict 3D geometry, texture and part segmentation labels. This process could turn a single photo of a car into a 3D model that can drive around a virtual scene, complete with realistic headlights, blinkers and wheels.

The AI Toy Box extension allows inexperienced 3D artists to easily create scenes, and experienced artists to bring new enhancements to their multi-app workflows.

Here’s GANverse3D in action with an Omniverse-connected workflow featuring Omniverse Create, Reallusion Character Creator 3 and Adobe Photoshop.

For a further dive into the latest innovations in 3D, including Omniverse, watch the NVIDIA special address at SIGGRAPH on demand.

Omniverse plays a critical role in many creative projects, like the GTC keynote with NVIDIA CEO Jensen Huang.

Get a sneak peek of how a small team of artists was able to blur the line between real and rendered.

The full documentary releases alongside the NVIDIA SIGGRAPH panel on Wednesday, August 11, at 11 a.m. PT.

The world’s leading artists use NVIDIA RTX and Omniverse to create beautiful work and stunning worlds. Hear from them directly in the second edition of NVIDIA’s RTX All Stars, a free e-book that spotlights creative professionals.

More RTX, More 3D Creative Freedom

NVIDIA RTX A2000 joins the RTX lineup as the most powerful, low-profile, dual-slot GPU for 3D creators. The new desktop GPU encompasses the latest RTX technologies in the NVIDIA Ampere architecture, including:

  • 2nd-gen RT Cores for real-time ray tracing with up to 5x performance from last gen with RTX ON.
  • 3rd-gen Tensor Cores to power and accelerate creative AI features.
  • PCIe Gen 4 for 2x the throughput, accelerating data paths in and out of the GPU.
  • Up to 6GB of ECC GPU memory for rendering and exporting large files.

RTX A2000-based systems will be available starting in October.

For on-the-go creators, the NVIDIA RTX A2000 laptop GPU — available in Studio laptops shipping today like the Lenovo ThinkPad P17 Gen 2 — is the most power-efficient professional RTX laptop GPU, bringing ray tracing and AI capabilities to thin and light mobile workstations.

The NVIDIA RTX A2000 GPUs support a wealth of creative workflows, including 3D modeling and Omniverse, whether behind a desktop or anywhere a laptop may travel.

August Brings Creative App Updates and Latest Studio Driver

Several exciting updates to cutting-edge creative apps shipped recently. The August Studio Driver, available today, sharpens support for all of them.

Topaz Sharpen AI v3.2 offers refinements to AI models accelerated by RTX GPUs and Tensor Cores, adding a 1.5x motion blur setting and Too Soft/Very Blurry features that further reduce artifacts.

In-app masking has also been improved with real-time processing of mask strokes and customization controls for the overlay display.

Reallusion Character Creator v3.43, the first third-party app with Audio2Face integration, now allows artists to export characters from Character Creator to Omniverse as USD files with Audio2Face-compliant meshes. This allows facial and lip animations to be completely AI-driven solely from voice input, regardless of language, simplifying the process of animating a 3D character.

Capture One 21 v14.3.0 adds a new Magic Brush tool to create complex masks for layer editing based on image content in a split second, working on an underlying processed image from the raw file. This process is hardware accelerated and is up to 3x faster when using the GPU compared to the CPU.

Support for these app updates, plus the new features in Omniverse, is only a click away. Download the August Studio Driver.

Best Studio Laptops for 3D Workflows

3D workflows range from modeling scenes in real time with complex lights and shadows to visualizing architectural marvels, in or out of Omniverse, with massive exports. These tasks demand serious computational power, requiring advanced NVIDIA RTX and GeForce RTX GPUs to get jobs done quickly.

These Studio laptops are built to handle demanding 3D creative workflows:

  • Lenovo P1 Gen 4 is stylish and lightweight, at less than 4 pounds. It comes in a ton of configurations, including GeForce RTX 3070 and 3080, plus RTX A4000 and A5000 laptop GPUs.
  • Dell Precision 7760 is Dell’s thinnest, smallest and lightest 17-inch mobile workstation. With up to an RTX A5000 and 16GB of video memory, it’s great for working with massive 3D models or in multi-app workflows.
  • Acer ConceptD 7 Ezel features Acer’s patented Ezel Hinge with a 15.6-inch, 4K PANTONE-validated touchscreen display. Available later this month, it also comes with up to a GeForce RTX 3080 laptop GPU and 16GB of video memory.

Set to make a splash later this year is the HP ZBook Studio G8. Engineered for heavy 3D work, it comes well-equipped with up to an RTX A5000 or GeForce RTX 3080 laptop GPU, perfect for on-the-go creativity.

Browse the NVIDIA Studio Shop for more great options.

Stay up to date on all things Studio by subscribing to the NVIDIA Studio newsletter and following us on Facebook, Twitter and Instagram.


What Is the Metaverse?

What is the metaverse? The metaverse is a shared virtual 3D world, or worlds, that are interactive, immersive, and collaborative.

Just as the physical universe is a collection of worlds that are connected in space, the metaverse can be thought of as a bunch of worlds, too.

Massive online social games, like battle royale juggernaut Fortnite and user-created virtual worlds like Minecraft and Roblox, reflect some elements of the idea.

Video-conferencing tools, which link far-flung colleagues together amidst the global COVID pandemic, are another hint at what’s to come.

But the vision laid out by Neal Stephenson’s 1992 classic novel “Snow Crash” goes well beyond any single game or video-conferencing app.

The metaverse will become a platform that’s not tied to any one app or any single place — digital or real, explains Rev Lebaredian, vice president of simulation technology at NVIDIA.

And just as virtual places will be persistent, so will the objects and identities of those moving through them, allowing digital goods and identities to move from one virtual world to another, and even into our world, with augmented reality.

The metaverse will become a platform that’s not tied to any one place, physical or digital.

“Ultimately we’re talking about creating another reality, another world, that’s as rich as the real world,” Lebaredian says.

Those ideas are already being put to work with NVIDIA Omniverse, which, simply put, is a platform for connecting 3D worlds into a shared virtual universe.

Omniverse is in use across a growing number of industries for projects such as design collaboration and creating “digital twins,” simulations of real-world buildings and factories.

BMW Group uses NVIDIA Omniverse to create a future factory, a perfect “digital twin” designed entirely in digital and simulated from beginning to end in NVIDIA Omniverse.

How NVIDIA Omniverse Creates, Connects Worlds Within the Metaverse

So how does Omniverse work? We can break it down into three big parts.

NVIDIA Omniverse weaves together the Universal Scene Description interchange framework invented by Pixar with technologies for modeling physics, materials, and real-time path tracing.

The first is Omniverse Nucleus. It’s a database engine that connects users and enables the interchange of 3D assets and scene descriptions.

Once connected, designers doing modeling, layout, shading, animation, lighting, special effects or rendering can collaborate to create a scene.

Omniverse Nucleus relies on USD, or Universal Scene Description, an interchange framework invented by Pixar in 2012.

Released as open-source software in 2016, USD provides a rich, common language for defining, packaging, assembling and editing 3D data for a growing array of industries and applications.

Lebaredian and others say USD is to the emerging metaverse what Hypertext Markup Language, or HTML, was to the web: a common language that can be used, and advanced, to support the metaverse.

Multiple users can connect to Nucleus, transmitting and receiving changes to their world as USD snippets.
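To make that interchange concrete, here is a minimal, illustrative scene fragment in USD's human-readable .usda format. The prim names and values are invented for the example; in practice, a collaborator's edit can travel as a small layer like this that overrides only the attributes it touches:

```usda
#usda 1.0
(
    defaultPrim = "Factory"
)

def Xform "Factory"
{
    def Mesh "ConveyorBelt"
    {
        # A collaborator's change might arrive as a sparse override of
        # just this transform, composed onto the shared stage by Nucleus.
        double3 xformOp:translate = (0, 0.5, 2)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because USD composes layers non-destructively, each user's edits stay small and can be merged, reordered or discarded without rewriting the whole scene.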

The second part of Omniverse is the composition, rendering and animation engine — the simulation of the virtual world.

Simulation of virtual worlds in NVIDIA DRIVE Sim on Omniverse.

Omniverse is a platform built from the ground up to be physically based. Thanks to NVIDIA RTX graphics technologies, it is fully path traced, simulating how each ray of light bounces around a virtual world in real time.

Omniverse simulates physics with NVIDIA PhysX. It simulates materials with NVIDIA MDL, or material definition language.

Built in NVIDIA Omniverse Marbles at Night is a physics-based demo created with dynamic, ray-traced lights and over 100 million polygons.

And Omniverse is fully integrated with NVIDIA AI (which is key to advancing robotics, more on that later).

Omniverse is cloud-native, scales across multiple GPUs, runs on any RTX platform and streams remotely to any device.

The third part is NVIDIA CloudXR, which includes client and server software for streaming extended reality content from OpenVR applications to Android and Windows devices, allowing users to portal into and out of Omniverse.

NVIDIA Omniverse promises to blend real and virtual realities.

You can teleport into Omniverse with virtual reality, and AIs can teleport out of Omniverse with augmented reality.

Metaverses Made Real

NVIDIA released Omniverse to open beta in December, and NVIDIA Omniverse Enterprise in April. Professionals in a wide variety of industries quickly put it to work.

At Foster + Partners, the legendary design and architecture firm that designed Apple’s headquarters and London’s famed 30 St Mary Axe office tower — better known as “the Gherkin” — designers in 14 countries worldwide create buildings together in their Omniverse shared virtual space.

Visual effects pioneer Industrial Light & Magic is testing Omniverse to bring together internal and external tool pipelines from multiple studios. Omniverse lets them collaborate, render final shots in real-time and create massive virtual sets like holodecks.

Multinational networking and telecommunications company Ericsson uses Omniverse to simulate 5G wave propagation in real-time, minimizing multi-path interference in dense city environments.

Ericsson uses Omniverse to do real-time 5G wave propagation simulation in dense city environments.

Infrastructure engineering software company Bentley Systems is using Omniverse to create a suite of applications on the platform. Bentley’s iTwin platform creates a 4D infrastructure digital twin to simulate an infrastructure asset’s construction, then monitor and optimize its performance throughout its lifecycle.

The Metaverse Can Help Humans and Robots Collaborate

These virtual worlds are ideal for training robots.

One of the essential features of NVIDIA Omniverse is that it obeys the laws of physics. Omniverse can simulate particles and fluids, materials and even machines, right down to their springs and cables.

Modeling the natural world in a virtual one is a fundamental capability for robotics.

It allows users to create a virtual world where robots — powered by AI brains that can learn from their real or digital environments — can train.

Once the minds of these robots are trained in the Omniverse, roboticists can load those brains onto an NVIDIA Jetson and connect it to a real robot.

Those robots will come in all sizes and shapes — box movers, pick-and-place arms, forklifts, cars, trucks and even buildings.

In the future, a factory will be a robot, orchestrating many robots inside, building cars that are robots themselves.

How the Metaverse, and NVIDIA Omniverse, Enable Digital Twins

NVIDIA Omniverse provides a description for these shared worlds that people and robots can connect to — and collaborate in — to better work together.

It’s an idea that automaker BMW Group is already putting to work.

The automaker produces more than 2 million cars a year. In its most advanced factory, the company makes a car every minute. And each vehicle is customized differently.

BMW Group is using NVIDIA Omniverse to create a future factory, a perfect “digital twin.” It’s designed entirely in digital and simulated from beginning to end in Omniverse.

The Omniverse-enabled factory can connect to enterprise resource planning systems, simulating the factory’s throughput. It can simulate new plant layouts. It can even become the dashboard for factory employees, who can uplink into a robot to teleoperate it.

The AI and software that run the virtual factory are the same as what will run the physical one. In other words, the virtual and physical factories and their robots will operate in a loop. They’re twins.

No Longer Science Fiction

Omniverse is the “plumbing” on which metaverses can be built.

It’s an open platform with USD universal 3D interchange, connecting design tools and users into a large network. NVIDIA has 12 Omniverse Connectors to major design tools already, with another 40 on the way. The Omniverse Connector SDK sample code, for developers to write their own Connectors, is available for download now.

The most important design tool platforms are signed up. NVIDIA has already enlisted partners from the world’s largest industries — media and entertainment; gaming; architecture, engineering and construction; manufacturing; telecommunications; infrastructure; and automotive.

And the hardware needed to run it is here now.

Computer makers worldwide are building NVIDIA-Certified workstations, notebooks and servers, which all have been validated for running GPU-accelerated workloads with optimum performance, reliability and scale. And starting later this year, Omniverse Enterprise will be available for enterprise license via subscription from the NVIDIA Partner Network.

With NVIDIA Omniverse, teams are able to collaborate in real time, from different places, using different tools, on the same project.

Thanks to NVIDIA Omniverse, the metaverse is no longer science fiction.

Back to the Future

So what’s next?

Humans have been exploiting how we perceive the world for thousands of years, NVIDIA’s Lebaredian points out. We’ve been hacking our senses to construct virtual realities through music, art and literature for millennia.

Next, add interactivity and the ability to collaborate, he says. Better screens, head-mounted displays like the Oculus Quest, and mixed-reality devices like Microsoft’s HoloLens are all steps toward fuller immersion.

All these pieces will evolve. But the most important one is here already: a high-fidelity simulation of our virtual world to feed the display. That’s NVIDIA Omniverse.

To steal a line from science-fiction master William Gibson: the future is already here; it’s just not very evenly distributed.

The metaverse is the means through which we can distribute those experiences more evenly. Brought to life by NVIDIA Omniverse, the metaverse promises to weave humans, AI and robots together in fantastic new worlds.


NVIDIA Makes RTX Technology Accessible to More Professionals

With its powerful real-time ray tracing and AI acceleration capabilities, NVIDIA RTX technology has transformed design and visualization workflows for the most complex tasks, like designing airplanes and automobiles, visual effects in movies and large-scale architectural design.

The new NVIDIA RTX A2000 — our most compact, power-efficient GPU for a wide range of standard and small-form-factor workstations — makes it easier to access RTX from anywhere.

The RTX A2000 is designed for everyday workflows, so professionals can develop photorealistic renderings, build physically accurate simulations and use AI-accelerated tools. With it, artists can create beautiful 3D worlds, architects can design and virtually explore the next generation of smart buildings and homes, and engineers can create energy-efficient and autonomous vehicles that will drive us into the future.

The GPU has 6GB of memory capacity with error correction code (ECC) to maintain data integrity for uncompromised computing accuracy and reliability, which especially benefits the healthcare and financial services fields.

With remote work part of the new normal, simultaneous collaboration with colleagues on projects across the globe is critical. NVIDIA RTX technology powers Omniverse, our collaboration and simulation platform that enables teams to iterate together on a single 3D design in real time while working across different software applications. The A2000 will serve as a portal into this world for millions of designers.

Customer Adoption 

Among the first to tap into the RTX A2000 are Avid, Cuhaci & Peterson and Gilbane Building Company.

“The A2000 from NVIDIA has made our modeling flow faster and more efficient. No longer are we sitting and wasting valuable time for graphics to render, and panning around complex geometry has become smoother,” said Connor Reddington, mechanical engineer and certified SOLIDWORKS professional at Avid Product Development, a Lubrizol Company.

A custom lattice pillow structure for lightweighting of 3D printed parts. Image courtesy of Avid.

“Introducing RT Cores into the NVIDIA RTX A2000 has resulted in impressive rendering speedups for photorealistic visualization compared to the previous generation GPUs,” said Steven Blevins, director of Digital Practice at Cuhaci & Peterson.

“The small form factor and low power usage of the NVIDIA RTX A2000 is extraordinary and ensures fitment in just about any existing workstation chassis,” said Ken Grothman, virtual design and construction manager at Gilbane Building Company.

A building model in Autodesk Revit with point cloud data. Image courtesy of Gilbane Building Company.

Next-Generation RTX Technology

The NVIDIA RTX A2000 is the most powerful low-profile, dual-slot GPU for professionals. It combines the latest-generation RT Cores, Tensor Cores and CUDA cores with 6GB of ECC graphics memory in a compact form factor to fit a wide range of systems.

The NVIDIA RTX A2000 features the latest technologies in the NVIDIA Ampere architecture:

  • Second-Generation RT Cores: Real-time ray tracing for all professional workflows. Up to 5x the rendering performance from the previous generation with RTX on.
  • Third-Generation Tensor Cores: Available in the GPU architecture to enable AI-augmented tools and applications.
  • CUDA Cores: Up to 2x the FP32 throughput of the previous generation for significant increases in graphics and compute workloads.
  • Up to 6GB of GPU Memory: Supports ECC memory, the first time that NVIDIA has enabled ECC memory in its 2000 series GPUs, for error-free computing.
  • PCIe Gen 4: Double the throughput with more than 40 percent bandwidth improvement from the previous generation for accelerating data paths in and out of the GPU.

Availability 

The NVIDIA RTX A2000 desktop GPU will be available in workstations from manufacturers including ASUS, BOXX Technologies, Dell Technologies, HP and Lenovo as well as NVIDIA’s global distribution partners starting in October.

Learn more about NVIDIA at SIGGRAPH.


A Code for the Code: Simulations Obey Laws of Physics with USD

Life in the metaverse is getting more real. 

Starting today, developers can create and share realistic simulations in a standard way. Apple, NVIDIA and Pixar Animation Studios have defined a common approach for expressing physically accurate models in Universal Scene Description (USD), the common language of virtual 3D worlds. 

Pixar released USD and described it in 2016 at SIGGRAPH. It was originally designed so artists could work together, creating virtual characters and environments in a movie with the tools of their choice. 

Fast forward, and USD is now pervasive in animation and special effects. It is spreading to other professions, too, such as architecture, where designers can use its tools to design and test everything from skyscrapers to sports cars and smart cities.

Playing on the Big Screen 

To serve the needs of this expanding community, USD needs to stretch in many directions. The good news is Pixar designed USD to be open and flexible. 

So, it’s fitting that the SIGGRAPH 2021 keynote provides a stage to describe USD’s latest extension. In technical terms, it’s a new schema for rigid-body physics, the math that describes how solid objects behave in the real world. 

For example, when you’re simulating a game where marbles roll down ramps, you want them to respond just as you would expect when they hit each other. To do that, developers need physical details like the weight of the marbles and the smoothness of the ramp. That’s what this new extension supplies. 
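
Concretely, under the new schema those physical details can live right next to the geometry in the same USD file. The following .usda sketch is illustrative only: the applied-schema and attribute names follow the published UsdPhysics proposal, but the prims and values are invented for this example.

```usda
#usda 1.0

def Sphere "Marble" (
    prepend apiSchemas = ["PhysicsRigidBodyAPI", "PhysicsMassAPI", "PhysicsCollisionAPI"]
)
{
    double radius = 0.01
    float physics:mass = 0.005
}

def Material "RampSurface" (
    prepend apiSchemas = ["PhysicsMaterialAPI"]
)
{
    float physics:dynamicFriction = 0.05
    float physics:restitution = 0.6
}
```

Any conforming solver can read these attributes, here a 5-gram marble and a low-friction ramp surface, and animate the scene accordingly.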

USD Keeps Getting Better

The initial HTML standard, circa 1993, defined how web pages used text and graphics. Fifteen years later, HTML5 extended the definition to include video, so any user on any device could watch videos and movies. 

Apple and NVIDIA were both independently working on ways to describe physics in simulations. As members of the SIGGRAPH community, we came together with Pixar to define a single approach as a new addition to USD. 

In the spirit of flexibility, the extension lets developers choose whichever solvers they prefer, since they can all be driven from the same set of USD data. That single, unified dataset is suitable for everything from offline simulation for film to games and augmented reality. 

That’s important because solvers for real-time uses like gaming prioritize speed over accuracy, while architects, for example, want solvers that put accuracy ahead of speed. 

An Advance That Benefits All 

Together the three companies wrote a white paper describing their combined proposal and shared it with the USD community. The reviews are in and it’s a hit. Now the extension is part of the standard USD distribution, freely available for all developers. 

The list of companies that stand to benefit reads like credits for an epic movie. It includes architects, building managers, product designers and manufacturers of all sorts, companies that design games — even cellular providers optimizing layouts of next-generation networks. And, of course, all the vendors that provide the digital tools to do the work. 

“USD is a major force in our industry because it allows for a powerful and consistent representation of complex, 3D scene data across workflows,” said Steve May, Chief Technology Officer at Pixar. 

“Working with NVIDIA and Apple, we have developed a new physics extension that makes USD even more expressive and will have major implications for entertainment and other industries,” he added. 

Making a Metaverse Together 

It’s a big community we aim to serve with NVIDIA Omniverse, a collaboration environment that’s been described as an operating system for creatives or “like Google Docs for 3D graphics.” 

We want to make it easy for any company to create lifelike simulations with the tools of their choice. It’s a goal shared by dozens of organizations now evaluating Omniverse Enterprise, and close to 400 companies and tens of thousands of individual creators who have downloaded Omniverse open beta since its release in December 2020.  

We envision a world of interconnected virtual worlds — a metaverse — where someday anyone can share their life’s work.  

Making that virtual universe real will take a lot of hard work. USD will need to be extended in many dimensions to accommodate the community’s diverse needs. 

A Virtual Invitation 

To get a taste of what’s possible, watch a panel discussion from GTC (free with registration), where 3D experts from nine companies including Pixar, BMW Group, Bentley Systems, Adobe and Foster + Partners talked about the opportunities and challenges ahead.   

We’re happy we could collaborate with engineers and designers at Apple and Pixar on this latest USD extension. We’re already thinking about a sequel for soft-body physics and so much more.  

Together we can build a metaverse where every tool is available for every job. 

For more details, watch a talk on the USD physics extension from NVIDIA’s Adam Moravanszky and attend a USD birds-of-a-feather session hosted by Pixar. 

The post A Code for the Code: Simulations Obey Laws of Physics with USD appeared first on The Official NVIDIA Blog.


On the Air: Creative Technology Elevates Broadcast Workflows for International Sporting Event with NVIDIA Networking

Talk about a signal boost. Creative Technology is tackling 4K and 8K signals, as well as new broadcast workflows, with the latest NVIDIA networking technologies.

The London-based firm is one of the world’s leading suppliers of audio visual equipment for broadcasting and online events. Part of global production company NEP Group, CT helps produce high-quality virtual and live events by providing advanced technologies and equipment, from large-screen displays to content delivery systems.

Before the COVID-19 pandemic hit, CT was looking to enhance the broadcast experience, bringing audiences and content closer together. Already in the process of switching from a baseband serial digital interface (SDI) architecture to more advanced internet protocol (IP)-based technologies, CT was prepared when the pandemic led to increased demand for virtual events.

The company decided to invest in KAIROS, Panasonic’s next-generation IT and IP video processing platform. KAIROS is a software-based, open architecture platform that uses CPU and GPU processing to significantly improve broadcast performance.

CT opted for NVIDIA GPUs to power KAIROS, which uses NVIDIA Rivermax IP streaming acceleration to enable direct data transfers to and from the GPU, leading to enhanced flexibility and increased performance for virtual events.

With plans to use KAIROS for the world’s most recognized sporting event this month, CT is using IP enabled by NVIDIA switches and NVIDIA RTX GPUs. This technology allows CT to easily scale up for larger shows and save time in setting up new productions, while transforming broadcast workflows.

Taking Broadcast Beyond the Standard

With LED screens increasing in resolution, it’s now more common for companies to deal with 4K and 8K signals. CT wanted a powerful solution that could keep up, while also providing better scalability and flexibility to enhance workflows.

When CT first started testing KAIROS, the team planned to use the platform to accommodate a 3G-SDI workflow, which supports the move from 1080/50 interlaced video formats (1080i) to 1080/50 progressive video formats (1080p).

In interlaced scanning, the frame is divided into odd and even lines: only half the frame is shown on screen at a time, and the other half follows 1/50th of a second later. The fields alternate so quickly that viewers perceive the entire frame, but they may also see flicker on screen.

In progressive scans, the entire frame is transmitted simultaneously. All the lines in the frame are shown at once to fill the screen, which reduces flicker. Progressive scans are ideal for digital transmissions and have become the standard for high-definition TV displays.
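
The two schemes are easy to picture in code. Below is a minimal Python sketch of the "weave" step that recombines two interlaced fields into one progressive frame; the function name and the four-line "image" are invented for illustration.

```python
def weave_fields(top_field, bottom_field):
    """Interleave two fields into a full progressive frame."""
    frame = []
    for even_line, odd_line in zip(top_field, bottom_field):
        frame.append(even_line)  # lines 0, 2, 4, ... from the top field
        frame.append(odd_line)   # lines 1, 3, 5, ... from the bottom field
    return frame

# A four-line "image": the top field carries lines 0 and 2,
# the bottom field carries lines 1 and 3.
top = ["line0", "line2"]
bottom = ["line1", "line3"]
print(weave_fields(top, bottom))  # ['line0', 'line1', 'line2', 'line3']
```

In interlaced transmission only one of the two input lists arrives at a time; in progressive transmission the fully woven frame is sent at once.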

But CT also needed to ensure its technology could keep up with any future video workflow advances demanded by clients.

The company has its own servers built on NVIDIA RTX GPUs with ConnectX-6 DX cards, and KAIROS delivers high performance by using the power and flexibility of the GPUs. The CT team no longer has to deal with the painful process of converting 4K and 8K signals to SDI. Instead, it can pass the signals to KAIROS, which can distribute video feeds to projectors or screens regardless of the resolution or format.

“Essentially, what KAIROS did was give us a lot more flexibility,” said Sid Lobb, head of Vision and Integrated Networks at Creative Technology. “There is utter flexibility with what we can use and how we allocate the power that the NVIDIA RTX GPUs provide.”

Switching It Up 

Transitioning from SDI to IP allowed CT to use software for driving all the events. With IP, CT can use a switch instead of cables to connect systems.

“Now, it’s more like connecting computers to each other versus directly connecting cameras to a processor,” said Lobb. “We’re able to use a network to connect the entire production signal path. It’s a whole change to broadcast workflows.”

The latest version of KAIROS enables CT to use the network as a matrix switcher, which allows the team to easily switch from one video or audio source to another. For example, in events that take place in a sports arena, there could be up to 100 PCs capturing and producing different content. During the event, CT could be switching from one PC to another, which would’ve been challenging with traditional architectures. But with IP, CT can easily switch among sources, and scale up or down for shows of different sizes using the same solution.

The team is also experiencing massive time savings when it comes to getting new productions up and running, as the programming of KAIROS is intuitive and efficient. Each virtual event is different, but KAIROS makes it easy for CT to configure input and outputs based on their productions.

The team will use GPU-powered solutions to enhance the experience for future broadcasting and live events.

The post On the Air: Creative Technology Elevates Broadcast Workflows for International Sporting Event with NVIDIA Networking appeared first on The Official NVIDIA Blog.


NVIDIA-Certified Systems Land on the Desktop

Enterprises challenged with running accelerated workloads have an answer: NVIDIA-Certified Systems. Available from nearly 20 global computer makers, these servers have been validated for running a diverse range of accelerated workloads with optimum performance, reliability and scale.

Now NVIDIA-Certified Systems are expanding to the desktop with workstations that undergo the same testing to validate their ability to run GPU-accelerated applications well.

Certification ensures that these systems, available as desktop or laptop models, have a well-balanced design and the correct configurations to maximize performance. GPUs eligible for certification in the workstations include the newest NVIDIA RTX A6000, A5000 and A4000, as well as the RTX 8000 and 6000.

NVIDIA-Certified workstations will join a lineup of over 90 already available systems that range from the highest performance AI servers with the NVIDIA HGX A100 8-GPU, to enterprise-class servers with the NVIDIA A30 Tensor Core GPU for mainstream accelerated data centers, to low-profile, low-power systems designed for the edge with NVIDIA T4 GPUs.

Certified Systems to Accelerate Data Science on CDP

Cloudera Data Platform (CDP) v7.1.6, which went into general availability last week, now takes advantage of NVIDIA-Certified Systems. This latest version adds RAPIDS to accelerate data analytics, ETL and popular data science tools like Apache Spark with NVIDIA GPUs to churn through massive data operations.

Testing has shown that this version of CDP runs up to 10x faster on servers with NVIDIA GPUs vs. non-accelerated servers. To make it easy to get started, NVIDIA and Cloudera recommend two NVIDIA-Certified server configurations that customers can purchase from several vendors:

  • CDP-Ready: For running Apache Spark, a CDP-Ready configuration of NVIDIA-Certified servers with two NVIDIA A30 GPUs per server offers over 5x the performance at less than 50 percent incremental cost relative to modern CPU-only alternatives.
  • AI-Ready: For customers additionally running machine learning or other AI-related applications, the NVIDIA A100 GPU provides even more performance — as well as acceleration on machine learning and AI training.

Data scientists often develop and refine machine learning and deep learning models on workstations to augment data center resources or help minimize cloud-based compute costs. By using an NVIDIA-Certified workstation, they can transition their work to NVIDIA-Certified servers when it’s time for larger scale prototyping and eventually production, without having to port to a different tool or framework.

New White Paper Describes Value of Certification

When it comes to installing GPUs and SmartNICs in a system, choosing the right server or workstation model and correctly configuring the components and firmware are critical to getting the most out of the investment.

With NVIDIA-Certified Systems, NVIDIA and its partners have already done the work of validating that a particular system is capable of running accelerated workloads well, and they’ve figured out the most optimal hardware configuration.

Misconfiguration can lead to poor performance and even inability to function properly or complete tasks. The certification process ensures that issues such as these are surfaced and resolved for each tested system. We’ve described this and more in a new white paper, Accelerate Compute-Intensive Workloads with NVIDIA-Certified Systems.

Our system partners run a suite of more than 25 tests designed by NVIDIA based on our vast experience with compute, graphics and network acceleration. Each of the tests is chosen to exercise the hardware of the system in a unique and thorough manner, so as many potential configuration issues as possible can be exposed. Some of the tests focus on a single aspect of the hardware, while others stress multiple components, both simultaneously as well as in a multi-step workflow.

With NVIDIA-Certified Systems, enterprises can confidently choose performance-optimized hardware to power their accelerated computing workloads — from the desktop to the data center to the edge.

Learn more about NVIDIA-Certified Systems.

The post NVIDIA-Certified Systems Land on the Desktop appeared first on The Official NVIDIA Blog.


Leading Lights: NVIDIA Researchers Showcase Groundbreaking Advancements for Real-Time Graphics

Computer graphics and AI are cornerstones of NVIDIA. Combined, they’re bringing creators closer to the goal of cinema-quality 3D imagery rendered in real time.

At a series of graphics conferences this summer, NVIDIA Research is sharing groundbreaking work in real-time path tracing and content creation, much of it based on cutting-edge AI techniques. These projects are tackling the hardest unsolved problems in graphics with new tools that advance the state of the art in real-time rendering.

One goal is improving the realism of rendered light as it passes through complex materials like fur or fog. Another is helping artists more easily turn their creative visions into lifelike models and scenes.

Presented at this week’s SIGGRAPH 2021 — as well as the recent High-Performance Graphics conference and the Eurographics Symposium on Rendering — these research advancements highlight how NVIDIA RTX GPUs make it possible to further the frontiers of photorealistic real-time graphics.

Rendering photorealistic images in real time requires accurate simulation of light, modeling the same laws that govern light in the physical world. The most effective approach known so far, path tracing, requires massive computational resources but can deliver spectacular imagery.

The NVIDIA RTX platform, with dedicated ray-tracing hardware and high-performance Tensor Cores for efficient evaluation of AI models, is tailor made for this task. Yet there are still situations where creating high-fidelity rendered images remains challenging.

Consider, for one, a tiger prowling through the woods.

Seeing the Light: Real-Time Path Tracing

To make a scene completely realistic, creators must render complex lighting effects such as reflections, shadows and visible haze.

In a forest scene, dappled sunlight filters through the leaves on the trees and grows hazy among the water molecules suspended in the foggy air. Rendering realistic real-time imagery of clouds, dusty surfaces or mist like this was once out of reach. But NVIDIA researchers have developed techniques that often compute the visual effect of these phenomena 10x more efficiently.

The tiger itself is both illuminated by sunlight and shadowed by trees. As it strides through the woods, its reflection is visible in the pond below. Lighting these kinds of rich visuals with both direct and indirect reflections can require calculating thousands of paths for every pixel in the scene.

It’s a task far too resource-hungry to solve in real time. So our research team created a path-sampling algorithm that prioritizes the light paths and reflections most likely to contribute to the final image, rendering images over 100x more quickly than before.
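
The post doesn’t spell out the sampling math, but one core idea behind such path-sampling schemes, keeping a single candidate with probability proportional to its estimated contribution via weighted reservoir sampling, can be sketched briefly. The function name and the per-light "target weight" callback below are illustrative assumptions, not NVIDIA’s actual code.

```python
import random

def pick_light(lights, target_weight, num_candidates=8, rng=random):
    """Weighted reservoir sampling: examine a handful of cheap candidates
    and keep one with probability proportional to its estimated
    contribution, instead of evaluating every light path in the scene."""
    chosen, total = None, 0.0
    for _ in range(num_candidates):
        i = rng.randrange(len(lights))  # draw a cheap uniform candidate
        w = target_weight(lights[i])    # estimate its contribution
        total += w
        if total > 0 and rng.random() < w / total:
            chosen = i                  # keep this candidate in the reservoir
    return chosen
```

Bright, nearby lights end up chosen far more often than dim, distant ones, which is why estimators built on this idea converge with orders of magnitude fewer samples per pixel.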

AI of the Tiger: Neural Radiance Caching

Another group of NVIDIA researchers achieved a breakthrough in global illumination with a new technique named neural radiance caching. This method uses both NVIDIA RT Cores for ray tracing and Tensor Cores for AI acceleration to train a tiny neural network live while rendering a dynamic scene.

The neural network learns how light is distributed throughout the scene. It evaluates over a billion global illumination queries per second when running on an NVIDIA GeForce RTX 3090 GPU, depicting the tiger’s dense fur with rich lighting detail previously unattainable at interactive frame rates.

Seamless Creation of Tough Textures

As rendering algorithms have progressed, it’s crucial that available 3D content keeps up with the complexity and richness those algorithms can render.

NVIDIA researchers are diving into this area by developing a variety of techniques that support content creators in their efforts to model rich and realistic 3D environments. One area of focus is on materials with rich geometric complexity, which can be difficult to simulate using traditional techniques.

The weave of a polo shirt, the texture of a carpet, or blades of grass have features often much smaller than the size of a pixel, making it difficult to efficiently store and render representations of them. NVIDIA researchers are addressing this with NeRF-Tex, an approach that uses neural networks to represent these challenging materials and encode how they respond to lighting.

Seeing the Forest for the Trees

Complex geometric objects also vary in their appearance depending on how close they are to the viewer. A leafy tree is one example: Close up, there’s enormous detail in its branches, leaves and bark. From afar, it may appear to be little more than a green blob.

It would be a waste of time to render detailed bark and leaves on a tree that’s on the other end of the forest in a scene. But when zooming in for a close-up, the model should be as realistic as possible.

This is a classic problem in computer graphics known as level of detail. Artists have often been burdened with this challenge, manually modeling multiple versions of each 3D object to enable efficient rendering.

NVIDIA researchers have developed a new approach that generates simplified models automatically based on an inverse rendering method. With it, creators can generate simplified models that are optimized to appear indistinguishable from the originals, but with drastic reductions in their geometric complexity.
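
To make the level-of-detail decision concrete, here is a small Python sketch of how a renderer might pick a detail level from an object’s projected size on screen. The thresholds, LOD names and formula are illustrative assumptions for a pinhole camera, not the inverse-rendering method itself.

```python
import math

def choose_lod(object_radius, distance, fov_y=math.radians(60),
               screen_height_px=1080,
               lods=(("high", 200), ("medium", 50), ("low", 0))):
    """Pick a level of detail from an object's projected height in pixels.
    Each LOD entry pairs a name with its minimum on-screen pixel height."""
    # Approximate projected height in pixels of a sphere of object_radius
    # seen from the given distance through a vertical field of view fov_y.
    projected = (2 * object_radius / distance) \
        / (2 * math.tan(fov_y / 2)) * screen_height_px
    for name, min_pixels in lods:
        if projected >= min_pixels:
            return name
    return lods[-1][0]

print(choose_lod(1.0, 2.0))    # close up: "high"
print(choose_lod(1.0, 200.0))  # far away: "low"
```

An automatically generated simplified model would then be substituted whenever the coarser levels are selected, saving geometry processing without visibly changing the image.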

NVIDIA at SIGGRAPH 2021 

More than 200 scientists around the globe make up the NVIDIA Research team, focusing on AI, computer graphics, computer vision, self-driving cars, robotics and more. At SIGGRAPH, which runs from Aug. 9-13, our researchers are presenting a number of papers on these topics.

Don’t miss NVIDIA’s special address at SIGGRAPH on Aug. 10 at 8 a.m. Pacific, revealing our latest technology, demos and more. Catch our Real Time Live demo on Aug. 10 at 4:30 p.m. Pacific to see how NVIDIA Research creates AI-driven digital avatars.

We’re also discussing esports as a real-time graphics challenge in a panel on Aug. 11. An interactive esports demo is available on demand through the SIGGRAPH Emerging Technologies program.

For more, check out the full lineup of NVIDIA events at SIGGRAPH 2021.

The post Leading Lights: NVIDIA Researchers Showcase Groundbreaking Advancements for Real-Time Graphics appeared first on The Official NVIDIA Blog.


Time to Embark: Autonomous Trucking Startup Develops Universal Platform on NVIDIA DRIVE

Autonomous trucking startup Embark is planning for universal autonomy of commercial semi-trucks, developing one AI platform that fits all.

The company announced today that it will use NVIDIA DRIVE to develop its Embark Universal Interface (EUI), a manufacturer-agnostic platform that includes the compute and multimodal sensors necessary for autonomous trucks. This flexible approach, combined with the high performance of NVIDIA DRIVE, leads to an easily scalable solution for safer, more efficient delivery and logistics.

The EUI is purpose-built to run Embark Driver autonomous driving software for a comprehensive self-driving trucking system.

Most trucking carriers don’t just use one model of vehicle in their fleets. This variety can even extend to vehicles from different manufacturers to haul a wide range of cargo around the world.

The Embark platform will be capable of integrating into trucks from any of the four major truck manufacturers in the U.S. — PACCAR, Volvo, International and Freightliner. By developing a platform that can be retrofitted to such a wide range of vehicles, Embark is helping the trucking industry realize the benefits of AI-powered driving without having to wait for purpose-built vehicles.

And with NVIDIA DRIVE at its core, the platform leverages the best in high-performance AI compute for robust self-driving capabilities.

Scaling Safety

Autonomous vehicles are always learning, taking in vast amounts of data to navigate the unpredictability of the real world, from highways to crowded ports. This rapid processing requires centralized, high-performance AI compute.

The NVIDIA DRIVE platform is the first scalable AI hardware and software platform to enable the production of automated and self-driving vehicles. It combines deep learning, sensor fusion and surround vision for a safe driving experience.

This end-to-end open platform allows for one development investment across an entire fleet, from level 2+ systems all the way to level 5 fully autonomous vehicles. In addition to high-performance, scalable compute, the EUI will have all the necessary functional safety certification to operate without a driver on public roads.

“We need an enormous amount of compute horsepower in our trucks,” said Ajith Dasari, head of Hardware Platform at Embark. “NVIDIA DRIVE meets this need head-on, and allows us to outfit our partners and customers with the best self-driving hardware and software currently on the market.”

A Growing Ecosystem

Embark is already working with leading trucking companies and plans to continue to extend its software and hardware technology.

In April, the company unveiled partnerships with Werner Enterprises, Mesilla Valley Transportation and Bison Transport. It’s also working with shippers including Anheuser Busch InBev and HP, Inc.

Embark plans to go public, having announced a SPAC, or special purpose acquisition company, agreement in June, along with a partnership with Knight-Swift Transportation. The autonomous trucking company will join the ranks of NVIDIA DRIVE ecosystem members who have collectively raised more than $8 billion via public listings.

And just like the trucks running on its Embark Universal Interface, the company is tapping the power of NVIDIA DRIVE to keep traveling further and more intelligently.

The post Time to Embark: Autonomous Trucking Startup Develops Universal Platform on NVIDIA DRIVE appeared first on The Official NVIDIA Blog.
