July NVIDIA Studio Driver Improves Performance for Chaos V-Ray 6 for 3ds Max

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Creativity heats up In the NVIDIA Studio as the July NVIDIA Studio Driver, available now, accelerates the recent Chaos V-Ray 6 for 3ds Max release.

Plus, this week’s In the NVIDIA Studio 3D artist, Brian Lai, showcases his development process for Afternoon Coffee and Waffle, a piece that went from concept to completion faster with NVIDIA RTX acceleration in Chaos V-Ray rendering software.

Chaos V-Ray Powered by NVIDIA Studio Offers a July to Remember

Visualization and computer graphics software company Chaos this month released an update to the all-in-one photorealistic rendering software, V-Ray 6 for 3ds Max. The July Studio Driver gives creators on NVIDIA GPUs the best experience.

Add procedural clouds to create beautiful custom skies with the latest V-Ray 6 release.

This release is a major upgrade that gives artists powerful new world-building and workflow tools to quickly distribute 3D objects, generate detailed 3D surfaces and add procedural clouds to create beautiful custom skies.

V-Ray GPU software — which already benefits from high-performance final-frame rendering with RTX-accelerated ray tracing and accelerated interactive rendering with AI-powered denoising — gets a performance bump, in addition to the new features.

Catch some V-Rays with GPU Light Cache calculations that average 2x faster than in V-Ray 5.

3D artists benefit from speedups in multiple ways with V-Ray 6. GPU improvements include support for nearly all new V-Ray 6 features, faster Light Cache calculations and a new Device Selector for assigning rendering devices to tasks. Letting the GPU handle the AI denoiser nearly doubles rendering performance.

Adaptive dome-light rendering clocks in at up to 3x faster than V-Ray 5.

Additional key new features include:

  • Chaos Scatter — easily populate scenes with millions of 3D objects to produce natural-looking landscapes and environments without adjusting objects by hand.
  • Procedural Clouds — simulate a variety of cloud types and weather conditions, from partly cloudy to overcast.
  • Improved trace-depth workflow — simplifies the setting of trace-depth overrides for reflections and refractions.
  • Shading improvements — a new energy-preserving GGX shader, a thin-film rollout in the V-Ray material for bubbles and fabrics, and improved V-Ray dirt functionality.

And there’s much more to explore. Creators can join a free V-Ray 6 for 3ds Max webinar on Thursday, July 28, to see how the new features are already helping 3D artists create fantastic visuals.

Sweet, Rendery Waffles

This week In the NVIDIA Studio, Brian Lai, a computer graphics craftsman, details the sweet, buttery-smooth creative process for Afternoon Coffee and Waffle.

Lai enjoyed the sweet satisfaction of creating with a GeForce RTX 3090 GPU and NVIDIA Studio benefits.

Lai’s journey into 3D art started at an early age, inspired by his photographer father, for whom Lai worked until being accepted into Malaysia’s top art college, The One Academy. He finds inspiration in real-world environments, observing what’s around him.

“I was always obsessed with optics and materials, so I just look at things around me, and then I want to replicate them into 3D form,” Lai said. “It’s satisfying to recreate a thing or environment to confuse my audience about whether it’s a real or 3D rendered image.”

It’s no wonder that Afternoon Coffee and Waffle takes on a picturesque quality, like a social media shot showing off a food adventure.

Before Lai turned on RTX.

After finding a reference shot online, Lai started his creative process with basic 3D model blocking, which helped him set a direction for the final image and find a good focal length and position for the camera. He then refined each 3D model into a high-resolution version and textured each separately, which let him focus on the props’ details.

Lai called his experience creating with a GeForce RTX 3090 “butter smooth throughout the process.”

RTX-accelerated ray tracing happens lightning fast in V-Ray GPU with AI-powered denoising. Lai prefers V-Ray “mainly because it utilizes the power from RT and Tensor Cores on the graphics card,” he said. “I don’t really feel limited by the software.”

The anticipation of completion is so syrupy thick, one can almost taste it.

The artist also used his GPU for simulation in Autodesk Maya to get the best-looking fabrics possible, noting that “texturing in 4K preview is so satisfying.” He then finalized the models with normal map folds in Adobe Substance Painter.

Once all the models were ready, Lai gathered everything into one scene. In the above tutorial, he shows how each shader was constructed in Autodesk Maya’s Hypershade window.

The very last — and creatively infinite — step was look development. Here, Lai had to rein in his tinkering, as he “can do endless improvement in this stage,” he said. Ultimately, he reached the level of realism he was aiming for and called the piece complete.

3D artist and CG craftsman Brian Lai.

Find more work from Lai, who first made his name with invert art, or negative drawing, on his Instagram.

Join the #ExtendtheOmniverse contest, running through Friday, Aug. 19. Perform something akin to magic by making your own NVIDIA Omniverse Extension for a chance to win an NVIDIA RTX GPU. Winners will be announced in September at GTC.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


Digital Sculptor Does Heavy Lifting With Lightweight Mobile Workstation

As a professional digital sculptor, Marlon Nuñez is on a mission to make learning 3D art skills easier, smoother and more fun for all. And with the help of an NVIDIA RTX-powered Lenovo mobile workstation, he takes his 3D projects to the next level, wherever he goes.

Nuñez is the art director and co-founder of Art Heroes, a 3D art academy. Based in Spain, Nuñez specializes in creating digital humans and stylized 3D characters, a complex feat. But he tackles his demanding creative workflows from anywhere, thanks to his Lenovo ThinkPad P1 powered by the NVIDIA RTX A5000 Laptop GPU.

The speed and performance of RTX-powered technology enable Nuñez to create stunning characters in real time.

“As an artist, render times and look development are where you spend most of your time,” said Nuñez. “NVIDIA RTX allows you to work with ray tracing on, providing artists with the option to make these creative processes faster and easier.”

Powerful Performance on the Go

Nuñez says there are three main benefits he has experienced with his RTX-powered mobile workstation. First, it’s light — the Lenovo ThinkPad P1 packs the power of the ultra-high-end NVIDIA RTX A5000 laptop GPU into a thin chassis that only weighs around four pounds. Nuñez said he can easily travel with his portable workstation — he doesn’t even need a bag to carry it.

Second, the NVIDIA RTX GPU supports intense real-time ray tracing, which allows Nuñez to make photorealistic graphics and hyper-accurate designs. It also helps him tackle challenging tasks and maintain multiple workflows. From multitasking with several apps to rendering on the fly, RTX technology helps Nuñez easily keep up with creative tasks.

And lastly, the Lenovo ThinkPad P1 has a color-calibrated screen. Nuñez finds this preset feature particularly helpful, as it lets him see vibrant colors in his designs without having to worry about screen reflections.

All of these benefits make the ThinkPad P1 an ideal workstation for any scenario, the artist said. With the accelerated workflows it enables, as well as the ability to see his designs running in real time, Nuñez can finalize his 3D character designs faster than ever.

Image courtesy of Marlon Nuñez.

NVIDIA RTX Accelerates Creative Development

Nuñez builds extremely detailed 3D scenes and characters, which means rendering and look development take a significant amount of time. RTX GPUs provide ray-tracing capabilities that let Nuñez speed up his overall development process.

“I decided to test the NVIDIA RTX on my new Lenovo ThinkPad P1, and I was pretty shocked at how well it performed inside Unreal Engine 5 with ray tracing enabled,” said Nuñez. “I created the layout, played with the alembic hair and shaders, and used the sequencer on the scene — it was very responsive all the time.”

Ray tracing also gives artists extreme precision with lifelike lighting. Because ray-tracing technology renders light behavior in a physically accurate way, Nuñez doesn’t have to manually adjust render settings or build complex lighting setups.

Nuñez believes real-time ray tracing is already making a big difference across industries, especially for virtual productions and game development. With the help of an NVIDIA RTX GPU on a mobile workstation, creators can perform complex tasks in less time, from any location.

Learn more about NVIDIA RTX Laptop GPUs and watch Marlon Nuñez talk about his workflow below:


Shifting Into High Gear: Lunit, Maker of FDA-Cleared AI for Cancer Analysis, Goes Public in Seoul

South Korean startup Lunit, developer of two FDA-cleared AI models for healthcare, went public this week on the country’s Kosdaq stock market.

The move marks the maturity of the Seoul-based company — which was founded in 2013 and has for years been part of the NVIDIA Inception program that nurtures cutting-edge startups.

Lunit’s AI software for chest X-rays and mammograms is used in 600 healthcare sites across 40 countries. In its home market alone, around 4 million chest X-rays a year are analyzed by Lunit AI models.

Lunit has partnered with GE Healthcare, Fujifilm, Philips and Guardant Health to deploy its AI products. Last year, it achieved FDA clearance for two AI tools: one that analyzes mammograms for signs of breast cancer, and another that triages critical findings in chest X-rays. It’s also received the CE mark in Europe for these, as well as a third model that analyzes tumors in cancer tissue samples.

“By going public, which is just one step in our long journey, I strongly believe that we will succeed and accomplish our mission to conquer cancer through AI,” said Brandon Suh, CEO of Lunit.

Lunit raised $60 million in venture capital funding late last year, and its current market cap is some $320 million, based on its latest closing price. Following its recent regulatory approvals, the startup is expanding its presence in the U.S. and the European Union. It’s also developing additional AI models for 3D mammography.

Forging Partnerships to Deploy AI for Radiology, Oncology

Lunit has four AI products to help radiologists and pathologists detect cancer and deliver care:

  • INSIGHT CXR: Trained on a dataset of 3.5 million cases, this tool detects 10 of the most common findings in chest X-rays with 97-99% accuracy.
  • INSIGHT MMG: This product reduces the chance that physicians overlook breast cancer in screening mammograms by 50%.
  • SCOPE IO: Demonstrating 94% accuracy, this AI helps identify 50% more patients eligible for immunotherapy by analyzing tissue slide images of more than 15 types of cancer, including lung, breast and colorectal cancer.
  • SCOPE PD-L1: Trained on more than 1 million annotated cell images, the tool helps accurately quantify expression levels of PD-L1, a protein that influences immune response.

GE Healthcare made eight AI algorithms from INSIGHT CXR available through its Thoracic Care Suite to flag abnormalities in lung X-rays, including pneumonia, tuberculosis and lung nodules.

Fujifilm incorporated INSIGHT CXR into its AI-powered product to analyze chest X-rays. Lunit AI connects to Fujifilm’s X-ray devices and PACS imaging system, and is already used in more than 130 sites across Japan to detect chest nodules, collapsed lungs, and fluid or other foreign substances in the lungs.

Philips, too, is adopting INSIGHT CXR, making the software accessible to users of its diagnostic X-ray solutions. And Guardant Health, a liquid biopsy company, made a $26 million strategic investment in Lunit to support the company’s innovation in precision oncology through the Lunit SCOPE tissue analysis products.

Accelerating Insights With NVIDIA AI

Lunit develops its AI models using various NVIDIA Tensor Core GPUs, including NVIDIA A100 GPUs, in the cloud. Its customers can deploy Lunit’s AI with an NVIDIA GPU-powered server on premises or in the cloud — or within a medical imaging device using the NVIDIA Jetson edge AI platform.

The company also uses NVIDIA TensorRT software to optimize its trained AI models for real-world deployment.
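Lunit hasn’t detailed this pipeline publicly, but a typical TensorRT flow is to export the trained model to ONNX, then build a hardware-tuned engine from it. Below is a minimal sketch of that step; the file names and the FP16 precision choice are assumptions for illustration, not Lunit’s actual settings.

```python
# Minimal sketch: build a TensorRT engine from an ONNX export.
# "model.onnx" and the FP16 flag are illustrative assumptions,
# not details of Lunit's actual pipeline.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # run in mixed precision on Tensor Cores

engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:   # deployable, GPU-specific engine
    f.write(engine_bytes)
```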

“The goal here is to optimize our AI in actual user settings — for the specific NVIDIA GPUs that operate the AI,” said Donggeun Yoo, chief of research at Lunit.

Over the years, Lunit has presented its work at NVIDIA GTC and as an NVIDIA Inception member at the prestigious RSNA conference for radiology.

“It was very helpful for us to build credibility as a startup,” said Yoo. “I believe joining Inception helped trigger the bigger acknowledgements that followed from the healthcare industry.”

Join the NVIDIA Inception community of over 10,000 technology startups, and register for NVIDIA GTC, running online Sept. 19-22, to hear more from leaders in healthcare AI.

Subscribe to NVIDIA healthcare news.


Get Battle Ready With New GeForce NOW Fortnite Reward

<Incoming Transmission> Epic Games is bringing a new Fortnite reward to GeForce NOW, available to all members. Drop from the Battle Bus in Fortnite on GeForce NOW between today and Thursday, Aug. 4, to earn “The Dish-stroyer Pickaxe” in-game for free.

<Transmission continues> Members can earn this item by streaming Fortnite on GeForce NOW on their PCs, Macs, Chromebooks, SHIELD TVs and with an optimized touch experience on iOS Safari and Android mobile devices. Thanks to the power of Epic and GeForce servers, all GeForce NOW members can take the action wherever they go.

Plus, nine new titles arrive on GeForce NOW this week, joining Fortnite and 1,300+ other games streaming at GeForce quality.

<bZZZt bZZZt> Whoops, sorry there, almost lost the signal. It’s coming through loud and clear now for a jam-packed GFN Thursday.

Dish It Out With This In-Game Reward

Bring home the big wins with the free “Dish-stroyer Pickaxe,” a reward available to GeForce NOW members who stream Fortnite any time between today at noon Eastern and Thursday, Aug. 4, at 11:59 p.m. Eastern. Rewards will appear in accounts starting Thursday, Aug. 11. Check out this FAQ from Epic for more details on how to link your Epic account to GFN.

Fortnite Dishtroyer Reward on GeForce NOW
The look on people’s faces when they saw this in-game item was priceless. One could say the reception was incredible…

Fortnite fans can try out GeForce NOW for free to obtain this reward, and play Fortnite across all compatible GeForce NOW devices, including on mobile with intuitive touch controls, Windows PC, macOS, iOS Safari, Android phones and tablets, Android TV, SHIELD TV, 2022 Samsung TVs and select LG TV models.

All members are eligible for this in-game reward, regardless of membership tier. For members with an RTX 3080 membership, taking out opponents with “The Dish-stroyer” will feel even more victorious — with ultra-low latency, eight-hour gaming sessions and streaming in 4K resolution at 60 frames per second, or 1440p at 120 FPS on the PC and Mac apps.

With 120 FPS streaming now broadly available on 120Hz Android devices, RTX 3080 members can stream Fortnite at higher frame rates to more phones and tablets for an even more responsive experience.

Keep the Victories Rolling

Endling Extinction is Forever on GeForce NOW
Who let the fox out? Play as a mother fox and defend her three cubs in this eco-conscious adventure.

This week also adds nine new games, including 3D platformer Hell Pie. GFN members can now see “Nate the demon” and “Nugget the angel” in all their fearsome glory, powered by ray-tracing technology for more vibrant gameplay. Check out the other titles now available to stream:

Before we go, we’ve got one last message to transmit that comes with a challenge. Let us know your response on Twitter or in the comments.


Researchers Use GPUs to Give Earbud Users a ‘Mute Button’ for Background Noise

Thanks to earbuds, you can take calls anywhere while doing anything. The problem: those on the other end of the call hear it all, too, from your roommate’s vacuum cleaner to background conversations at the cafe you’re working from.

Now, work by a trio of graduate students at the University of Washington who spent the pandemic cooped up together in a noisy apartment, lets those on the other end of the call hear just you — rather than all the stuff going on around you.

Users found that the system, dubbed “ClearBuds” — presented last month at the ACM International Conference on Mobile Systems, Applications, and Services — suppressed background noise much better than a commercially available alternative.

“You’re removing your audio background the same way you can remove your visual background on a video call,” explained Vivek Jayaram, a doctoral student in the Paul G. Allen School of Computer Science & Engineering.

Outlined in a paper co-authored by the three roommates, all computer science and engineering graduate students at the University of Washington — Maruchi Kim, Ishan Chatterjee, and Jayaram — ClearBuds are different from other wireless earbuds in two big ways.

The ClearBuds hardware (round disk) in front of the 3D printed earbud enclosures. Credit: Raymond Smith, University of Washington

First, ClearBuds use two microphones: one in each earbud.

While most earbuds use two microphones on the same earbud, ClearBuds uses one microphone from each earbud, creating two audio streams.

This creates higher spatial resolution for the system to better separate sounds coming from different directions, Kim explained. In other words, it makes it easier for the system to pick out the earbud wearer’s voice.

Second, the team created a neural network algorithm that can run on a mobile phone to process the audio streams to identify which sounds should be enhanced and which should be suppressed.

The researchers relied on two separate neural networks to do this.

The first neural network suppresses everything that isn’t a human voice.

The second enhances the speaker’s voice. The speaker can be identified because it’s coming from microphones in both earbuds at the same time.

Together, they effectively mask background noise and ensure the earbud wearer is heard loud and clear.
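The paper’s actual networks are more sophisticated, but a toy spectral-masking pipeline makes the two-stage split concrete. Everything below, from layer sizes to how the two synchronized channels are fused, is invented for illustration; see the paper for the real architectures.

```python
# Toy two-stage voice isolation in the spirit of ClearBuds.
# Architectures, sizes and the dual-channel fusion are invented
# for illustration, not taken from the paper.
import torch
import torch.nn as nn

class VoiceMask(nn.Module):
    """Stage 1: predict a mask that suppresses everything that isn't voice."""
    def __init__(self, bins=257):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(bins, 256), nn.ReLU(),
                                 nn.Linear(256, bins), nn.Sigmoid())
    def forward(self, mag):          # mag: (frames, bins)
        return self.net(mag)

class SpeakerEnhance(nn.Module):
    """Stage 2: enhance the wearer's voice using both earbud channels.
    The wearer is the source that shows up in both channels at once."""
    def __init__(self, bins=257):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * bins, 256), nn.ReLU(),
                                 nn.Linear(256, bins), nn.Sigmoid())
    def forward(self, left, right):  # two synchronized spectrogram streams
        return self.net(torch.cat([left, right], dim=-1))

def isolate(left_wav, right_wav, stage1, stage2, n_fft=512):
    """left_wav/right_wav: float waveforms from the two earbuds."""
    win = torch.hann_window(n_fft)
    L = torch.stft(left_wav, n_fft, window=win, return_complex=True).T
    R = torch.stft(right_wav, n_fft, window=win, return_complex=True).T
    Lm, Rm = L.abs(), R.abs()
    # Stage 1: strip non-voice energy from each channel.
    Lm, Rm = Lm * stage1(Lm), Rm * stage1(Rm)
    # Stage 2: keep only the voice present in both channels.
    mask = stage2(Lm, Rm)
    out = (L * mask.to(L.dtype)).T
    return torch.istft(out, n_fft, window=win, length=left_wav.shape[-1])
```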

ClearBuds isolate a user’s voice from background noise by performing voice separation using a pair of wireless, synchronized earbuds. Source: Maruchi Kim, University of Washington

While the software the researchers created was lightweight enough to run on a mobile device, they relied on an NVIDIA TITAN desktop GPU to train the neural networks. They used both synthetic audio samples and real audio. Training took less than a day.
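Synthetic samples for this kind of task are typically made by mixing clean speech with background noise at a randomly chosen signal-to-noise ratio, so the clean recording doubles as the training target. A minimal sketch, using random stand-ins in place of real recordings (the SNR range is an assumption, not the paper’s recipe):

```python
# Minimal sketch of synthetic training-pair generation: mix clean
# speech with noise at a random SNR. Signals and SNR range are
# illustrative stand-ins, not the paper's actual recipe.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise ratio equals snr_db."""
    noise = np.resize(noise, speech.shape)           # loop/trim noise to fit
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise                    # noisy input; target = speech

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a clean recording
noise = rng.standard_normal(48000)    # stand-in for background noise
noisy = mix_at_snr(speech, noise, snr_db=rng.uniform(0, 20))
```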

And the results, users reported, were dramatically better than commercially available earbuds, results that are winning recognition industrywide.

The team took second place for best paper at last month’s ACM MobiSys 2022 conference. In addition to Kim, Chatterjee and Jayaram, the paper’s co-authors included Ira Kemelmacher-Shlizerman, an associate professor at the Allen School; Shwetak Patel, a professor in both the Allen School and the electrical and computer engineering department; and Shyam Gollakota and Steven Seitz, both professors in the Allen School.

Read the full paper here: https://dl.acm.org/doi/10.1145/3498361.3538933

To be sure, the system outlined in the paper can’t be adopted instantly. While many earbuds have two microphones per earbud, they only stream audio from one earbud. Industry standards are just catching up to the idea of processing multiple audio streams from earbuds.

Nevertheless, the researchers are hopeful their work, which is open source, will inspire others to couple neural networks and microphones to provide better quality audio calls.

The ideas could also be useful for isolating and enhancing conversations over smart speakers, harnessing them as ad hoc microphone arrays, Kim said, as well as for tracking robot locations or supporting search-and-rescue missions.

Sounds good to us.

Featured image credit: Raymond Smith, University of Washington


Lucid Motors’ Mike Bell on Software-Defined Innovation for the Luxury EV Brand

AI and electric vehicle technology breakthroughs are transforming the automotive industry. These developments pave the way for new innovators, attracting technical prowess and design philosophies from Silicon Valley.

Mike Bell, senior vice president of digital at Lucid Motors, sees continuous innovation coupled with over-the-air updates as key to designing sustainable, award-winning intelligent vehicles that provide seamless automated driving experiences.

NVIDIA’s Katie Burke Washabaugh spoke with Bell on the latest AI Podcast episode, covering what it takes to stay ahead in the software-defined vehicle space.

Bell touched on future technology and its implications for the mass adoption of sustainable, AI-powered EVs — as well as what Lucid’s Silicon Valley roots bring to the intersection of innovation and transportation.



You Might Also Like

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive
Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans
Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, a neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.


Living on the Edge: New Features for NVIDIA Fleet Command Deliver All-in-One Edge AI Management, Maintenance for Enterprises

NVIDIA Fleet Command — a cloud service for deploying, managing and scaling AI applications at the edge — today introduced new features that enhance the seamless management of edge AI deployments around the world.

Given the scale of edge AI deployments, organizations can have thousands of independent edge locations that must be managed by IT teams — sometimes in far-flung places like oil rigs, weather gauges, distributed retail stores or industrial facilities.

NVIDIA Fleet Command offers a simple, managed platform for container orchestration that makes it easy to provision and deploy AI applications and systems at thousands of distributed environments, all from a single cloud-based console.

But deployment is just the first step in managing AI applications at the edge. Optimizing these applications is a continuous process that involves applying patches, deploying new applications and rebooting edge systems.

To make these workflows seamless in a managed environment, Fleet Command now offers advanced remote management, multi-instance GPU provisioning and additional integrations with tools from industry collaborators.

Advanced Remote Management 

IT administrators can now access systems and applications with sophisticated security features. Remote management on Fleet Command offers access controls and timed sessions, eliminating vulnerabilities that come with traditional VPN connections. Administrators can securely monitor activity and troubleshoot issues at remote edge locations from the comfort of their offices.

Edge environments are extremely dynamic — which means administrators responsible for edge AI deployments need to be highly nimble to keep up with rapid changes and ensure little deployment downtime. This makes remote management a critical feature for every edge AI deployment.

Check out a complete walkthrough of the new remote management features and how they can be used to help administrators maintain and optimize even the largest edge deployments.

Multi-Instance GPU Provisioning 

Multi-Instance GPU, or MIG, partitions an NVIDIA GPU into several independent instances. MIG is now available on Fleet Command, letting administrators easily assign applications to each instance from the Fleet Command user interface. By allowing organizations to run multiple AI applications on the same GPU, MIG lets organizations right-size their deployments and get the most out of their edge infrastructure.
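Fleet Command drives this from its cloud console, but the underlying MIG mechanics can be seen with plain nvidia-smi on a MIG-capable GPU. Here is a rough sketch wrapped in Python; the profile ID shown (9, i.e. 3g.20gb) applies to an NVIDIA A100 40GB and varies by GPU model:

```python
# Sketch of the MIG partitioning that Fleet Command's UI manages for you.
# Requires root and a MIG-capable GPU; profile IDs vary by GPU model.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvidia-smi", "-i", "0", "-mig", "1"])   # enable MIG mode on GPU 0
run(["nvidia-smi", "mig", "-lgip"])           # list available instance profiles
# Create two 3g.20gb GPU instances, each with a default compute instance (-C).
run(["nvidia-smi", "mig", "-cgi", "9,9", "-C"])
run(["nvidia-smi", "-L"])                     # shows MIG device UUIDs; pin an app
                                              # to one via CUDA_VISIBLE_DEVICES
```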

Learn more about how administrators can use MIG in Fleet Command to better optimize edge resources to scale new workloads with ease.

Working Together to Expand AI

New Fleet Command collaborations are also helping enterprises create a seamless workflow, from development to deployment at the edge.

Domino Data Lab provides an enterprise MLOps platform that allows data scientists to collaboratively develop, deploy and monitor AI models at scale using their preferred tools, languages and infrastructure. The Domino platform’s integration with Fleet Command gives data science and IT teams a single system of record and consistent workflow with which to manage models deployed to edge locations.

Milestone Systems, a leading provider of video management systems and NVIDIA Metropolis elite partner, created AI Bridge, an application programming interface gateway that makes it easy to give AI applications access to consolidated video feeds from dozens of camera streams. Now integrated with Fleet Command, Milestone AI Bridge can be easily deployed to any edge location.

IronYun, an NVIDIA Metropolis elite partner and top-tier member of the NVIDIA Partner Network, applies advanced AI, evolved over multiple generations, to security, safety and operational applications worldwide through its Vaidio AI platform. Vaidio is an open platform that works with any IP camera and integrates out of the box with dozens of market-leading video management systems. It can be deployed on premises, in the cloud, at the edge or in hybrid environments, and scales from a single camera to thousands. Fleet Command makes it easier to deploy Vaidio AI at the edge and simplifies management at scale.

With these new features and expanded collaborations, Fleet Command ensures that the day-to-day process of maintaining, monitoring and optimizing edge deployments is straightforward and painless.

Test Drive Fleet Command

To try these features on Fleet Command, check out NVIDIA LaunchPad for free.

LaunchPad provides immediate, short-term access to a Fleet Command instance to easily deploy and monitor real applications on real servers using hands-on labs that walk users through the entire process — from infrastructure provisioning and optimization to application deployment for use cases like deploying vision AI at the edge of a network.


CORSAIR Integrates NVIDIA Broadcast’s Audio, Video AI Features in iCUE and Elgato Software This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Technology company CORSAIR and streaming sensation BigCheeseKIT step In the NVIDIA Studio this week.

A leader in high-performance gear and systems for gamers, content creators and PC enthusiasts, CORSAIR has integrated NVIDIA Broadcast technologies into its hardware and iCUE software. Similar AI enhancements have also been added to Elgato’s audio and video software, Wave Link and Camera Hub.

Powerful Broadcast AI audio and video effects transform content creation stations into home studios.

Creators and gamers with GeForce RTX GPUs can benefit from NVIDIA Broadcast’s AI enhancements to CORSAIR and Elgato microphones and cameras, elevating their live streams, voice chats and video conference calls.

Plus, entertainer Jakeem Johnson, better known by his Twitch name BigCheeseKIT, demonstrates how a GeForce RTX 3080 GPU elevates his thrilling streams with AI-powered benefits.

Advanced AI Audio

The NVIDIA Broadcast integration brings AI-powered noise removal and room echo removal to CORSAIR iCUE and Elgato Wave Link, unlocking new levels of clarity and sharpness for an exceptional audio experience.

 

Noise removal in Elgato Wave Link is built as a Virtual Studio Technology (VST) plug-in, enabling users to apply the effect per audio channel, and is supported in compatible creative apps such as Adobe Premiere Pro, Adobe Audition and Blackmagic Design DaVinci Resolve.

Running on Tensor Cores on GeForce RTX GPUs, the new features use AI to identify users’ voices, separating them from other ambient sounds. This results in noise cancellation that dramatically improves audio and video call quality. Background noises from fans, chatter, pets and more disappear, leaving the speaker’s voice crystal clear.

Broadcast also cancels room echoes, providing dampened, studio-quality acoustics in a wide range of environments without the need to sound-proof walls or ceilings.

CORSAIR’s integration uses a new version of these effects that can also separate out body sounds. The upgrade enables popular capabilities, like muting the friend who forgets to turn on push-to-talk during a video call while chewing their lunch.

AI audio effects are ready to be integrated into nearly the entire lineup of CORSAIR headsets.

These Broadcast features are available on nearly the entire lineup of CORSAIR headsets. Users seeking a premium audio experience should consider headsets like the VOID RGB ELITE WIRELESS with 7.1 surround sound, HS80 RGB WIRELESS with spatial audio or the VIRTUOSO RGB WIRELESS SE.

The Elgato Wave XLR unlocks AI audio effects.

For Elgato creators, noise removal can now be enabled in the Wave Link app. This makes AI-enhanced audio possible for Wave mic users, plus XLR microphones thanks to the Elgato Wave XLR.

Unrestr(AI)ned Video Effects

NVIDIA Broadcast’s video technologies integrated into the Elgato Camera Hub include the virtual background feature.

The ‘background replacement’ AI video feature.

AI-enhanced filters powered by GeForce RTX GPUs offer better edge detection to produce a high-quality visual — much like one produced by a DSLR camera — using just a webcam. Supported effects include background blur and replacing the background with a video or still image, eliminating the need for a greenscreen.
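Broadcast’s models aren’t public, but the general recipe, segmenting the person and compositing the result over a new background, can be sketched with an off-the-shelf segmenter. MediaPipe’s selfie segmenter stands in here purely for illustration; the background image and camera index are assumptions:

```python
# Illustrative background replacement: person segmentation + composite.
# MediaPipe's selfie segmenter is a stand-in for Broadcast's (non-public)
# models; "office.jpg" and camera index 0 are assumptions.
import cv2
import mediapipe as mp
import numpy as np

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
background = cv2.imread("office.jpg")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    mask = result.segmentation_mask[..., None]   # soft mask, 1.0 = person
    bg = cv2.resize(background, (frame.shape[1], frame.shape[0]))
    composite = (mask * frame + (1 - mask) * bg).astype(np.uint8)
    cv2.imshow("virtual background", composite)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```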

 

Background blur and background replacement are now available in Elgato Camera Hub. Creators can apply AI video effects with Facecam, or their studio camera using Cam Link 4K.

Set Up for Streaming Success

Accessing these NVIDIA Broadcast technologies is fast and simple.

If an eligible CORSAIR headset or the ST100 headset stand is recognized by iCUE, it will automatically prompt installation of the NVIDIA Broadcast Audio Effects.

Elgato Camera Hub now features a new Effects tab. Once selected, users will be prompted to download and install Broadcast Video Effects. For Elgato Wave Link, creators will first need to install the Broadcast Audio Effects, followed by the new noise removal VST.

After installation, Broadcast options will appear within iCUE, Wave Link and Camera Hub.

Check out the installation instructions and FAQ.

Broadcast features require GeForce RTX GPUs that can be found in the latest NVIDIA Studio laptops and desktops. These purpose-built systems feature vivid color displays, along with blazing-fast memory and storage to boost streams and all creative work.

Pick up an NVIDIA Studio system today to turn streams into dreams.

Stream Like a Boss In the NVIDIA Studio

If there’s one thing BigCheeseKIT encapsulates, it’s energy.

BigCheeseKIT enjoyed early success as a Golden Joystick award nominee, serving as an ambassador for Twitch and Norton Gaming. He said that the highlight of his career, undoubtedly, was joining T-Pain’s exclusive gaming label, Nappy Boy Gaming.

A natural entertainer, BigCheeseKIT’s presence, gaming knowledge and authenticity dazzle his 60,000+ subscribers. Powered by his GeForce RTX 3080 GPU and live-streaming optimizations from NVIDIA Studio — such as better performance in OBS Studio — BigCheeseKIT has the resources and know-how to host professional streams.

“It’s like having my own television channel, and I’m the host or entertainer,” said the artist.

BigCheeseKIT streams exclusively with OBS Studio, benefitting massively from the dedicated GPU-based NVIDIA Studio encoder (NVENC), which enables seamless streaming with maximum performance.

“Using NVENC with my live streams makes my quality 20x better,” said BigCheeseKIT. “I can definitely see the difference.”

“Quality and consistency,” BigCheeseKIT noted. “NVIDIA hasn’t failed me.”

OBS Studio’s advanced GPU-accelerated encoding also unlocks higher video quality for streaming and recorded videos. Once he started using it, BigCheeseKIT’s system immediately became built to broadcast.

For on-demand videos, BigCheeseKIT prefers to edit using VEGAS Pro. MAGIX’s professional video editing software takes advantage of GPU-accelerated video effects while using NVENC for faster encoding. Overall, the artist said that his creative workflow — charged by his GPU — became faster and easier, saving valuable time.

For aspiring streamers, BigCheeseKIT offered these words of wisdom: “Stream like everyone is watching. Be yourself, have fun and don’t let negativity get to you.”

Nappy Boy Gaming’s newest member: BigCheeseKIT.

Head over to BigCheeseKIT’s Twitch channel to subscribe, learn more and check out his videos.

NVIDIA Broadcast and the SDKs behind it — which enable third-party integrations like the ones described above — are part of the NVIDIA Studio tools that include AI-powered software and NVIDIA Studio Drivers.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by signing up for the NVIDIA Studio newsletter.


Meet the Omnivore: Animator Entertains and Explains With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Australian animator Marko Matosevic is taking dad jokes and breathing them into animated life with NVIDIA Omniverse, a virtual world simulation and collaboration platform for 3D workflows.

Matosevic’s work is multifaceted: he’s a VR developer by day and a YouTuber by night, a lover of filmmaking and animation with a soft spot for sci-fi and dad jokes.

The animated shorts on Matosevic’s YouTube channels Markom3D and Deadset Digital are the culmination of those varied interests and pursuits.

To bring the above film to life, Matosevic harnessed Reallusion iClone and Character Creator for character creation, Perception Neuron V3 for body motion capture, and NVIDIA Omniverse and Epic Games Unreal Engine 5 for rendering.

After noting a lack of YouTube tutorials on animation software like Blender, Matosevic also set himself up as an instructor. He says his goal is to help users at all skill levels — from beginners to advanced — learn new techniques in a concise manner. The following video is a tutorial for the film above:

Matosevic says his ultimate goal in creating these animated shorts is to make his viewers “have a laugh.”

“Through rough times that we are going through at the moment, it is nice to just let yourself go for a few moments and enjoy a short animation,” he said. “Sure, the jokes may be terrible, and a lot of people groan, but I am a dad, and that is part of my responsibility.”

Optimized Rendering and Seamless Workflow

Initially, Matosevic relied primarily on Blender for 3D modeling and Unreal Engine 4 for his work. It wasn’t until he upgraded to an NVIDIA RTX 3080 Ti GPU that he saw the possibility of integrating the NVIDIA Omniverse platform into his toolbox.

“What really got me interested was that NVIDIA [Omniverse] had its own render engine, which I assumed that my 3080 would be optimized for,” he said. “I was able to create an amazing scene with little effort.”

With NVIDIA Omniverse, Matosevic can export whole scenes from Unreal Engine into Omniverse without having to deal with complex shaders, as he would’ve had to do if he were solely working in Blender.

Along with the iClone and Blender connectors, Matosevic uses NVIDIA Omniverse Machinima, an application that allows content creators to collaborate in real time to animate and manipulate characters, along with their environments, inside virtual worlds.

“I like it because with a few simple clicks, I can start the rendering process and know that I am going to have something amazing when it is finished,” he said.

With Universal Scene Description (USD), an open-source format for describing and composing 3D scenes, these applications and connectors work seamlessly together to deliver elevated results.
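That interoperability is easiest to see in a small example: a scene authored once in USD can be opened by any USD-aware app or Omniverse Connector. Here is a minimal sketch using the open-source pxr Python bindings, with arbitrary scene contents:

```python
# Minimal USD sketch: author a scene once, then open the same file in
# any USD-aware app (Omniverse, Blender, Unreal connectors, ...).
# The scene contents here are arbitrary examples.
from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("short_film_shot.usda")
world = UsdGeom.Xform.Define(stage, "/World")        # root transform
prop = UsdGeom.Sphere.Define(stage, "/World/Prop")   # a placeholder prop
prop.GetRadiusAttr().Set(2.0)
UsdGeom.XformCommonAPI(prop.GetPrim()).SetTranslate(Gf.Vec3d(0, 2, 0))
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()                          # writes short_film_shot.usda
```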

“I have created animated short films using Blender and Unreal Engine 4, but Omniverse has just raised the quality to a new level,” he said.

Join In on the Creation

Creators across the world can download NVIDIA Omniverse for free and Enterprise teams can use the platform for their 3D projects.

Check out works from other artists using NVIDIA Omniverse and submit your own work with #MadeInOmniverse to be featured in the gallery.

Join us at SIGGRAPH 2022 to learn how Omniverse, and design and visualization solutions, are driving advanced breakthroughs in graphics workflows and GPU-accelerated software.

Connect your workflows to NVIDIA Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.


Action on Repeat: GFN Thursday Brings Loopmancer With RTX ON to the Cloud

Investigate the ultimate truth this GFN Thursday with Loopmancer, now streaming to all members on GeForce NOW. Stuck in a death loop, RTX 3080 and Priority members can search for the truth with RTX ON — including NVIDIA DLSS and ray-traced reflections.

Plus, players can enjoy the latest Genshin Impact event with the “Summer Fantasia” version 2.8 update. It’s all part of the nine new games joining the GeForce NOW library this week.

Enter the Dragon City

The cycle continues until the case is solved. Loopmancer is streaming on GeForce NOW, with RTX ON.

Playing as a detective in this roguelite-platformer action game, members will wake back up in their apartments each time they die, bathed in the neon lights of futuristic Dragon City. As the story progresses, reviewing what seemed like the correct choice in the past may lead to a different conclusion.

Face vicious gangsters, well-equipped mercs, crazy mutants, highly trained bionics and more while searching for clues, even on mobile devices. Unlock new weapons and abilities through endless reincarnations to enhance your fighting skills for fast-paced battles.

RTX 3080 and Priority members can experience Loopmancer with DLSS for improved image quality at higher frame rates, as well as real-time ray-tracing technology that simulates the realistic, physical behavior of light, even on underpowered devices and Macs. Every loop, detail and map – from the richly colored Dragon Town to the gloomy Shuigou Village – is rendered with beautiful cinematic quality.

Ready to initiate a new loop? Try out the Loopmancer demo in the Instant Play Free Demos row before diving into the full game. RTX 3080 and Priority members can even try the demo with RTX ON.

A Summertime Odyssey Awaits

With a cursed blade of unknown origin, a mysterious unsolved case and the familiar — but not too familiar — islands far at sea, the recent addition of Genshin Impact heats up with the version 2.8 “Summer Fantasia” update, now available.

Meet the newest Genshin character, Shikanoin Heizou, a young prodigy detective from the Tenryou Commission with sharp senses. Members can also cool off with the new sea-based “Summertime Odyssey” main event, explore the Golden Apple Archipelago, experience new stories and dress their best with new outfits.

RTX 3080 members can stream all of the fun at 4K resolution and 60 frames per second, or 1440p and 120 FPS from the PC and Mac native apps. They also get the perks of ultra-low latency that rivals console gaming and can catch all of the newest action with the maximized eight-hour play sessions.

Summer Gamin’, Havin’ a Blast

Neon Blight on GeForce NOW
 Fight through dystopian cyberspace and establish an exotic black-market store in this rogue-lite management shoot-’em-up.

This week brings in a total of nine new titles for gamers to play.

With all of these awesome options to play and only so many hours in a day, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.
