In the NVIDIA Studio: April Driver Launches Alongside New NVIDIA Studio Laptops and Featured 3D Artist

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

This week In the NVIDIA Studio, we’re launching the April NVIDIA Studio Driver with optimizations for the most popular 3D apps, including Unreal Engine 5, Cinema 4D and Chaos Vantage. The driver also supports new NVIDIA Omniverse Connectors for Blender and Redshift.

Digital da Vincis looking to upgrade their canvases can learn more about the newly announced Lenovo ThinkPad P1 NVIDIA Studio laptop, or pick up the Asus ProArt Studiobook 16, MSI Creator Pro Z16 and Z17 — all available now.

These updates are part of the NVIDIA Studio advantage: dramatically accelerated 3D creative workflows that are essential to this week’s featured In the NVIDIA Studio creator, Andrew Averkin, lead 3D environment artist at NVIDIA.

April Showers Bring Studio Driver Powers

The April NVIDIA Studio Driver supports many of the latest creative app updates, starting with the highly anticipated launch of Unreal Engine 5.

Unreal Engine 5’s Lumen on full display.

NVIDIA RTX GPUs support Lumen — UE5’s new, fully dynamic global illumination system — for both software and hardware ray tracing. Together with Nanite, UE5’s virtualized geometry system, Lumen empowers developers to create games and apps containing massive amounts of geometric detail with fully dynamic global illumination. At GTC last month, NVIDIA introduced an updated Omniverse Connector that includes the ability to export the source geometry of Nanite meshes from Unreal Engine 5.

The City Sample is a free downloadable sample project that reveals how the city scene from ‘The Matrix Awakens: An Unreal Engine 5 Experience’ was built.

NVIDIA has collaborated with Epic Games to integrate a host of key RTX technologies with Unreal Engine 5. These plugins are also available on the Unreal Engine Marketplace. RTX-accelerated ray tracing and NVIDIA DLSS in the viewport make iterating on and refining new ideas simpler and faster. For the finished product, those same technologies power beautifully ray-traced graphics while AI enables higher frame rates.

With NVIDIA Reflex — a standard feature in UE5 that does not require a separate plugin or download — PC games running on RTX GPUs experience remarkably low latency.

NVIDIA Real-Time Denoisers deliver clean output from noisy ray-traced signals in real time, increasing the efficiency of art pipelines. RTX Global Illumination produces realistic bounce lighting in real time, giving artists instant feedback in the Unreal Editor viewport. With the processing power of RTX GPUs, the suite of high-quality RTX UE plugins and the next-generation UE5, there’s no limit to creation.

Maxon’s Cinema 4D version S26 includes all-new cloth and rope dynamics, accelerated by NVIDIA RTX GPUs, allowing artists to model digital subjects with increased realism, faster.

Compared with rendering “The City” scene on an NVIDIA GeForce RTX 3090 GPU, the CPU alone took an hour longer!

Performance testing conducted by NVIDIA in April 2022 with an AMD Ryzen Threadripper 3990X 64-core processor, 2,895 MHz, 128GB RAM. NVIDIA Driver 512.58.

Additional features include OpenColorIO adoption, a new camera and enhanced modeling tools.

Chaos Vantage, aided by real-time ray tracing exclusive to NVIDIA RTX GPUs, adds normal map support, a new feature for converting textured models to clay renders to help artists focus on lighting, and ambient occlusion for shadows.

NVIDIA Omniverse Connector updates are giving real-time workflows new features and options. Blender adds new blend shape I/O support to ensure detailed, multifaceted subjects automate and animate correctly. Plus, new USD scale maps unlock large-scale cinematic visualization.

Rendered in Redshift with the NVIDIA RTX A6000 GPU in mere minutes; CPU alone would take over an hour.

Blender and Redshift have added Hydra-based rendering. Artists can now use their renderer of choice within the viewport of all Omniverse apps.

New Studio Driver, Meet New Studio Laptops

April also brings a new Lenovo mobile workstation to the NVIDIA Studio lineup, plus the availability of three more.

The Lenovo ThinkPad P1 features a thin and light design, 16-inch panel and impressive performance powered by GeForce RTX and NVIDIA RTX professional graphics, equipped with up to the new NVIDIA RTX A5500 Laptop GPU.

Dolby Vision, HDR400 and a 165Hz display make the Lenovo ThinkPad P1 a great device for creators.

Studio laptops from other partners include the recently announced Asus ProArt Studiobook 16, MSI Creator Pro Z16 and Z17, all now available for purchase.

Walk Down Memory Lane With Andrew Averkin

Andrew Averkin is a Ukraine-based lead 3D environment artist at NVIDIA. He specializes in creating photorealistic 3D scenes, focused on realism that intentionally invokes warm feelings of nostalgia.

‘Boyhood’ by Andrew Averkin.

Averkin leads with empathy, a critical component of his flow state, saying he aims to create “artwork that combines warm feelings that fuel my passion and artistic determination.”

He created the piece below, called When We Were Kids, using the NVIDIA Omniverse Create app and Autodesk 3ds Max, accelerated by an NVIDIA RTX A6000 GPU.

Multiple light sources provide depth and shadows, adding to the realism.

Here, Averkin harkens back to the pre-digital days of playing with toys and letting one’s imagination do the work.

Averkin first modeled When We Were Kids in Autodesk 3ds Max.

Closeups from the scene show exquisite attention to detail.

The RTX GPU-accelerated viewport and RTX-accelerated AI denoising in Autodesk 3ds Max enable fluid interactivity despite the massive file size.

Omniverse Create lets users assemble complex and physically accurate simulations and 3D scenes in real time.

Averkin then brought When We Were Kids into Omniverse Create to light, simulate and render his 3D scene in real time.

Omniverse allows 3D artists, like Averkin, to connect their favorite design tools to a single scene and simultaneously create and edit between the apps. The “Getting Started in NVIDIA Omniverse” series on the NVIDIA Studio YouTube channel is a great place to learn more.

“Most of the assets were taken from the Epic marketplace,” Averkin said. “My main goal was in playing with lighting scenarios, composition and moods.”

Averkin’s focus on nostalgia and photorealism helps viewers feel the rawrs of yesteryear.

In Omniverse Create, Averkin used specialized lighting tools for his artwork, with the original Autodesk 3ds Max file updating automatically — no messy, time-consuming file conversions or uploads required. He concluded by rendering the final files at lightspeed with his RTX GPU.

Previously, Averkin worked at Axis Animation, Blur Studio and elsewhere. View his portfolio and favorite projects on ArtStation.

Dive Deeper In the NVIDIA Studio

Tons of resources are available to creators who want to learn more about the apps used by this week’s featured artist, and how RTX and GeForce RTX GPUs help accelerate creative workflows.

Take a behind-the-scenes look at The Storyteller, built in Omniverse and showcasing a stunning, photorealistic retro-style writer’s room.

Check out this tutorial from 3D artist Sir Wade Neistadt, who shows how to bring multi-app workflows into Omniverse using USD files, setting up Nucleus for live-linking tools.

View curated playlists on the Studio YouTube channel, plus hundreds more on the Omniverse YouTube channel. Follow NVIDIA Studio on Facebook, Twitter and Instagram, and get updates directly in your inbox by joining the NVIDIA Studio newsletter.

Let Me Shoyu How It’s Done: Creating the NVIDIA Omniverse Ramen Shop

When brainstorming a scene to best showcase the groundbreaking capabilities of the Omniverse platform, some NVIDIA artists turned to a cherished memory: enjoying ramen together in a mom-and-pop shop down a side street in Tokyo.

Simmering pots of noodles, steaming dumplings, buzzing kitchen appliances, warm ambient lighting and glistening black leather stools. These were all simulated in a true-to-reality virtual world by nearly two dozen NVIDIA artists and freelancers across the globe using NVIDIA Omniverse, a 3D design collaboration and world simulation platform.

The final scene — consisting of over 22 million triangles, 350 unique textured models and 3,000 4K-resolution texture maps — welcomes viewers into a virtual ramen shop featured in last month’s GTC keynote address by NVIDIA founder and CEO Jensen Huang.

The mouth-watering demo was created to highlight the NVIDIA RTX-powered real-time rendering and physics simulation capabilities of Omniverse, which scales performance and speed when running on multiple GPUs.

It’s a feast for the eyes, as all of the demo’s parts are physically accurate and photorealistic, from the kitchen appliances and order slips; to the shoyu ramen and chashu pork; to the stains on the pots and pans.

“Our team members were hungry just looking at the renders,” said Andrew Averkin, senior art manager and lead environment artist at NVIDIA, in a GTC session offering a behind-the-scenes look at the making of the Omniverse ramen shop.

The session — presented by Averkin and Gabriele Leone, senior art director at NVIDIA — is now available on demand.

Gathering the Ingredients for Reference

The team’s first step was to gather the artistic ingredients: visual references on which to base the 3D models and props for the scene.

An NVIDIA artist traveled to a real ramen restaurant in Tokyo and collected over 2,000 high-resolution reference images and videos, each capturing aspects from the kitchen’s distinct areas for cooking, cleaning, food preparation and storage.

Then, props artists modeled and textured 3D assets for all of the shop’s items, from the stoves and fridges to gas pipes and power plugs. Even the nutrition labels on bottled drinks and the buttons for the ticket machine from which visitors order meals were precisely replicated.

Drinks in a fridge at the virtual ramen shop, made using Omniverse Create, Adobe Substance 3D Painter, Autodesk 3ds Max, Blender, Maxon Cinema 4D, and RizomUV.

In just two months, NVIDIA artists across the world modeled 350 unique props for the scene, using a range of design software including Autodesk Maya, Autodesk 3ds Max, Blender, Maxon Cinema 4D and Pixologic ZBrush. Omniverse Connectors and Pixar’s Universal Scene Description format enabled the models to be seamlessly brought into the Omniverse Create app.
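
To make the interchange concrete, here’s a minimal sketch using Pixar’s USD Python API of the pattern Omniverse Connectors build on: composing separately exported props into a shared stage by reference. The file paths and prim names here are hypothetical.

```python
# Minimal sketch: composing separately exported props into a shared USD
# stage by reference. Paths and prim names are hypothetical.
from pxr import Usd, UsdGeom

# Create the shared scene that Omniverse Create would open.
stage = Usd.Stage.CreateNew("ramen_shop.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# Each prop exported from Maya, Blender, Cinema 4D, etc. arrives as its
# own USD file and is composed into the scene non-destructively.
for name in ("stove", "fridge", "ticket_machine"):
    prim = stage.DefinePrim(f"/World/Props/{name}", "Xform")
    prim.GetReferences().AddReference(f"./props/{name}.usd")

stage.GetRootLayer().Save()
```

Because each prop stays in its own file and is only referenced, artists can keep iterating in their source apps while the assembled stage picks up the changes.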

“The best way to think about Omniverse Create is to consider it a world-building tool,” Leone said. “It works with Omniverse Connectors, which allow artists to use whichever third-party apps they’re familiar with and connect their work seamlessly in Omniverse — taking creativity and experimentation to new levels.”

Adding Lighting and Texture Garnishes 

Artists then used Adobe Substance Painter to texture the materials. To make the props look used on a daily basis, the team whipped up details like dents on wooden counters, stickers peeling off appliances and sauce stains on pots.

“Some of our artists went as far as cooking some of the recipes themselves and taking references of their own pots to get a good idea of how sauce or burn stains might accumulate,” Averkin said.

Omniverse’s simulation capabilities enable light to reflect off of glass and other materials with true-to-reality physical accuracy. Plus, real-time photorealistic lighting rendered in 4K resolution created an orange warmth inside the cozy virtual shop, contrasting the rainy atmosphere that can be seen through the windows.

Artists used Omniverse Flow, a fluid simulation Omniverse Extension for smoke and fire, to bring the restaurant’s burning stoves and steaming plates to life. SideFX Houdini software helped to animate the boiling water, which was eventually brought into the virtual kitchen using an Omniverse Connector.

Broth boils in the virtual kitchen using visual effects offered by Houdini software.

And Omniverse Create’s camera animation feature allowed the artists to capture the final path-traced scene in real time, exactly as observed through the viewport.

Photorealistic lighting illuminates the virtual ramen shop, enabled by NVIDIA RTX-based ray tracing and path tracing.

Learn more about Omniverse by watching additional GTC sessions on demand — featuring visionaries from the Omniverse team, Adobe, Autodesk, Epic Games, Pixar, Unity and Walt Disney Studios.

Join in on the Creation

Creators across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Join the #MadeInMachinima contest, running through June 27, for a chance to win the latest NVIDIA Studio laptop.

Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.

Stellar Weather: Researchers Describe the Skies of Exoplanets

A paper released today describes in the greatest detail to date the atmospheres on distant planets.

Seeking the origins of what’s in and beyond the Milky Way, researchers surveyed 25 exoplanets, bodies that orbit stars far beyond our solar system. Specifically, they studied hot Jupiters, the largest and thus easiest to detect exoplanets, many sweltering at temperatures over 3,000 degrees Fahrenheit.

Their analysis of these torrid atmospheres used high performance computing with NVIDIA GPUs to advance understanding of all planets, including our own.

Hot Jupiters Shine New Light

Hot Jupiters “offer an incredible opportunity to study physics in environmental conditions nearly impossible to reproduce on Earth,” said Quentin Changeat, lead author of the paper and a research fellow at University College London (UCL).

By analyzing trends across a large group of exoplanets, they shine new light on big questions.

“This work can help make better models of how the Earth and other planets came to be,” said Ahmed F. Al-Refaie, a co-author of the paper and head of numerical methods at the UCL Centre for Space Exochemistry Data.

Parsing Hubble’s Big Data

They used the most data ever employed in a survey of exoplanets — 1,000 hours of archival observations, mainly from the Hubble Space Telescope.

The hardest and, for Changeat, the most fascinating part of the process was determining what small set of models to run in a consistent way against data from all 25 exoplanets to get the most reliable and revealing results.

“There was an amazing period of exploration — I was finding all kinds of sometimes weird solutions — but it was really fast to get the answers using NVIDIA GPUs,” he said.

Millions of Calculations

Their overall results required heady math. Each of about 20 models had to run 250,000 times for all 25 exoplanets.

They used the Wilkes3 supercomputer at the University of Cambridge, which packs 320 NVIDIA A100 Tensor Core GPUs on an NVIDIA Quantum InfiniBand network.

“I expected the A100s might be double the performance of V100s and P100s I used previously, but honestly it was like an order of magnitude difference,” said Al-Refaie.

Orders of Magnitude Gains

A single A100 GPU gave a 200x performance boost compared to a CPU.

Packing 32 processes on each GPU, the team got the equivalent of a 6,400x speedup over a CPU (32 × 200). Each Wilkes3 node, with its four A100s, delivered the equivalent of up to 25,600 CPU cores, he said.

The speedups are high because their application is embarrassingly parallel. It simulates on GPUs how hundreds of thousands of light wavelengths would travel through an exoplanet’s atmosphere.

On A100s, their models complete in minutes work that would require weeks on CPUs.

The GPUs ran the complex physics models so fast that their bottleneck became a CPU-based system handling a much simpler task of determining statistically where to explore next.

“It was a little funny, and somewhat astounding, that simulating the atmosphere was not the hard part — that gave us an ability to really see what was in the data,” he said.

A Wealth of Software

Al-Refaie employed CUDA profilers to optimize jobs, PyCUDA to optimize the team’s code and cuBLAS to speed up some math routines.
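
As a rough illustration of that wavelength-parallel pattern, here’s a toy PyCUDA sketch that assigns one GPU thread per wavelength bin. The kernel’s “physics” is a placeholder, not the team’s actual radiative transfer code.

```python
# Toy sketch of the wavelength-parallel pattern: one GPU thread per
# wavelength bin. The "physics" is a placeholder, not the team's
# actual radiative-transfer model.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

kernel = SourceModule(r"""
__global__ void transmit(const float *wavelength, float *depth, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        // Placeholder absorption law; a real model would integrate
        // opacities through many atmospheric layers here.
        depth[i] = expf(-1.0f / wavelength[i]);
}
""")
transmit = kernel.get_function("transmit")

n = 300_000  # hundreds of thousands of wavelength bins
wl = gpuarray.to_gpu(np.linspace(0.3, 15.0, n).astype(np.float32))
out = gpuarray.empty_like(wl)

transmit(wl.gpudata, out.gpudata, np.int32(n),
         block=(256, 1, 1), grid=((n + 255) // 256, 1))
print(out.get()[:5])
```

Every bin is independent, which is why the work maps so cleanly onto thousands of GPU threads and why the CPU-side bookkeeping, not the physics, became the bottleneck.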

“With all the NVIDIA software available, there’s a wealth of things you can exploit, so the team is starting to spit out papers quickly now because we have the right tools,” he said.

They will need all the help they can get, as the work is poised to get much more challenging.

Getting a Better Telescope

The James Webb Space Telescope comes online in June. Unlike Hubble and all previous instruments, it’s specifically geared to observe exoplanets.

The team is already developing ways to work at higher resolutions to accommodate the expected data. For example, instead of using one-dimensional models, they will use two- or three-dimensional ones and account for more parameters like changes over time.

“If a planet has a storm, for example, we may not be able to see it with current data, but with the next generation data, we think we will,” said Changeat.

Exploring HPC+AI

The rising tide of data opens a door to apply deep learning, something the group’s AI experts are exploring.

It’s an exciting time, said Changeat, who’s joining the Space Telescope Science Institute in Baltimore as an ESA fellow to work directly with experts and engineers there.

“It’s really fun working with experts from many fields. We had space observers, data analysts, machine-learning and software experts on this team — that’s what made this paper possible,” Changeat said.

Learn more about the paper here.

Image at top courtesy of ESA/Hubble, N. Bartmann

By Land, Sea and Space: How 5 Startups Are Using AI to Help Save the Planet

Different parts of the globe are experiencing distinct climate challenges — severe drought, dangerous flooding, reduced biodiversity or dense air pollution.

The challenges are so great that no country can solve them on its own. But innovative startups worldwide are lighting the way, demonstrating how these daunting challenges can be better understood and addressed with AI.

Here’s how five — all among the 10,000+ members of NVIDIA Inception, a program designed to nurture cutting-edge startups — are looking out for the environment using NVIDIA-accelerated applications:

Blue Sky Analytics Builds Cloud Platform for Climate Action

India-based Blue Sky Analytics is building a geospatial intelligence platform that harnesses satellite data for environmental monitoring and climate risk assessment. The company provides developers with climate datasets to analyze air quality and estimate greenhouse gas emissions from fires — with additional datasets in the works to forecast future biomass fires and monitor water capacity in lakes, rivers and glacial melts.

The company uses cloud-based NVIDIA GPUs to power its work. It’s a founding member of Climate TRACE, a global coalition led by Al Gore that aims to provide high-resolution global greenhouse gas emissions data in near real time. The startup leads Climate TRACE’s work examining how land use and land cover change due to fires.

Rhions Lab Protects Wildlife With Computer Vision

Kenya-based Rhions Lab uses AI to tackle challenges to biodiversity, including human-wildlife conflict, poaching and illegal wildlife trafficking. The company is adopting NVIDIA Jetson Nano modules for AI at the edge to support its conservation projects.

One of the company’s projects, Xoome, is an AI-powered camera trap that identifies wild animals, vehicles and civilians — sending alerts of poaching threats to on-duty wildlife rangers. Another initiative monitors beekeepers’ colonies with a range of sensors that capture acoustic data, vibrations, temperature and humidity within beehives. The platform can help beekeepers monitor bee colony health and fend off threats from thieves, whether honey badgers or humans.

TrueOcean Predicts Undersea Carbon Capture and Storage

German startup TrueOcean analyzes global-scale maritime data to inform innovation around natural ocean carbon sinks, renewable energy and shipping route optimization. The company is using AI to predict and quantify carbon absorption and storage in seagrass meadows and subsea geology. This makes it possible to greatly increase the carbon storage potential of Earth’s oceans.

TrueOcean uses AI solutions, including federated learning accelerated on NVIDIA DGX A100 systems, to help scientists predict, monitor and manage these sequestration efforts.

ASTERRA Saves Water With GPU-Accelerated Leak Detection

ASTERRA, based in Israel, has developed AI models that analyze satellite images to answer critical questions around water infrastructure. It’s equipping maintenance workers and engineers with the insights needed to find deficient water pipelines, assess underground moisture and locate leaks. The company uses NVIDIA GPUs through Amazon Web Services to develop and run its machine learning algorithms.

Since deploying its leak detection solution in 2016, ASTERRA has helped the water industry identify tens of thousands of leaks, conserving billions of gallons of drinkable water each year. Stopping leaks prevents ground pollution, reduces water wastage and even saves power. The company estimates its solution has reduced the water industry’s energy use by more than 420,000 megawatt hours since its launch.

Neu.ro Launches Renewable Energy-Powered AI Cloud

Another way to make a difference is by decreasing the carbon footprint of training AI models.

To help address this challenge, San Francisco-based Inception startup Neu.ro launched an NVIDIA DGX A100-based AI cloud that runs entirely on geothermal and hydropower, with free-air cooling. Located in Iceland, the data center is being used for AI applications in telecommunications, retail, finance and healthcare.

The company has also developed a Green AI suite to help businesses monitor the environmental impact of AI projects, allowing developer teams to optimize compute usage to balance performance with carbon footprint.

Learn more about how GPU technology drives applications with social impact, including environmental projects. AI, data science and HPC startups can apply to join NVIDIA Inception.

Tooth Tech: AI Takes Bite Out of Dental Slide Misses by Assisting Doctors

Your next trip to the dentist might offer a taste of AI.

Pearl, a West Hollywood startup, provides AI for dental images to assist in diagnosis. It landed FDA clearance last month, the first to get such a go-ahead for dentistry AI.

The approval paves the way for its use in clinics across the United States.

“It’s really a first of its kind for dentistry,” said Ophir Tanz, co-founder and CEO of Pearl. “But we also have similar regulatory approvals across 50 countries globally.”

Pearl’s software platform, available in the cloud as a service, enables dentists to run real-time screening of X-rays. Dentists can then review the AI findings and share them with patients to facilitate informed dentist-patient discussions about diagnosis and treatment planning.

Behind the scenes, NVIDIA GPU-driven convolutional neural networks developed by Pearl can spot not just tooth decay but many other dental issues, like cracked crowns and root abscesses requiring a root canal.

Pearl’s AI delivers results for dentists. The startup’s FDA application showed that, on average, Pearl’s AI spotted 36 percent more pathologies and other dental issues than an average dentist. “And that’s important because in dentistry it’s extremely common and routine to miss a pathology,” said Tanz.

The company’s products include its Practice Intelligence, which enables dental practices to run AI on patient data to discover missed diagnoses and treatment opportunities. Pearl Protect can help screen for dental insurance fraud, waste and abuse, while Claims Review offers automated claims examination.

Pearl, founded in 2019, is a member of the NVIDIA Inception startup program, which provided it access to Deep Learning Institute courses, NVIDIA Developer Forums and technical workshops.

Hatching Dental AI

The son of a dentist, Tanz has a mouthful of a founding tale. The entrepreneur decided to pursue AI for dental radiology after talking shop on a visit with his dentist. A partner at the practice liked the idea so much he jumped on board as a co-founder.

Pearl co-founders Cambron Carter, Kyle Stanley and Ophir Tanz (left to right)

Tanz, who founded tech unicorn GumGum for AI to analyze images, video and text for better contextual advertising, was joined by GumGum colleague Cambron Carter, now CTO and co-founder at Pearl. Dentist Kyle Stanley, co-founder and chief clinical officer, rounds out the trio with clinical experience.

Pearl’s founders targeted a host of conditions commonly addressed in dental clinics. They labeled more than a million images to help train their proprietary CNN models, running on NVIDIA V100 Tensor Core GPUs in the cloud, to identify issues. Before that, they prototyped on local NVIDIA-powered workstations.

Inference is done on cloud-based GPUs, where Pearl’s system synchronizes with the dentist’s real-time and historical radiology data. “The dental vertical is still undergoing a transition to the cloud, and now we’re bringing them AI in the cloud — we represent a wave of technology that will propel the field of dentistry into the future,” said Carter.

Getting FDA approval wasn’t easy, he said. It required completing an expansive clinical trial. Pearl submitted four studies, each involving thousands of X-rays and over 80 expert dentists and radiologists.

Getting Second Opinion for Diagnosis

Pearl offers dentists a product called Second Opinion to aid in the detection of disease in radiography. Second Opinion can identify dozens of conditions to help validate dentists’ findings, according to Tanz.

“We’re the only company in the world that is able to diagnose pathology and detect disease in an AI-driven manner in the dental practice,” he said. “We’re driving a much more comprehensive diagnosis, and it’s a diagnostic aid for general practitioners.”

Second Opinion is taking root in clinics. Sage Dental, which has more than 60 offices across the East Coast, is a customer, as is Dental 365, which also has more than 60 offices in the region.

“Second Opinion is an extremely important tool for the future of dentistry,” said Cindy Roark, chief clinical officer at Sage. “Dentistry has needed consistency for a very long time. Diagnosis is highly variable, and variability leads to confusion and distrust from patients.”

Boosting Doctor-Patient Rapport 

Dentists review X-rays while patients are in the chair, pointing out any issues as they go. Even for experienced dentists, making sense of the grayscale imagery that forms the basis of most treatment plans can be challenging — only compounded by the many demands on their attention throughout a busy day juggling patients.

For patients, comprehending the indistinct gradations in X-rays that separate healthy tooth structures from unhealthy ones is even harder.

But with AI-aided images, dentists are able to present areas of concern outlined by simple, digestible bounding boxes. This ensures that their treatment plans have a sound basis, while providing patients with a much clearer picture of what exactly is going on in their X-rays.
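
Pearl’s pipeline is proprietary, but the overlay idea itself is simple. Here’s a generic OpenCV sketch that draws labeled detection boxes on a radiograph; the detections, labels and file names are invented for illustration.

```python
# Generic illustration of the overlay idea: draw labeled boxes from a
# detection model on a radiograph. Detections and paths are made up;
# Pearl's actual pipeline is proprietary.
import cv2

# (x, y, width, height, label, confidence) as a detector might emit.
detections = [
    (412, 230, 64, 58, "caries", 0.91),
    (120, 310, 80, 70, "root abscess", 0.84),
]

img = cv2.imread("xray.png")  # grayscale radiograph loaded as 3-channel BGR
for x, y, w, h, label, conf in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(img, f"{label} {conf:.0%}", (x, y - 6),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("xray_annotated.png", img)
```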

“You’re able to have a highly visual sort of discussion and paint a visual narrative for patients so that they really start to understand what is going on in their mouth,” said Dr. Stanley.

GFN Thursday Is Fit for the Gods: ‘God of War’ Arrives on GeForce NOW

The gods must be smiling this GFN Thursday — God of War today joins the GeForce NOW library.

Sony Interactive Entertainment and Santa Monica Studio’s masterpiece is available to stream from GeForce NOW servers, across nearly all devices and at up to 1440p and 120 frames per second for RTX 3080 members.

Get ready to experience Kratos’ latest adventure as part of nine titles joining this week.

The Story of a Generation Comes to the Cloud

This GFN Thursday, God of War (Steam) comes to GeForce NOW.

With his vengeance against the Gods of Olympus years behind him, play as Kratos, now living as a man in the realm of Norse Gods and monsters. Mentor his son, Atreus, to survive a dark, elemental world filled with fearsome creatures and use your weapons and abilities to protect him by engaging in grand and grueling combat.

God of War’s PC port is as much a masterpiece as the original game, and RTX 3080 members can experience it the way its developers intended. Members can explore the dark, elemental world of fearsome creatures at up to 1440p and 120 FPS on PC, and up to 4K on SHIELD TV. The power of AI in NVIDIA DLSS brings every environment to life with phenomenal graphics and uncompromised image quality. Engage in visceral, physical combat with ultra-low latency that rivals even local console experiences.

Streaming from the cloud, you can play one of the best action games on your Mac at up to 1440p or 1600p on supported devices. Or take the action with you by streaming to your mobile device at up to 120 FPS, with up to eight-hour gaming session lengths for RTX 3080 members.

Enter the Norse realm and play God of War today.

More? More.

This week also brings new instant-play free game demos streaming on GeForce NOW.

Squish, bop and bounce around to the rhythms of an electronica soundscape in the demo for Lumote: The Mastermote Chronicles. Give the demo a try for free before adding the full title to your wishlist on Steam.

Experience the innovative games being developed by studios from across the greater China region, which will participate in the Enter the Dragon indie game festival. Starting today, play the Nobody – The Turnaround demo, and look for others to be added in the days ahead.

Finally, get ready for the upcoming launch of Terraformers with the instant-play free demo that went live on GeForce NOW last week.

MotoGP22 on GeForce NOW
Get your zoomies out and be the fastest on the track in the racing game MotoGP22.

In addition, members can look for the following games arriving in full this week:

Finally, in case you hadn’t guessed it before, we bet you can now. Let us know your guess on Twitter or in the comments below.

Welcome ‘In the NVIDIA Studio’: A Weekly Celebration of Extraordinary Artists, Their Inspiring Art and Innovative Techniques

Creating content is no longer tethered to using paint and stone as mediums, nor being in massive studios. Visual art can now be created anywhere, anytime.

But being creative is still challenging and time-consuming. NVIDIA is making artistic workflows easier and faster by giving creators tools that enable them to remain in their flow state.

That’s what NVIDIA Studio is — an ecosystem of creative app optimizations, GPU-accelerated features and AI-powered apps, powered by NVIDIA RTX GPUs and backed by world-class Studio Drivers.

Our new ‘In the NVIDIA Studio’ blog series celebrates creativity everywhere by spotlighting 3D animators, video editors, photographers and more, every week. We’ll showcase their inspirational and thought-provoking work, and detail how creators are using NVIDIA GPUs to go from concept to completion, faster than ever.

The series kicks off with 3D artist Jasmin Habezai-Fekri. Check out her work below, created with Unreal Engine, Adobe Substance 3D and Blender, accelerated by her GeForce RTX 2070 GPU.

Habezai-Fekri Dreams in 3D

‘Old Forgotten Library’ and ‘Shiba Statue’ highlight Habezai-Fekri’s use of vivid colors.

Based in Germany, Habezai-Fekri works in gaming as a 3D environment artist, making props and environments with hand-painted and stylized physically based rendering textures. She revels in creating fantasy and nature-themed scenes, accentuated by big, bold colors.


Habezai-Fekri’s passion is creating artwork with whimsical charm, piquing the interest of her audiences while creating a sense of immersion, rounding out her unique flair.


One such piece is Bird House — a creative fusion of styles and imagination.

With this piece, Habezai-Fekri was learning the ins and outs of Unreal Engine while trying to replicate “something very 2D-esque in a 3D space, giving it all a very painterly yet next-gen feeling.” Through iteration, she developed her foundational skills and found that a set art direction and visual style gave the piece her own signature.

Prop development for ‘Bird House.’

Habezai-Fekri uses Blender software for modeling and employs ZBrush for her high-poly sculpts to help bring stylized details into textures and models. The fine details are critical for invoking the real-life emotions she hopes to cultivate. “Creating immersiveness is a huge aspect for me when making my art,” Habezai-Fekri said.

Hand-painted, stylized wood textures and details in ‘Bird House.’

Looking closer reveals Habezai-Fekri’s personal touches in the textures in Bird House — she hand-painted them in Adobe Substance 3D Painter. RTX GPU-accelerated light and ambient occlusion in Substance 3D helps speed up her process by outputting new textures in mere seconds.


“Having a hand-painted pass on my textures really enhances the assets and lets me channel that artistic side throughout a heavily technical process,” she said.

“In our industry, with new tools being released so frequently, it’s inevitable to constantly learn and expand your skill set. Being open to that from the start really helps to be more receptive to it.”

Habezai-Fekri’s work often uses vivid colors. To make it look inviting and friendly, she purposely saturates colors, even if the subject matter is not colorful by nature.

Habezai-Fekri also finds inspiration in trying new tools and workflows, particularly when she sees other artists and creatives doing amazing work.

By partnering with creative app developers, the NVIDIA Studio ecosystem regularly gives Habezai-Fekri new tools that help her create faster. For example, RTX-accelerated OptiX ray tracing in Blender’s viewport enables her to enjoy interactive, photorealistic rendering in real time.

RTX GPUs also deliver rendering speeds up to 2.5x faster with Blender Cycles 3.0. This means a lot less waiting and a lot more creating.
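
For those curious how that acceleration is switched on, here’s a small Blender Python snippet enabling OptiX-backed GPU rendering in Cycles. Property names follow Blender 3.x and can vary between versions.

```python
# Enable OptiX-backed GPU rendering in Cycles from Blender's Python
# console. Property names follow Blender 3.x and may vary by version.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # use the RTX ray-tracing backend
prefs.get_devices()                   # refresh the detected device list
for device in prefs.devices:
    device.use = device.type == "OPTIX"

bpy.context.scene.cycles.device = "GPU"
```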

Everything comes together for Habezai-Fekri with the application of final textures and colors in Unreal Engine. NVIDIA RTX GPUs feature advanced capabilities like DLSS, which enhances interactivity of the viewport in Unreal Engine by using AI to upscale frames rendered at lower resolution, while still retaining detail.

Habezai-Fekri works for Airship Syndicate. Previously, she was an artist at Square Enix and ArtStation. View her work on ArtStation, including a new learning course providing project insights.

NVIDIA Studio Resources

Habezai-Fekri is one of the artists spotlighted in the latest Studio Standouts video, “Stunning Art From Incredible Women Artists.”

See more amazing digital art in the video from Yulia Sokolova, Nourhan Ishmai, Ecem Okumus and Laura Escoin.

Learn more about texturing in Substance 3D Painter by exploring artist and Adobe Creative Director Vladimir Petkovic’s series, “From Texturing to Final Render in Adobe Substance Painter.”

Join the growing number of 3D artists collaborating around the globe in real time, and working in multiple apps simultaneously, with NVIDIA Omniverse.

Check back In the NVIDIA Studio every week to discover new featured artists, creative tips and tricks, and the latest NVIDIA Studio news. Follow NVIDIA Studio on Facebook, Twitter and Instagram, subscribe to the Studio YouTube channel and get updates directly in your inbox by joining the NVIDIA Studio newsletter.

Startup Transforms Meeting Notes With Time-Saving Features

Gil Makleff and Artem Koren are developing AI for meeting transcripts, creating time-savers like shareable highlights of the text that is often TL;DR (too long; didn’t read).

The Sembly founders conceived the idea after years of working in enterprise operational consulting at UMT Consulting Group, which was acquired by Ernst & Young.

“We had an intuition that if AI were applied to those operational conversations and able to make sense of them, the value gains to enterprises could be enormous,” said Koren, chief product officer at Sembly.

Sembly goes far beyond basic transcription, allowing people to skip meetings and receive speaker highlights and key action items for follow-ups.

The New York startup uses proprietary AI models to transcribe and analyze meetings, transforming them into actionable insights. It aims to supercharge teams who want to focus on delivering results rather than spending time compiling notes.

Sembly’s GPU-fueled automatic speech recognition AI can be used with popular video call services such as Zoom, Webex, Microsoft Teams and Google Meet. In a few clicks on the Sembly site, it can be synced to Outlook or Google calendars or used for calls in progress via e-mail, web app, or the Sembly mobile app.

The service delivers market-leading transcript accuracy and AI-driven analytics, including highlights to pinpoint important discussion topics. It also allows users to zero in on meeting speakers and easily share clips of individual passages with team members, enhancing collaboration.

Sembly, founded in 2019, is a member of the NVIDIA Inception startup program.

Improving Speaker Tracking With NeMo

One of the pain points Sembly addresses in transcripts is diarization: identifying which speaker said what in the text, which can be error-prone. The company had tried popular diarization systems from major software makers with negligible results.

Diarization is a key step in the meeting processing pipeline because many of Sembly’s natural language processing features rely on that text to be properly identified. Its Glance View feature, for instance, can identify key meeting topics and who raised them.

Attributing meeting topics to the wrong person throws a wrench in follow-ups on action items.

Harnessing NVIDIA NeMo — an open source framework for building, training and fine-tuning GPU-accelerated speech and natural language understanding models — provided a significant leap in accuracy.

Using the NeMo conversational AI toolkit for diarization model training, running on NVIDIA A100 GPUs, dramatically improved its speaker tracking. Before applying NeMo, it had an 11 percent error rate in diarization. After implementation, its error rate declined to 5 percent.
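
For a sense of what offline diarization with NeMo looks like, here’s a condensed sketch following the pattern in NeMo’s example scripts. The config and manifest paths are hypothetical, and Sembly’s production setup is certainly more involved.

```python
# Condensed sketch of offline speaker diarization with NVIDIA NeMo's
# clustering diarizer. Config and manifest paths are hypothetical;
# Sembly's production models are proprietary.
from omegaconf import OmegaConf
from nemo.collections.asr.models import ClusteringDiarizer

# The manifest lists audio to process, one JSON object per line, e.g.
# {"audio_filepath": "meeting.wav", "offset": 0, "duration": null}
cfg = OmegaConf.load("diar_infer_meeting.yaml")  # from NeMo's examples
cfg.diarizer.manifest_filepath = "meeting_manifest.json"
cfg.diarizer.out_dir = "diar_output"

diarizer = ClusteringDiarizer(cfg=cfg)
diarizer.diarize()  # writes RTTM files mapping time spans to speakers
```

The resulting speaker-labeled time spans are what downstream features like Glance View depend on, which is why diarization accuracy matters so much.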

Business Boost Amid Meeting Fatigue

With a shift to fewer face-to-face meetings and more virtual ones, companies are seeking ways to counter online meeting fatigue for employees, said Koren. That’s important for delivering more engaging workplace experiences, he added.

“There’s a concept of ‘meeting tourists’ in large organizations. And this is one of those things that we’re hoping Sembly will help to address,” he said.

Adopting Sembly to easily highlight key points and speakers in transcripts for sharing gives workers more time back in the day, he said. And leaner operational technologies that help companies stay more focused on key business objectives offer competitive advantages, said Koren.

For those with bloated calendars and the need to try to dance between two meetings, Sembly can also assist. Sembly can be directed to attend a meeting instead of the user and come back with a summary and a list of key items, saving time while keeping teams more informed.

“Sometimes I’d like to attend two meetings that overlap — with Sembly, now I can,” Koren said.

A Night to Behold: Researchers Use Deep Learning to Bring Color to Night Vision

Talk about a bright idea. A team of scientists has used GPU-accelerated deep learning to show how color can be brought to night-vision systems. 

In a paper published this week in the journal PLOS One, a team of researchers at the University of California, Irvine, led by Professor Pierre Baldi and Dr. Andrew Browne, describes how they reconstructed color images of photos of faces captured with an infrared camera.

The study is a step toward predicting and reconstructing what humans would see using cameras that collect light using imperceptible near-infrared illumination. 

The study’s authors explain that humans see light in the so-called “visible spectrum,” or light with wavelengths between 400 and 700 nanometers.

Typical night vision systems rely on cameras that collect infrared light outside this spectrum that we can’t see. 

Information gathered by these cameras is then transposed to a display that shows a monochromatic representation of what the infrared camera detects, the researchers explain.

The team at UC Irvine developed an imaging algorithm that relies on deep learning to predict what humans would see using light captured by an infrared camera.


Researchers at the University of California, Irvine, aimed to use deep learning to predict visible spectrum images using infrared illumination alone. Source: Browne, et al. 


In other words, they’re able to digitally render a scene for humans using cameras operating in what, to humans, would be complete “darkness.” 

To do this, the researchers used a monochromatic camera sensitive to visible and near-infrared light to acquire an image dataset of printed images of faces. 

These images were gathered under multispectral illumination spanning standard visible red, green, blue and infrared wavelengths. 

The researchers then optimized a convolutional neural network with a U-Net-like architecture — a specialized convolutional neural network first developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg — to predict visible spectrum images from near-infrared images.
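
As a drastically simplified sketch of the U-Net idea, the PyTorch model below maps three near-infrared input channels to predicted RGB through an encoder, a decoder and one skip connection. The paper’s actual architecture, loss functions and training procedure differ.

```python
# Drastically simplified U-Net-style model: encode, decode, and a skip
# connection, mapping 3 near-infrared channels to predicted RGB.
# The paper's actual architecture and training setup differ.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        # 64 input channels: upsampled features concatenated with the skip.
        self.dec = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())  # RGB in [0, 1]

    def forward(self, x):
        skip = self.enc(x)
        mid = self.mid(self.down(skip))
        return self.dec(torch.cat([self.up(mid), skip], dim=1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyUNet().to(device)
ir = torch.rand(1, 3, 256, 256, device=device)  # three infrared channels
rgb = model(ir)  # predicted visible-spectrum image, shape (1, 3, 256, 256)
```

The skip connection is the key trait: it lets fine spatial detail from the input bypass the bottleneck, which matters when reconstructing sharp facial features.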

On the left, visible spectrum ground truth image composed of red, green and blue input images. On the right, predicted reconstructions for UNet-GAN, UNet and linear regression using three infrared input images. Source: Browne, et al. 

The system was trained on NVIDIA GPUs, with 140 images of human faces used for training, 40 for validation and 20 for testing.

The result: the team successfully recreated color portraits of people taken by an infrared camera in darkened rooms. In other words, they created systems that could “see” color images in the dark.  

To be sure, these systems aren’t yet ready for general-purpose use. They would need to be trained to predict the colors of different kinds of objects, such as flowers or faces.

Nevertheless, the study could one day lead to night vision systems able to see color, just as we do in daylight, or allow scientists to study biological samples sensitive to visible light.

Featured image source: Browne, et al. 

GFN Thursday Gears Up With More Electronic Arts Games on GeForce NOW

This GFN Thursday delivers more gr-EA-t games as two new titles from Electronic Arts join the GeForce NOW library.

Gamers can now enjoy Need for Speed HEAT and Plants vs. Zombies Garden Warfare 2 streaming from GeForce NOW to underpowered PCs, Macs, Chromebooks, SHIELD TV and mobile devices.

It’s all part of the eight total games coming to the cloud, starting your weekend off right.

Newest Additions From Electronic Arts

Get ready to play more beloved hits from EA this week.

Need for Speed HEAT on GeForce NOW
The Electronic Arts collection expands this week with two new titles streaming on GeForce NOW, including Need for Speed HEAT.

Hustle by day and risk it all at night in Need for Speed HEAT (Steam and Origin). Compete and level up in the daytime race scene, then use the prize money to customize cars and ramp up the action in illicit, nighttime street races that build your reputation as you go up against the cops swarming the city.

Ready the Peashooters and prepare for plant-based battle against zombies in Plants vs. Zombies Garden Warfare 2 (Origin). This time, bring the fight to the zombies and help the plants reclaim a zombie-filled Suburbia from the clutches of Dr. Zomboss.

Stream these new additions and more Electronic Arts games across all your devices with unrivaled performance from the cloud and latency so low that it feels local by upgrading to the power of a GeForce NOW RTX 3080 membership.

All of the Games Coming This Week

Ranch Simulator on GeForce NOW
Yippee ki-yay, gamers. Stream the immersive open-world title Ranch Simulator on GeForce NOW today.

In addition, members can look for the eight total new games ready to stream this week:

And, in case you missed it, members have been loving the new, instant-play free game demos streaming on GeForce NOW. Try out some of the hit titles streaming on the service and the top tech that comes with Priority and RTX 3080 membership features, like RTX in Ghostrunner and DLSS in Chorus, before purchasing the full PC versions.

Jump in with the newest instant play free demo arriving this week with Terraformers: First Steps on Mars – the prologue to the game Terraformers – before the full game releases next week.

Speaking of jumping in, we’ve got a question to start your weekend gaming off. Let us know your answer on Twitter or in the comments below.
