GFN Thursday Dashes Into December With 22 New Games, Including ‘Marvel’s Midnight Suns’ Streaming Soon

It’s a new month, which means GeForce NOW’s got the list of 22 new games arriving in December.

Rise up for Marvel’s Midnight Suns, from publisher 2K Games, streaming on GeForce NOW later this month.

Then get ready to move out, members. Battlefield 2042 is the latest game from the Electronic Arts catalog streaming on GeForce NOW. It arrives just in time for the free access weekend, running Dec. 1-4, and comes with a members-only reward.

These games lead the charge among the six additions streaming this week.

Time to Assemble 

From the creators of XCOM, and published by 2K Games, Marvel’s Midnight Suns is a tactical role-playing game set in the darker, supernatural side of the Marvel Universe. It launches on Steam on Friday, Dec. 2, with GeForce NOW members getting into the action later this month.

Marvel’s Midnight Suns
A new Sun must rise.

Play as “The Hunter,” a legendary demon slayer with a mysterious past and the first-ever customizable superhero in the Marvel Universe. Put together a team of legendary Marvel heroes, including Scarlet Witch, Spider-Man, Wolverine, Blade and Captain America.

These heroes must fight together to stop the Mother of Demons from completing an ancient prophecy. In revolutionary card-based tactical battles, players can use ability cards on enemies, themselves or the environment.

Stay tuned for updates on the game’s release on GeForce NOW.

Step Foot Onto the Battlefield

Prepare to charge into Battlefield 2042, the first-person shooter that marks the return to the iconic all-out warfare of the widely popular franchise from Electronic Arts. Available today along with the latest update, “Season 3: Escalation,” the game marks the 19th title from EA to join GeForce NOW.

Adapt and overcome in a near-future world transformed by disorder. Choose your role on the battlefield with class specialists and form a squad to bring a cutting-edge arsenal into dynamically changing battlegrounds of unprecedented scale and epic destruction.

With RTX ON, EA and DICE introduced ray-traced ambient occlusion in Battlefield 2042. This accurately adds shadows where game elements occlude light, whether between a soldier and a wall, a tank and the tarmac, or foliage and the ground. Members can use NVIDIA DLSS to get the definitive PC experience, with maxed-out graphics, high frame rates and uncompromised image quality.

The game comes with a special reward for GeForce NOW members. To opt in and receive rewards, log in to your NVIDIA account and select “GEFORCE NOW” from the header, then scroll down to “REWARDS” and click the “UPDATE REWARDS SETTINGS” button. Check the box in the dialogue window that shows up to start receiving special offers and in-game goodies.

Experience the action across compatible devices and take gaming to the max with all the perks of an RTX 3080 membership, like 4K resolution, RTX ON and maximized gaming sessions.

The More the Merrier

The Knight Witch on GeForce NOW
Cast devastating card-based spells, forge close bonds and make moral choices — all in a quest to save your home.

Members can look for the following six games available to play this week:

  • The Knight Witch (New release on Steam, Nov. 29)
  • Warhammer 40,000: Darktide (New release on Steam, Nov. 30)
  • Fort Triumph (Free on Epic Games Store, Dec. 1-8)
  • Battlefield 2042 (Steam and Origin)
  • Alien Swarm: Reactive Drop (Steam)
  • Stormworks: Build and Rescue (Steam)

Then it’s time to unwrap the rest of the list of 22 games coming this month:

  • Marvel’s Midnight Suns (New release on Steam, coming soon)
  • Art of the Rail (New release on Steam, Dec. 4)
  • Swordship (New release on Steam, Dec. 5)
  • Knights of Honor II: Sovereign (New release on Steam, Dec. 6)
  • Chained Echoes (New release on Steam, Dec. 7)
  • IXION (New release on Steam, Dec. 7)
  • Togges (New release on Steam, Dec. 7)
  • SAMURAI MAIDEN (New release on Steam, Dec. 8)
  • Wavetale (New release on Steam, Dec. 12)
  • Master of Magic (New release on Steam, Dec. 13)
  • Brawlhalla (Ubisoft Connect)
  • Carrier Command 2 (Steam)
  • Cosmoteer: Starship Architect & Commander (Steam)
  • Dakar Desert Rally (Epic Games Store)
  • Dinkum (Steam)
  • Floodland (Steam)
  • Project Hospital (Steam)

Nothing Left Behind From November

On top of the 26 games announced in November, members can play an extra 10 games that were added to GeForce NOW last month.

And good things come in small packages — for the perfect stocking stuffer or last-minute gift, look no further than GeForce NOW. Physical or digital gift cards are always available, and tomorrow is the last day to get in on the “Green Thursday Black Friday” deal.

Before you start off a super weekend of gaming, there’s only one choice left to make. Let us know your pick on Twitter or in the comments below.

The post GFN Thursday Dashes Into December With 22 New Games, Including ‘Marvel’s Midnight Suns’ Streaming Soon appeared first on NVIDIA Blog.

Qubit Pharmaceuticals Accelerates Drug Discovery With Hybrid Quantum Computing

The promise of quantum computing is to solve unsolvable problems. And companies are already making headway with hybrid approaches — those that combine classical and quantum computing — to tackle challenges like drug discovery for incurable diseases.

By accelerating drug molecule simulation and modeling with hybrid quantum computing, startup Qubit Pharmaceuticals is significantly reducing the time and investment needed to identify promising treatments in oncology, inflammatory diseases and antivirals.

Qubit is building a drug discovery platform using the NVIDIA QODA programming model for hybrid quantum-classical computers and the startup’s Atlas software suite. Atlas creates detailed simulations of physical molecules, accelerating calculations by a factor of 100,000 compared to traditional research methods.

Founded in 2020, the Paris and Boston-based company is a member of NVIDIA Inception, a program that offers go-to-market support, expertise and technology for cutting-edge startups.

Qubit has one of France’s largest GPU supercomputers for drug discovery, powered by NVIDIA DGX systems. The startup aims for pharmaceutical companies to begin testing their first drug candidates discovered through its GPU-accelerated research next year.

“By combining NVIDIA’s computational power and leading-edge software with Qubit’s simulation and molecular modeling capabilities, we are confident in our ability to dramatically reduce drug discovery time and cut its cost by a factor of 10,” said Robert Marino, president of Qubit Pharmaceuticals. “This unique collaboration should enable us to develop the first quantum physics algorithms applied to drug discovery.”

Tapping Unprecedented Computational Capabilities 

Computational drug discovery involves generating high-resolution simulations of potential drug molecules and predicting how well those molecules might bind to a target protein in the body.

For accurate results, researchers need to perform massive sampling, simulating hundreds of different conformations — possible spatial arrangements of a molecule’s atoms. They must also correctly model molecules’ force fields, the electric charges that predict affinity, or how a molecule will bind to another.
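As a rough illustration of that sampling loop, the sketch below scores random conformations of a toy four-atom system with a simplified Coulomb-plus-Lennard-Jones energy. The charges, constants and molecule are invented for demonstration and are unrelated to Qubit’s actual Atlas force fields:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_energy(positions, charges, epsilon=0.1, sigma=1.0):
    """Sum pairwise Coulomb and Lennard-Jones terms (arbitrary units)."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            coulomb = charges[i] * charges[j] / r
            lj = 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
            energy += coulomb + lj
    return energy

charges = np.array([0.4, -0.4, 0.2, -0.2])  # made-up partial charges

# "Massive sampling": generate many candidate conformations, score each,
# and keep the lowest-energy spatial arrangement.
conformations = rng.normal(scale=2.0, size=(500, 4, 3))
energies = np.array([pair_energy(c, charges) for c in conformations])
best = conformations[np.argmin(energies)]
print(f"lowest energy found: {energies.min():.3f}")
```

Real pipelines sample far more conformations with far richer force-field terms, which is why the GPU acceleration described above matters.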

This simulation and modeling requires high-performance computing, so Qubit selected an in-house supercomputer built with NVIDIA DGX systems and other NVIDIA-accelerated servers, totaling 200 NVIDIA Tensor Core GPUs. The supercomputer runs Qubit’s Atlas software, performing in just a few hours calculations that would take several years with conventional methods.

Atlas models quantum physics at the microscopic level to achieve maximum accuracy. The Qubit team is adopting NVIDIA QODA to explore the hybrid use of GPU-accelerated supercomputers and quantum computers, where QPUs, or quantum processing units, could one day speed up key software kernels for molecular modeling.

Using the NVIDIA cuQuantum SDK, Qubit’s developers can simulate quantum circuits, allowing the team to design algorithms ready to run on future quantum computers.
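Conceptually, simulating a quantum circuit means multiplying a state vector by gate matrices. The toy sketch below builds a two-qubit Bell state with plain NumPy to show the idea; it is not the cuQuantum API, which GPU-accelerates exactly this kind of math at much larger scale:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)    # controlled-NOT gate

# Two-qubit register initialized to |00>.
state = np.zeros(4)
state[0] = 1.0

# Apply H to the first qubit, then CNOT: the Bell state (|00> + |11>)/sqrt(2).
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

probabilities = state ** 2
print(probabilities)   # ~[0.5, 0, 0, 0.5]
```

A statevector of n qubits has 2^n amplitudes, so circuit simulation grows exponentially — the reason GPU-accelerated simulators are needed to design algorithms for future quantum hardware.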

AI for Every Stage of Drug Discovery

Qubit estimates that while conventional research methods require pharmaceutical developers to start by synthesizing an average of 5,000 drug compounds before preclinical testing to bring a single drug to market, a simulation-based drug discovery approach could reduce the figure to about 200 — saving hundreds of millions of dollars and years of development time.

The company’s Atlas software includes AI algorithms for every stage of the drug discovery cycle. To support target characterization, where researchers analyze a protein that plays a role in disease, Atlas supports molecular dynamics simulations at microsecond timescales — helping scientists identify new pockets for drug molecules to bind with the protein.

During drug candidate screening and validation, researchers can use AI models that help narrow the field of potential molecules and generate novel compounds. Qubit is also developing additional filters that predict a candidate molecule’s druggability, safety and cross-reactivity.

Learn more about Qubit’s HPC and quantum-accelerated molecular dynamics software from company co-founders Jean-Philip Piquemal and Louis Lagardère through NVIDIA On-Demand.

Main image courtesy of Qubit Pharmaceuticals.

Siemens Taps Omniverse Replicator on AWS for Synthetic Data Generation to Accelerate Defect Detection Model Development by 5X

Industrial leader Siemens is accelerating development of defect detection models with 3D synthetic data generation from NVIDIA Omniverse, the latest manufacturing gains to emerge from an extended partnership for the industrial metaverse that aims to advance digital twins.

The Siemens Xcelerator and NVIDIA Omniverse platforms are building connections to enable full-design-fidelity, live digital twins that connect software-defined AI systems from edge to cloud.

Europe’s largest industrial manufacturer manages a lot of moving parts, so AI-driven defect detection promises to boost quality assurance and yield at massive scale.

But building AI models requires hefty amounts of data, and producing labeled datasets for training models to detect defects is a time-consuming and expensive process. In most cases, such data may not cover all the types of defects or their locations.

“Using NVIDIA Replicator and Siemens SynthAI technology, we can procedurally generate sets of photorealistic images using the digital models of our products and production resources and an integrated training pipeline to train ready-to-use models. This speeds up our set-up time for AI inspection models by a factor of five,” said Maximilian Metzner, global lead for autonomous manufacturing systems for electronics at GWE.

As a result, Siemens has begun tapping into NVIDIA Omniverse Replicator running on Amazon EC2 G5 instances for synthetic data generation, accelerating its AI model development times from taking “months” to “days,” according to the company.

Synthetic data is turbocharging model development. It’s boosting data sets for everything from German company Festo’s robotic arm work to efforts at Amazon Robotics using synthetic data to train robots to identify packages.

At Siemens, synthetic data generation is being used beyond defect detection to assist in areas including, but not limited to, robotic bin picking, safety monitoring, welding and wiring inspections, and checking kits of parts.

“The better the synthetic data you have, the less real data you need — obtaining real data is a hassle, so you want to reduce that as much as possible without sacrificing accuracy,” said Alex Greenberg, director of advanced robotics simulation at Siemens Digital Industries Software.

Inspecting Motion Control Devices

The Siemens Motion Control Business Unit produces inverters, drive controllers and motors for more than 30,000 customers worldwide. The lead electronics plant, GWE, based in Erlangen, Germany, has been working on AI-enabled computer vision for defect detection using custom methods and different modes of synthetic data generation.

Common synthetic data generation methods, however, weren’t sufficient for production-ready robustness in some use cases, leading to a need for real data acquisition and labeling, which could take months.

GWE worked with the Siemens Digital Industries Software division to find a better way to produce datasets.

“For many industrial use cases, products are changing rapidly. Materials are changing rapidly. It needs to be automated in a fast way and without a lot of know-how from the endpoint engineer,” said Zac Mann, advanced robotics simulation lead at Siemens Digital Industries Software.

Catching Printed Circuit Board Defects

The challenge at GWE is to catch defects early in the ramp-up of new products and production lines. Waiting for real errors to happen just to enhance the training datasets is not an option.

One area of focus for defects in a printed circuit board (PCB) is examining the thermal paste that’s applied to some components on the PCB in order to help transfer heat quickly to the attached heatsink, away from the components.

To catch PCB defects, the Siemens Digital Industries Software team took another approach by relying on synthetic data driven by Omniverse Replicator.

With Omniverse, a platform for building custom 3D pipelines and simulating virtual worlds, Siemens can generate scenarios and much more realistic images easily, aided with RTX technology-enabled physics-based rendering and materials.

This enables Siemens to develop more quickly and smoothly, closing the gap between simulation and reality, said Mann.

“Using Omniverse Replicator and Siemens SynthAI technology, we can procedurally generate sets of photorealistic images using the digital models of our products and production resources and an integrated training pipeline to train ready-to-use models. This speeds up our set-up time for AI inspection models by a factor of five and increases their robustness massively,” said Maximilian Metzner, global lead for autonomous manufacturing systems for electronics at GWE.

Tapping Into Randomization With SynthAI

GWE engineers can now take a 3D CAD model of the PCB and import that into Siemens’ SynthAI tool. SynthAI is designed to build data sets for training AI models.

Tapping into Replicator, SynthAI can access its powerful randomization features to vary the sizes and locations of defects, change lighting, color, texture and more to develop a robust dataset.

Once data is generated with Replicator, it can be run through a defect detection model for initial training. This enables GWE engineers to quickly test and iterate on models, requiring only a small set of data to begin.
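The randomize-then-label idea can be sketched in a few lines — here with plain NumPy arrays standing in for rendered images, purely as an illustration of the concept rather than the Replicator or SynthAI APIs:

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_sample(size=64):
    """Return one synthetic 'image' plus its defect bounding-box label."""
    brightness = rng.uniform(0.3, 0.9)            # randomized lighting
    image = np.full((size, size), brightness)
    w, h = rng.integers(3, 10, size=2)            # randomized defect size
    x = rng.integers(0, size - w)                 # randomized defect location
    y = rng.integers(0, size - h)
    image[y:y + h, x:x + w] = 0.0                 # paint the defect
    return image, (int(x), int(y), int(w), int(h))

# Every sample comes with a perfect label for free -- the core advantage
# of synthetic data over hand-annotating real photos.
dataset = [synth_sample() for _ in range(100)]
images, labels = zip(*dataset)
print(len(images), images[0].shape)
```

In production, the randomized quantities are materials, camera poses and physically based lighting over real CAD geometry, but the pattern — vary the scene, emit the label alongside — is the same.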

“This gives you visibility earlier into the design phase, and it can shorten time to market, which is very important,” said Greenberg.

Get started using NVIDIA Omniverse Replicator.

3D Artist and Educator Hsin-Chien Huang Takes VR to the World Stage This Week ‘In the NVIDIA Studio’

3D artist, virtual reality expert, storyteller and educator Hsin-Chien Huang shares his unique creator journey and award-winning artwork Samsara this week In the NVIDIA Studio.

A Journey Unlike Any Other

Huang is a distinguished professor in the Department of Design at National Taiwan Normal University.

His creative journey included overcoming a number of obstacles, starting at age 4, when he lost sight in his right eye. His eyesight was impaired for over a decade before he regained it thanks to a Sri Lankan cornea donor.

This singular event proved inspirational, cementing virtual reality as his primary creative field, as it allows him to share with others the world as he uniquely sees it.

When he was getting his driver’s license, Huang registered as an organ donor, imagining that the cornea in his right eye would continue its journey to others who may receive it after his death.

Deep in the journey of ‘Samsara.’

In Samsara, one’s consciousness can travel within different individuals and animals.

Color, materials, sound and music are critical story elements that drive the narratives of his artwork, Huang said.

How did we get here?

“Discussing with musicians about adding sound to works always brings me new ideas to revise stories,” he said. “Elements may influence one another, and the process is like an upward spiral where each element develops and fosters each other simultaneously, slowly shaping the story.”

Cool, But How?

Working in VR can often be a nonlinear experience. Huang spends considerable time prototyping and iterating ideas to ensure that they’re feasible and can be shared.

He and his team will program and create multiple 3D animations and interactions. This helps them examine whether their works convey the exact concept, evoking the emotions they hoped for.

Parametric modeling allows for faster, wide-scale edits.

The team makes use of various parametric modeling tools in Autodesk Maya, Houdini, iClone and Unity. The key in setting up 3D geometric objects is that shapes can be changed once parameters such as dimensions or curvatures are modified — removing the need to reshape the model from scratch.

This saves artists lots of time — especially in the conceptual stage — and is critical to the team’s workflow, Huang said.
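The idea is easy to see in miniature: in the hypothetical sketch below, a vase is defined entirely by parameters, so editing a single curvature value regenerates the whole surface. This is a concept demo, not tied to the Maya, Houdini, iClone or Unity APIs:

```python
import numpy as np

def vase_vertices(height=2.0, radius=1.0, curvature=0.3, segments=32, rings=16):
    """Generate surface-of-revolution vertices for a parametric vase."""
    zs = np.linspace(0.0, height, rings)
    # Bulging wall: the radius at each ring is a function of the parameters.
    profile = radius + curvature * np.sin(np.pi * zs / height)
    thetas = np.linspace(0.0, 2 * np.pi, segments, endpoint=False)
    verts = [(r * np.cos(t), r * np.sin(t), z)
             for r, z in zip(profile, zs) for t in thetas]
    return np.array(verts)

# Change one parameter and the entire mesh regenerates --
# no manual re-sculpting required.
slim = vase_vertices(curvature=0.1)
bulgy = vase_vertices(curvature=0.8)
print(slim.shape)   # (512, 3): 16 rings x 32 segments
```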

“We use Unity for integration and interaction, and Xsens and Vicon for motion capture,” he said. Unity’s light baking and Autodesk Maya’s Arnold renderer both require powerful GPUs, and his GeForce RTX 3070 GPU was equal to the task.

The team’s photogrammetry software in RealityCapture also benefits greatly from NVIDIA CUDA acceleration.

Textures applied in Unity.

“Nowadays, a powerful GeForce RTX GPU is an indispensable tool for digital artists.” — Hsin-Chien Huang 

“Although the resolutions of these scanned models are low, they have the aesthetic of pixel art,” Huang said. He processed these models in Unity to give them a unique digital style. NVIDIA DLSS technology powered by his GeForce RTX GPU increases the interactivity of the viewport by using AI to upscale frames rendered at lower resolution while still retaining high-fidelity detail.

When it comes to creating textures, Huang recommends Adobe Substance 3D Painter, which can rapidly create quality, realistic textures for prototyping. RTX-accelerated light and ambient occlusion baking optimize his assets in mere seconds.

Photorealistic details made even more realistic with Topaz Labs Gigapixel AI.

Huang also uses Topaz Labs Gigapixel AI, which uses deep learning to offer better photo quality. Once again, his RTX GPU accelerates the AI-powered sharpening of images while retaining high-fidelity details.

Huang is grateful for advancements in technology and their impact on creative possibilities.

“Nowadays, a powerful GeForce RTX GPU is an indispensable tool for digital artists,” he said.

Huang’s increasing popularity and extraordinary talent led him to Hollywood. In 2018, Huang performed a VR demo on hit TV show America’s Got Talent, which left an enormous impression on the judges and audience.

It was the first real-time motion capture and VR experience to be presented on a live stage. Huang said the pressure was intense during his performance as it was a live show and no mistakes could be tolerated.

“I could still sense the thrill and excitement on stage,” he recalled.

VR expert, storyteller and educator Hsin-Chien Huang.

Check out more of Huang’s artwork on his website.

Carry on, Carry on #WinterArtChallenge

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Like @RippaSats and his fun celebration of penguins.

Be sure to tag #WinterArtChallenge to join.

Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.

NVIDIA Wins NeurIPS Awards for Research on Generative AI, Generalist AI Agents

Two NVIDIA Research papers — one exploring diffusion-based generative AI models and another on training generalist AI agents — have been honored with NeurIPS 2022 Awards for their contributions to the field of AI and machine learning.

These are among more than 60 talks, posters and workshops with NVIDIA authors being presented at the NeurIPS conference, taking place this week in New Orleans and next week online.

Synthetic data generation — for images, text or video — is a key theme across several of the NVIDIA-authored papers. Other topics include reinforcement learning, data collection and augmentation, weather models and federated learning.

“AI is an incredibly important technology, and NVIDIA is making fast progress across the gamut — from generative AI to autonomous AI agents,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “In generative AI, we are not only advancing our theoretical understanding of the underlying models, but are also making practical contributions that will reduce the effort of creating realistic virtual worlds and simulations.”

Reimagining the Design of Diffusion-Based Generative Models 

Diffusion-based models have emerged as a groundbreaking technique for generative AI. NVIDIA researchers won an Outstanding Main Track Paper award for work that analyzes the design of diffusion models, proposing improvements that can dramatically improve the efficiency and quality of these models.

The paper breaks down the components of a diffusion model into a modular design, helping developers identify processes that can be adjusted to improve the performance of the entire model. The researchers show that their modifications enable record scores on a metric that assesses the quality of AI-generated images.
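To see what “modular” means here, the sketch below separates the two pieces a sampler needs — a noise schedule and a denoiser — and runs deterministic Euler steps along the probability-flow ODE. The denoiser is an analytic toy (data concentrated at the origin), not a trained network, and the schedule values are purely illustrative:

```python
import numpy as np

def toy_denoiser(x, sigma):
    """Ideal denoiser when the data distribution is a point mass at 0."""
    return np.zeros_like(x)

def euler_sample(denoiser, sigmas, x):
    """Deterministic Euler steps along the probability-flow ODE."""
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoiser(x, s_cur)) / s_cur      # ODE derivative dx/dsigma
        x = x + (s_next - s_cur) * d
    return x

# Swappable pieces: change the schedule or the denoiser independently.
sigmas = np.geomspace(80.0, 0.002, 20)            # high noise -> low noise
x0 = np.array([40.0, -16.0])                      # start from pure noise
xT = euler_sample(toy_denoiser, sigmas, x0)
print(xT)   # shrinks toward the data (the origin)
```

Because the schedule, the ODE solver and the denoising network are separate modules, each can be tuned or replaced on its own — the kind of decomposition the award-winning paper analyzes.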

Training Generalist AI Agents in a Minecraft-Based Simulation Suite

While researchers have long trained autonomous AI agents in game environments such as StarCraft, Dota and Go, these agents are usually specialists in only a few tasks. So NVIDIA researchers turned to Minecraft, the world’s most popular game, to develop a scalable training framework for a generalist agent — one that can successfully execute a wide variety of open-ended tasks.

Dubbed MineDojo, the framework enables an AI agent to learn Minecraft’s flexible gameplay using a massive online database of more than 7,000 wiki pages, millions of Reddit threads and 300,000 hours of recorded gameplay (shown in image at top). The project won an Outstanding Datasets and Benchmarks Paper Award from the NeurIPS committee.

As a proof of concept, the researchers behind MineDojo created a large-scale foundation model, called MineCLIP, that learned to associate YouTube footage of Minecraft gameplay with the video’s transcript, in which the player typically narrates the onscreen action. Using MineCLIP, the team was able to train a reinforcement learning agent capable of performing several tasks in Minecraft without human intervention.
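The CLIP-style matching at the heart of that association can be sketched with stand-in embeddings — real ones would come from the trained MineCLIP model, but the scoring step is just a cosine-similarity matrix:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-in embeddings: clip i and transcript i point in nearly the same
# direction; a trained model would learn these from gameplay footage.
video_emb = np.eye(4, 8)                      # 4 clips, embedding dim 8
text_emb = normalize(np.eye(4, 8) + 0.05)     # paired transcripts, perturbed

similarity = video_emb @ text_emb.T           # cosine-similarity matrix
best_match = similarity.argmax(axis=1)
print(best_match)   # [0 1 2 3]: each clip matches its own transcript
```

A model trained so the diagonal of this matrix dominates can then serve as a reward signal for reinforcement learning, which is how the team trained agents without human intervention.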

Creating Complex 3D Shapes to Populate Virtual Worlds

Also at NeurIPS is GET3D, a generative AI model that instantly synthesizes 3D shapes based on the category of 2D images it’s trained on, such as buildings, cars or animals. The AI-generated objects have high-fidelity textures and complex geometric details — and are created in a triangle mesh format used in popular graphics software applications. This makes it easy for users to import the shapes into 3D renderers and game engines for further editing.

3D objects generated by GET3D

Named for its ability to Generate Explicit Textured 3D meshes, GET3D was trained on NVIDIA A100 Tensor Core GPUs using around 1 million 2D images of 3D shapes captured from different camera angles. The model can generate around 20 objects a second when running inference on a single NVIDIA GPU.

The AI-generated objects could be used to populate 3D representations of buildings, outdoor spaces or entire cities — digital spaces designed for industries such as gaming, robotics, architecture and social media.

Improving Inverse Rendering Pipelines With Control Over Materials, Lighting

At the most recent CVPR conference, held in New Orleans in June, NVIDIA Research introduced 3D MoMa, an inverse rendering method that enables developers to create 3D objects composed of three distinct parts: a 3D mesh model, materials overlaid on the model, and lighting.

The team has since achieved significant advancements in untangling materials and lighting from the 3D objects — which in turn improves creators’ abilities to edit the AI-generated shapes by swapping materials or adjusting lighting as the object moves around a scene.

The work, which relies on a more realistic shading model that leverages NVIDIA RTX GPU-accelerated ray tracing, is being presented as a poster at NeurIPS.

Enhancing Factual Accuracy of Language Models’ Generated Text 

Another accepted paper at NeurIPS examines a key challenge with pretrained language models: the factual accuracy of AI-generated text.

Language models trained for open-ended text generation often come up with text that includes nonfactual information, since the AI is simply making correlations between words to predict what comes next in a sentence. In the paper, NVIDIA researchers propose techniques to address this limitation, which is necessary before such models can be deployed for real-world applications.

The researchers built the first automatic benchmark to measure the factual accuracy of language models for open-ended text generation, and found that bigger language models with billions of parameters were more factual than smaller ones. The team proposed a new technique, factuality-enhanced training, along with a novel sampling algorithm that together help train language models to generate accurate text — and demonstrated a reduction in the rate of factual errors from 33% to around 15%. 

There are more than 300 NVIDIA researchers around the globe, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. Learn more about NVIDIA Research and view NVIDIA’s full list of accepted papers at NeurIPS.

MAP Once, Run Anywhere: MONAI Introduces Framework for Deploying Medical Imaging AI Apps

Delivering AI-accelerated healthcare at scale will take thousands of neural networks working together to cover the breadth of human physiology, diseases and even hospital operations — a significant challenge in today’s smart hospital environment.

MONAI, an open-source medical-imaging AI framework with more than 650,000 downloads, accelerated by NVIDIA, is making it easier to integrate these models into clinical workflows with MONAI Application Packages, or MAPs.

Delivered through MONAI Deploy, a MAP is a way of packaging an AI model that makes it easy to deploy in an existing healthcare ecosystem.

“If someone wanted to deploy several AI models in an imaging department to help experts identify a dozen different conditions, or partially automate the creation of medical imaging reports, it would take an untenable amount of time and resources to get the right hardware and software infrastructure for each one,” said Dr. Ryan Moore at Cincinnati Children’s Hospital. “It used to be possible, but not feasible.”

MAPs simplify the process. When a developer packages an app using the MONAI Deploy Application software development kit, hospitals can easily run it on premises or in the cloud. The MAPs specification also integrates with healthcare IT standards such as DICOM for medical imaging interoperability.

“Until now, most AI models would remain in an R&D loop, rarely reaching patient care,” said Jorge Cardoso, chief technology officer at the London Medical Imaging & AI Centre for Value-Based Healthcare. “MONAI Deploy will help break that loop, making impactful clinical AI a more frequent reality.”

MONAI Deploy Adopted by Hospitals, Healthcare Startups

Healthcare institutions, academic medical centers and AI software developers around the world are adopting MONAI Deploy, including:

  • Cincinnati Children’s Hospital: The academic medical center is creating a MAP for an AI model that automates total cardiac volume segmentation from CT images, aiding pediatric heart transplant patients in a project funded by the National Institutes of Health.
  • National Health Service in England: NHS Trusts have deployed the MONAI-based AI Deployment Engine platform, known as AIDE, across four hospitals to provide AI-enabled disease-detection tools to healthcare professionals serving 5 million patients a year.
  • Qure.ai: A member of the NVIDIA Inception program for startups, Qure.ai develops medical imaging AI models for use cases including lung cancer, traumatic brain injuries and tuberculosis. The company is using MAPs to package its solutions for deployment, accelerating its time to clinical impact.
  • SimBioSys: The Chicago-based Inception startup builds 3D virtual representations of patients’ tumors and is using MAPs for precision medicine AI applications that can help predict how a patient will respond to a specific treatment.
  • University of California, San Francisco: UCSF is developing MAPs for several AI models, with applications including hip fracture detection, liver and brain tumor segmentation, and knee and breast cancer classification.

Putting Medical Imaging AI on the MAP

The MAP specification was developed by the MONAI Deploy working group, a team of experts from more than a dozen medical imaging institutions, to benefit AI app developers as well as the clinical and infrastructure platforms that run AI apps.

For developers, MAPs can help accelerate AI model evolution by helping researchers easily package and test their models in a clinical environment. This allows them to collect real-world feedback that helps improve the AI.

For cloud service providers, supporting MAPs — which were designed using cloud-native technologies — enables researchers and companies using MONAI Deploy to run AI applications on their platform, either by using containers or with native app integration. Cloud platforms integrating MONAI Deploy and MAPs include:

  • Amazon HealthLake Imaging: The MAP connector has been integrated with the HealthLake Imaging service, allowing clinicians to view, process and segment medical images in real time.
  • Google Cloud: Google Cloud’s Medical Imaging Suite, designed to make healthcare imaging data more accessible, interoperable and useful, has integrated MONAI into its platform to enable clinicians to deploy AI-assisted annotation tools that help automate the highly manual and repetitive task of labeling medical images.
  • Nuance Precision Imaging Network, powered by Microsoft Azure: Nuance and NVIDIA recently announced a partnership bringing together MONAI and the Nuance Precision Imaging Network, a cloud platform that provides more than 12,000 healthcare facilities with access to AI-powered tools and insights.
  • Oracle Cloud Infrastructure: Oracle and NVIDIA recently announced a collaboration to bring accelerated compute solutions for healthcare, including MONAI Deploy, to Oracle Cloud Infrastructure. Developers can start building MAPs with MONAI Deploy today using NVIDIA containers on the Oracle Cloud Marketplace.

Get started with MONAI and discover how NVIDIA is helping build AI-powered medical imaging ecosystems at this week’s RSNA conference.

The post MAP Once, Run Anywhere: MONAI Introduces Framework for Deploying Medical Imaging AI Apps appeared first on NVIDIA Blog.

NVIDIA Partners With NHS Trusts to Deploy AI Platform in UK Hospitals

A consortium of 10 trusts within the National Health Service — the publicly funded healthcare system in England — is now deploying the MONAI-based AIDE platform across four of its hospitals, providing AI-enabled disease-detection tools to healthcare professionals serving 5 million patients a year.

AIDE, short for AI Deployment Engine, is expected to be rolled out next year across 11 NHS hospitals serving 18 million patients, bringing AI capabilities to clinicians. It’s built on MONAI, an open-source medical imaging AI framework co-developed by NVIDIA and the AI Centre, which allows AI applications to interface with hospital systems.

Together, MONAI and AIDE enable safe and effective validation, deployment and evaluation of medical imaging AI models, which the NHS will apply in diagnosing and treating cancers, stroke, dementia and other conditions. The platform is being deployed at the following facilities: Guy’s and St Thomas’, King’s College Hospital, East Kent Hospitals University and University College London Hospitals NHS Foundation Trusts.

“Deployment of this infrastructure for clinical AI tools is a hugely exciting step in integrating AI into healthcare services,” said James Teo, professor of neurology and data science at King’s College Hospital NHS. “These platforms will provide a scalable way for clinicians to deploy healthcare AI tools to support decision-making to improve the speed and precision of patient care. This is the start of a digital transformation journey with strong, safe and open foundations.”

MONAI Making Hospital Integration Easier

Introduced in 2019, MONAI is reducing the complexity of medical workflows from R&D to the clinic. It allows developers to easily build and deploy AI applications, resulting in a model ready for clinical integration, and making it easier to interpret medical exams and unlock new levels of knowledge about patients.

MONAI provides deep learning infrastructure and workflows optimized for medical imaging. With more than 650,000 downloads, MONAI is used by leading healthcare institutions, including Guy’s and St Thomas’ Hospital and King’s College Hospital in the U.K., to harness the power of medical imaging data and streamline the process of building AI models.

“Across the healthcare ecosystem, researchers, hospitals and startups are realizing the power of incorporating a streamlined AI pipeline into their work,” said Haris Shuaib, AI transformation lead at the AI Centre. “The open-source MONAI ecosystem is standardizing hundreds of AI algorithms for maximum interoperability and impact, enabling their deployment in just a few weeks instead of three-to-six months.”

Built in collaboration with the AI Centre for Value Based Healthcare — a consortium of universities, hospitals and industry partners led by King’s College London and Guy’s and St Thomas’ NHS Foundation Trust — AIDE brings the capabilities of AI to clinicians. This solution equips clinicians with improved information about patients, making healthcare data more accessible and interoperable, in order to improve patient care.

The AI Centre has already developed algorithms to improve diagnosis of COVID-19, breast cancer, brain tumor, stroke detection and dementia risk. AIDE connects approved AI algorithms to a patient’s medical record seamlessly and securely, with the data never leaving the hospital trust.

Once the clinical data has been analyzed, the results are sent back to the electronic healthcare record to support clinical decision-making. This provides another valuable data point for clinical multidisciplinary teams when reviewing patients’ cases. AIDE is expected to speed up this process to the benefit of patients.

“The AI Centre has done invaluable work towards integrating AI into national healthcare. Deploying MONAI is a critical milestone in our journey to enable the use of safe and robust AI innovations within the clinic,” said Professor Sebastien Ourselin, deputy director of the AI Centre. “This could only be achieved through our strong partnerships between academic and industry leaders like NVIDIA.”

The code for AIDE will be made open source and published on GitHub on Dec. 7. AIDE will be displayed in the South Hall of the McCormick Place convention center in Chicago as part of the RSNA Imaging AI in Practice demonstration.

Get started with MONAI and watch the NVIDIA RSNA special address.

Turn Black Friday Into Green Thursday With New GeForce NOW Deal

Black Friday is now Green Thursday with a great deal on GeForce NOW this week.

For a limited time, get a free $20-value GeForce NOW membership gift card with every purchase of a $50-value GeForce NOW membership gift card. Treat yourself and a buddy to high-performance cloud gaming — there’s never been a better time to share the love of GeForce NOW.

Plus, kick off a gaming-filled weekend with four new titles joining the GeForce NOW library.

Instant Streaming, Instant Savings

For one week only, from Nov. 23-Dec. 2, purchase a $50-value gift card — good toward a three-month RTX 3080 membership or a six-month Priority membership — and get a bonus $20-value GeForce NOW membership gift card for free, which is good toward a one-month RTX 3080 membership or a two-month Priority membership.

Recipients will be able to redeem these gift cards for the GeForce NOW membership level of their choice. The $20-value free gift card will be delivered as a digital code — providing instant savings for instant streaming. Learn more.

GeForce NOW Green Thursday Gift Card Deal
Green is the new black with this time-limited Black Friday deal.

With a paid membership, gamers get access to stream over 1,400 PC games with longer gaming sessions and real-time ray tracing for supported games across nearly all devices, even those that aren’t game ready. Priority members can stream up to 1080p at 60 frames per second, and RTX 3080 members can stream up to 4K at 60 FPS or 1440p at 120 FPS.

This special offer is valid on $50-value digital or physical gift card purchases, making it a perfect stocking stuffer or last-minute gift. Snag the deal to make Black Friday shopping stress-free this year.

Time to Play

Evil West on GeForce NOW
Evil never sleeps … but it bleeds!

The best way to celebrate a shiny new GeForce NOW membership is with the new games available to stream this GFN Thursday. Start out with Evil West from Focus Entertainment, a vampire-hunting third-person action game set in a fantasy version of the Old West. Play as a lone hunter or co-op with a buddy to explore and eradicate the vampire threat while upgrading weapons and tools along the way.

Check out this week’s new games here:

  • Evil West (New release on Steam)
  • Ship of Fools (New release on Steam)
  • Crysis 2 Remastered (Steam)
  • Crysis 3 Remastered (Steam)

Before you dig into your weekend gaming, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.

What Is a Smart Hospital?

Smart hospitals — which utilize data and AI insights to facilitate decision-making at each stage of the patient experience — can provide medical professionals with insights that enable better and faster care.

A smart hospital uses data and technology to accelerate and enhance the work healthcare professionals and hospital management are already doing, such as tracking hospital bed occupancy, monitoring patients’ vital signs and analyzing radiology scans.

What’s the Difference Between a Smart Hospital and a Traditional Hospital? 

Hospitals are continuously generating and collecting data, much of which is now digitized. This creates an opportunity for them to apply such technologies as data analytics and AI for improved insights.

Data that was once stored as a paper file with a patient’s medical history, lab results and immunization information is now stored as electronic health records, or EHRs. Digital CT and MRI scanners, as well as software including the PACS medical imaging storage system, are replacing analog radiology tools. And connected sensors in hospital rooms and operating theaters can record multiple continuous streams of data for real-time and retrospective analysis.

As hospitals transition to these digital tools, they’re poised to make the shift from a regular hospital to a smart hospital — one that not only collects data, but also analyzes it to provide valuable, timely insights.

Natural language processing models can rapidly pull insights from complex pathology reports to support cancer care. Data science can monitor emergency room wait times to resolve bottlenecks. AI-enabled robotics can assist surgeons in the operating room. And video analytics can detect when hand sanitizer supplies are running low or a patient needs attention — such as detecting the risk of falls in the hospital or at home.

What Are Some Benefits of a Smart Hospital?

Smart hospital technology benefits healthcare systems, medical professionals and patients in the following ways: 

  • Healthcare providers: Smart hospital data can be used to help healthcare facilities optimize their limited resources, increasing operational efficiency for a better patient-centric approach. Sensors can monitor patients when they’re alone in the room. AI algorithms can help inform which patients should be prioritized based on the severity of their case. And telehealth solutions can help deliver care to patients outside of hospital visits.
  • Clinicians: Smart hospital tools can enable doctors, nurses, medical imaging technicians and other healthcare experts to spend more time focusing on patient care by taking care of routine or laborious tasks, such as writing notes about each patient interaction, segmenting anatomical structures in an MRI or converting doctor’s notes into medical codes for insurance billing. They can also aid clinical decision-making with AI algorithms that provide a second opinion or triage recommendation for individual patients based on historical data.
  • Patients: Smart hospital technology can bring health services closer to the goal of consistent, high-quality patient care — anywhere in the world, from any doctor. Clinicians vary in skill level, areas of expertise, access to resources and time available per patient. By deploying AI and robotics to monitor patterns and automate time-consuming tasks, smart hospitals can allow clinicians to focus on interacting with their patients for a better experience.

How Can I Make My Hospital Smart? 

Running a smart hospital requires an entire ecosystem of hardware and software solutions working in harmony with clinician workflows. To accelerate and improve patient care, every application, device, sensor and AI model in the system must share data and insights across the institution.

Think of the smart hospital as an octopus. Its head is the organization’s secure server that stores and processes the entire facility’s data. Each of its tentacles is a different department — emergency room, ICU, operating room, radiology lab — covered in sensors (octopus suckers) that take in data from their surroundings.

If each tentacle operated in a silo, it would be impossible for the octopus to take rapid action across its entire body based on the information sensed by a single arm. Every tentacle sends data back to the octopus’ central brain, enabling the creature to flexibly respond to its changing environment.

In the same way, the smart hospital is a hub-and-spoke model, with sensors distributed across a facility that can send critical insights back to a central brain, helping inform facility-wide decisions. For instance, if camera feeds in an operating room show that a surgical procedure is almost complete, AI would alert staff in the recovery room to be ready for the patient’s arrival.
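The hub-and-spoke pattern described above can be sketched as a tiny publish-subscribe router, where each department registers interest in events from a central hub. Everything here (the class, event names and payloads) is illustrative, not a real hospital or NVIDIA API:

```python
from collections import defaultdict

class HospitalHub:
    """Toy hub-and-spoke router: departments publish insights to a central
    hub, and other departments subscribe to the events they care about."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

hub = HospitalHub()
alerts = []

# The recovery room listens for the operating room's "procedure ending" signal.
hub.subscribe("procedure_ending",
              lambda p: alerts.append(f"Prep recovery bed for {p['room']}"))

# A hypothetical vision model watching the OR feed publishes the event.
hub.publish("procedure_ending", {"room": "OR-3"})
print(alerts)  # ['Prep recovery bed for OR-3']
```

Because every department reports through the same hub, a new sensor or model only needs to publish events; it doesn't need to know which other departments will act on them.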

To power smart hospital solutions, medical device companies, academic medical centers and startups are turning to NVIDIA Clara, an end-to-end AI platform that integrates with the entire hospital network — from medical devices running real-time applications to secure servers that store and process data in the long term. It supports edge, data center and cloud infrastructure, numerous software libraries, and a global partner ecosystem to power the coming generation of smart hospitals.

Smart Hospital Operations and Patient Monitoring

A bustling hospital has innumerable moving parts — patients, staff, medicine and equipment — presenting an opportunity for AI automation to optimize operations around the facility.

While a doctor or nurse can’t be at a patient’s side at every moment of their hospital stay, a combination of intelligent video analytics and other smart sensors can closely monitor patients, alerting healthcare providers when the person is in distress and needs attention.

In an ICU, for instance, patients are connected to monitoring devices that continuously collect vital signs. Many of these devices beep constantly with various alerts, which can lead healthcare practitioners to overlook the alarm of a single sensor.

By instead aggregating the streaming data from multiple devices into a single feed, AI algorithms can analyze the data in real time, helping more quickly detect if a patient’s condition takes a sudden turn for the better or worse.
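As a rough illustration of that aggregation step, the sketch below merges timestamped readings from two hypothetical monitors into one time-ordered feed, then flags values outside a simple per-vital range. The device names, readings and thresholds are made up for the example and are not clinical values:

```python
import heapq

# Two hypothetical monitors, each emitting (timestamp, vital, value) readings.
heart_rate = [(0, "hr", 72), (10, "hr", 74), (20, "hr", 132)]
pulse_ox   = [(5, "spo2", 98), (15, "spo2", 97), (25, "spo2", 88)]

# Illustrative normal ranges, not clinical thresholds.
NORMAL = {"hr": (50, 120), "spo2": (92, 100)}

def merged_alerts(*streams):
    """Merge per-device streams into one time-ordered feed and flag
    readings that fall outside their vital's normal range."""
    alerts = []
    for ts, vital, value in heapq.merge(*streams):
        lo, hi = NORMAL[vital]
        if not lo <= value <= hi:
            alerts.append((ts, vital, value))
    return alerts

print(merged_alerts(heart_rate, pulse_ox))
# [(20, 'hr', 132), (25, 'spo2', 88)]
```

A real system would apply a trained model to the merged feed rather than fixed thresholds, but the shape of the pipeline — many device streams in, one prioritized alert stream out — is the same.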

The Houston Methodist Institute for Academic Medicine is working with Mark III Systems, an Elite member of the NVIDIA Partner Network, to deploy an AI-based tool called DeepStroke that can detect stroke symptoms in triage more accurately and earlier based on a patient’s speech and facial movements. By integrating these AI models into the emergency room workflow, the hospital can more quickly identify the proper treatment for stroke patients, helping ensure clinicians don’t miss patients who would potentially benefit from life-saving treatments.

Using enterprise-grade solutions from Dell and NVIDIA — including GPU-accelerated Dell PowerEdge servers, the NVIDIA Fleet Command hybrid cloud system and the DeepStream software development kit for AI streaming analytics — Inception startup Artisight manages a smart hospital network including over 2,000 cameras and microphones at Northwestern Medicine.

One of Artisight’s models alerts nurses and physicians to patients at risk of harm. Another system, based on indoor positioning system data, automates clinic workflows to maximize staff productivity and improve patient satisfaction. A third detects preoperative, intraoperative and postoperative events to coordinate surgical throughput.

These systems make it easy to add functionality regardless of location: an AI-backed sensor network that monitors hospital rooms to prevent a patient from falling can also detect when hospital supplies are running low, or when an operating room needs to be cleaned. The systems even extend beyond the hospital walls via Artisight’s integrated teleconsult tools to monitor at-risk patients at home.

The last key element of healthcare operations is medical coding, the process of turning a clinician’s notes into a set of alphanumeric codes representing every diagnosis and procedure. These codes are of particular significance in the U.S., where they form the basis for the bills that doctors, clinics and hospitals submit to stakeholders including insurance providers and patients.
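A toy version of that note-to-code step can be sketched as a phrase lookup. Real medical coding models are far more sophisticated, and the table below is purely illustrative (E11.9 and I10 are ICD-10-style diagnosis codes, 71045 a CPT-style procedure code):

```python
# Illustrative phrase-to-code table; real coders handle context, negation
# and many thousands of codes.
PHRASE_TO_CODE = {
    "type 2 diabetes": "E11.9",
    "essential hypertension": "I10",
    "chest x-ray": "71045",
}

def extract_codes(note):
    """Return the billing codes whose trigger phrase appears in the note."""
    text = note.lower()
    return [code for phrase, code in PHRASE_TO_CODE.items() if phrase in text]

note = "Patient with Type 2 diabetes and essential hypertension; ordered chest X-ray."
print(extract_codes(note))  # ['E11.9', 'I10', '71045']
```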

Inception startup Fathom has developed AI models to automate the painstaking process of medical coding, reducing costs while increasing speed and precision. Founded in 2016, the company works with the nation’s largest health systems, billing companies and physician groups, coding over 20 million patient encounters annually.

Medical Imaging in Smart Hospitals

Deep learning first gained popularity as a tool for identifying objects in images, and this was also one of the technology’s earliest uses in healthcare. There are dozens of AI models with regulatory approval in the medical imaging space, helping radiology departments in smart hospitals accelerate the analysis of CT, MRI and X-ray data.

AI can pre-screen scans, flagging areas that require a radiologist’s attention to save time — giving them more bandwidth to look at additional scans or explain results to patients. It can move critical cases like brain bleeds to the top of a radiologist’s worklist, shortening the time to diagnose and treat life-threatening cases. And it can enhance the resolution of radiology images, allowing clinicians to reduce the necessary dosage per patient.
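The worklist reprioritization described above amounts to a priority queue keyed on the model’s criticality score. A minimal sketch, with hypothetical study names and scores:

```python
import heapq

class Worklist:
    """Radiology worklist ordered by an AI model's criticality score."""

    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker preserves arrival order for equal scores

    def add(self, study, criticality):
        # Negate the score so the most critical study pops first.
        heapq.heappush(self._heap, (-criticality, self._count, study))
        self._count += 1

    def next_study(self):
        return heapq.heappop(self._heap)[2]

wl = Worklist()
wl.add("routine chest x-ray", 0.10)
wl.add("suspected brain bleed", 0.97)  # score from a hypothetical triage model
wl.add("knee MRI follow-up", 0.05)
print(wl.next_study())  # suspected brain bleed
```

The critical case jumps the queue while routine studies keep their arrival order, which is exactly the behavior that shortens time-to-diagnosis for life-threatening findings.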

Leading medical imaging companies and researchers are using NVIDIA technology to power next-generation applications that can be used in smart hospital environments.

Siemens Healthineers developed deep learning-based autocontouring solutions, enabling precise contouring of organs at risk in radiation therapy.

And Fujifilm Healthcare uses NVIDIA GPUs to power its Cardio StillShot software, which conducts precise cardiac imaging during a CT scan. To accelerate its work, the team used software including the NVIDIA Optical Flow SDK to estimate pixel-level motion and NVIDIA Nsight Compute to optimize performance.

Startups in NVIDIA Inception, too, are advancing medical imaging workflows with AI, such as Shanghai-based United Imaging Intelligence. The company’s uAI platform empowers devices, doctors and researchers with full-stack, full-spectrum AI applications, covering imaging, screening, follow-up, diagnosis, treatment and evaluation. Its uVision intelligent scanning system runs on the NVIDIA Jetson edge AI platform.

Learn more about startups using NVIDIA AI for medical imaging applications.

Digital and Robotic Surgery in Smart Hospitals

In a smart hospital’s operating room, intelligent video analytics and robotics are embedded to take in data and provide AI-powered alerts and guidance to surgeons.

Medical device developers and startups are working on tools to advance surgical training, help surgeons plan procedures ahead of time, provide real-time support and monitoring during an operation, and aid in post-surgery recordkeeping and retrospective analysis.

Paris-based robotic surgery company Moon Surgical is designing Maestro, an accessible, adaptive surgical-assistant robotics system that works with the equipment and workflows that operating rooms already have in place. The startup has adopted NVIDIA Clara Holoscan to save time and resources, helping compress its development timeline.

Activ Surgical has selected Holoscan to accelerate development of its AI and augmented-reality solution for real-time surgical guidance. The Boston-based company’s ActivSight technology allows surgeons to view critical physiological structures and functions, like blood flow, that cannot be seen with the naked eye.

And London-based Proximie will use Holoscan to enable telepresence in the operating room, bringing expert surgeons and AI solutions into each procedure. By integrating this information into surgical imaging systems, the company aims to reduce surgical complication rates, improving patient safety and care.

Telemedicine — Smart Hospital Technology at Home

Another part of smart hospital technology is ensuring patients who don’t need to be admitted to the hospital can receive care from home through wearables, smartphone apps, video appointments, phone calls and text-based messaging tools. Tools like these reduce the burden on healthcare facilities — particularly with the use of AI chatbots that can communicate effectively with patients.

Natural language processing AI is powering intelligent voice assistants and chatbots for telemedicine at companies like Curai, a member of the NVIDIA Inception global network of startups.

Curai is applying GPU-powered AI to connect patients, providers and care teams via a chat-based application. Patients can input information about their conditions, access their medical profiles and chat with providers 24/7. The app also supports providers by offering diagnostic and treatment suggestions based on Curai’s deep learning algorithms.

Curai’s main areas of AI focus have been natural language processing (for extracting data from medical conversations), medical reasoning (for providing diagnosis and treatment recommendations), and image processing and classification (largely for images uploaded by patients).

Virtual care tools like Curai’s can be used for preventative or convenient care at any time, or after a patient’s doctor visit to ensure they’re responding well to treatment.

Medical Research Using Smart Hospital Data 

The usefulness of smart hospital data doesn’t end when a patient is discharged — it can inform years of research, becoming part of an institution’s database that helps improve operational efficiency, preventative care, drug discovery and more. With collaborative tools like federated learning, the benefits can go beyond a single medical institution and improve research across the healthcare field globally.

Neurosurgical Atlas, the largest association of neurosurgeons in the world, aims to advance the care of patients suffering from neurosurgical disorders through new, efficient surgical techniques. The Atlas includes a library of surgery recordings and simulations that give neurosurgeons unprecedented understanding of potential pitfalls before conducting an operation, creating a new standard for technical excellence. In the future, Neurosurgical Atlas plans to enable digital twin representations specific to individual patients.

The University of Florida’s academic health center, UF Health, has used digital health records representing more than 50 million interactions with 2 million patients to train GatorTron, a model that can help identify patients for lifesaving clinical trials, predict and alert health teams about life-threatening conditions, and provide clinical decision support to doctors.

The electronic medical records were also used to develop SynGatorTron, a language model that can generate synthetic health records to help augment small datasets — or enable AI model sharing while preserving the privacy of real patient data.

In Texas, MD Anderson is harnessing hospital records for population data analysis. Using the NVIDIA NeMo toolkit for natural language processing, the researchers developed a conversational AI platform that performs genomic analysis with cancer omics data — including survival analysis, mutation analysis and sequencing data processing.

Learn more about smart hospital technology and subscribe to NVIDIA healthcare news.

Creators and Artists Take the Spotlight This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

In the NVIDIA Studio artists have sparked the imagination of countless creators, inspiring them to exceed their creative ambitions and do their best work.

We’re showcasing the work of these artists — who specialize in 3D modeling, AI, video editing and broadcasting — this week, as well as how the new GeForce RTX 40 Series line of GPUs makes the creative process easier and more efficient.

These powerful graphics cards are backed by NVIDIA Studio — an ecosystem of creative app optimizations, dedicated NVIDIA Studio Drivers and NVIDIA AI-powered apps. Check out the latest GeForce RTX 40 Series GPUs and NVIDIA Studio laptops for the best performance in content creation, gaming and more.

In addition, the community around NVIDIA Omniverse, a 3D design collaboration and simulation platform that enables artists to connect their favorite 3D tools for more seamless workflows, is partnering with NVIDIA Studio on the #WinterArtChallenge. Join the Omniverse team live on Twitch as they create a scene and answer questions on Wednesday, Nov. 30, at 11 a.m. PT. Add the event to your calendar.

Finally, just in time for the holiday season, check out our latest NVIDIA Studio Standout featuring whimsical, realistic, food-inspired artwork and the artists behind it. We dare you not to get hungry.

GeForce RTX 4080 GPU Delivers Impressive Performance

Members of the press and content creators have been putting the new GeForce RTX 4080 GPU through a wide variety of creative workflows. Here’s a sampling of their reviews:

The new GeForce RTX 4080 GPU.

“The addition of AV1 encoding means that any 40-series GPU—and I mean any of them—is going to make your PC substantially faster at this kind of rendering compared to any of the other GPUs we’ve tested here.” Linus Tech Tips

“If you are using a non-RTX GPU, you are missing out on a massive suite of applications and support to give you limitless possibilities as a streamer, YouTuber, podcaster, artist, animator and more.” CG Magazine

“For 3D animators, there’s nothing better than a GeForce RTX 4080 in combo with NVIDIA STUDIO drivers and future DLSS 3 support for Twinmotion, V-Ray, Unity, Cinema 4D, Arnold, Adobe Designer, 3D Painter and 3D Sampler.” Tuttotech.net

“As far as I’m concerned this thing is a no-brainer for anyone who does graphic intensive work, works in video production, or does high end streaming.” Jay Lippman

“Overall, the RTX 4080 16GB Founders Edition Graphics Card is an excellent choice for Content Creators and CG Artists who have been desperately looking for an upgrade over the past 2-3 years! For 3D GPU Rendering Workloads, in particular, we’re happy to finally see a GPU that deserves a recommendation.” CG Director

“As far as the 4080 goes for creative individuals, I’ve got no doubt that if you’re rendering 3D models or 4K video, you’re going to have a fantastic time with this GPU. There’s also now dual AV1 video encoders on board which means that you can stream at higher resolutions with the likes of Discord.” Press Start

Pick up the GeForce RTX 4080 GPU or a prebuilt system today using our Product Finder.

Character Creator Pablo Muñoz Gómez

Concept artist Pablo Muñoz Gómez is equally passionate about helping digital artists — teaching 3D classes and running the ZBrush Guides website — as he is about his own creative specialties: concept and character artistry.

Linework refinement from 2D to 3D in ZBrush.

HARVESTERS is a demo concept Gómez created to illustrate a complete ZBrush workflow for his students. He upgraded his render linework with color palette blocking and refinement, and finished with a Z-depth pass to create a depth-of-field effect.

Final shading in ‘HARVESTERS.’

Gómez also excels in photorealistic 3D character modeling, as evidenced in his piece Tadpole.

Gómez often uses Adobe Substance 3D Painter to apply colors and materials directly to his 3D models. NVIDIA Iray technology in the viewport enables Gómez to edit in real time and use ray-traced baking for faster rendering speeds — all accelerated by his hardware. Artists can expect even faster asset baking with GeForce RTX 40 Series GPUs.

 

For further customization, Gómez prefers to download assets from the vast Substance 3D Asset library and import into Substance 3D Sampler, adjusting a few sliders to create photorealistic materials. RTX-exclusive interactive ray tracing lets Gómez apply realistic effects in real time. Powered by GeForce RTX 40 Series GPUs, these tasks can be completed even faster than with the previous generation.

Smooth movement in the Adobe Substance 3D Stager viewport, thanks to RTX GPU acceleration.

With GeForce RTX 40 Series GPUs, 3D artists like Gómez can now build scenes in fully ray-traced environments with accurate physics and realistic materials — all in real time, without proxies, in the NVIDIA Omniverse beta.

DLSS 3 technology uses the AI-powered RTX Tensor Cores and a new Optical Flow Accelerator to generate additional frames and dramatically increase frames per second (FPS). This improves smoothness and speeds up movement in the viewport. NVIDIA is also working with popular 3D apps Unity and Unreal Engine to integrate DLSS 3.

Gómez is the founder of ZBrush Guides and the 3D Concept Artist academy. View his courses, tutorials, projects and more on his website.

Karen X. Cheng Has an AI on the Future

Karen X. Cheng is an award-winning director on the forefront of using AI to design amazing visuals. Her innovative work produces eye-catching effects in social media videos for brands like Adobe, Beats by Dre and Instagram. Her videos have garnered over 500 million views.

Cheng was quick to embrace the AI-powered NVIDIA Canvas app — a free download available to anyone with a GeForce RTX GPU. With it, she easily created and shared photorealistic imagery. NVIDIA Canvas is powered by the GauGAN2 AI model and accelerated by Tensor Cores found exclusively on RTX GPUs.

Use AI to turn simple brushstrokes into realistic landscape images with NVIDIA Canvas.

The app uses AI to interpret basic lines and shapes, translating them into realistic landscape images and textures. Artists of all skill levels can use this advanced AI to quickly turn simple brushstrokes into realistic images, speeding up concept exploration and allowing for increased iteration. This frees up valuable time to visualize ideas.

Lately, Cheng’s focus has been on Instant NeRF technology, which uses AI models to transform 2D images into high-resolution 3D scenes nearly instantly.

She and her collaborators have been experimenting with it to bring 2D scenes to life in 3D, and the result was an extraordinary mirror NeRF complete with clouds and stunning camera movement.

Cheng and team also created a sidewalk NeRF that garnered over 1 million views on Instagram.

 

NeRFs are computationally intensive algorithms for reconstructing complex scenes. The new line of GeForce RTX 40 Series GPUs is a creator’s best bet to navigate these workflows and finalize artwork as quickly as possible.

Check out Cheng’s incredible collection of art on Instagram.

Lights, Camera, Action, WATCHHOLLIE

Compassionate, colorful, caps-lock incarnate — that’s WATCHHOLLIE. Trained as a video editor, WATCHHOLLIE experimented with a YouTube channel before discovering Twitch as a way to get back into gaming.

Her streams promote mental health awareness and inclusivity, establishing a safe place for members of the LGBTQ+ community like herself. She gives back to the creative community as a founder of WatchUs, a diversity-focused team that teaches aspiring creators how to grow their business, develop brand partnerships and improve their streaming setup.

WATCHHOLLIE and her fellow livestreamers can pick up GeForce RTX 40 Series GPUs featuring the eighth-generation NVIDIA video encoder (NVENC), which offers a 40% increase in encoding efficiency with AV1, unlocking higher resolution and crisper image quality. OBS Studio and Discord have enabled AV1 for 1440p and 4K resolution at 60 FPS.

In addition, GeForce RTX 40 Series GPUs feature dual encoders that allow creators to capture up to 8K60. When it’s time to cut a video on demand of livestreams, the dual encoders work in tandem to divide the work automatically, slashing export times nearly in half.
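The way the dual encoders divide an export can be modeled as splitting the timeline in half and encoding the halves in parallel. The sketch below is a plain-Python stand-in, not the NVENC API; the `encode` function is a placeholder for the hardware encoder:

```python
from concurrent.futures import ThreadPoolExecutor

def encode(frames):
    """Stand-in for a hardware encoder processing a run of frames."""
    return [f"enc({f})" for f in frames]

def dual_encode(frames):
    """Split the timeline in two and encode both halves in parallel,
    mimicking how two encoders can share one export."""
    mid = len(frames) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        first, second = pool.map(encode, [frames[:mid], frames[mid:]])
    return first + second

frames = list(range(6))
assert dual_encode(frames) == encode(frames)  # same result, work split in two
```

Because each half is independent, the two encoders run concurrently, which is why export times drop by nearly half rather than by some smaller factor.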

Blackmagic Design’s DaVinci Resolve, the popular Voukoder plug-in for Adobe Premiere Pro (WATCHHOLLIE’s preferred software) and Jianying — the top video editing app in China — have all enabled the dual encoders through encode presets to export final files, fast.

Gaming livestreamers using GeForce RTX 40 Series GPUs will experience an unprecedented gen-to-gen frame-rate boost in PC games alongside NVIDIA DLSS 3 technology, which accelerates performance by up to 4x.

Follow and subscribe to WATCHHOLLIE’s social media channels.

Join the #WinterArtChallenge

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Check out @Prayag_13’s winter scene full of whimsical holiday details:

Be sure to tag #WinterArtChallenge to join. Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.
