Collaboration among researchers, like the scientific community itself, spans the globe.
Universities and enterprises sharing work over long distances require a common language and secure pipeline to get every device — from microscopes and sensors to servers and campus networks — to see and understand the data each is transmitting. The increasing amount of data that needs to be stored, transmitted and analyzed only compounds the challenge.
To overcome this problem, NVIDIA has introduced a high performance computing platform that combines edge computing and AI to capture and consolidate streaming data from scientific edge instruments, and then allow the devices to talk to each other over long distances.
The platform consists of three major components. NVIDIA Holoscan is a software development kit that data scientists and domain experts can use to build GPU-accelerated pipelines for sensors that stream data. MetroX-3 is a new long-haul system that extends the connectivity of the NVIDIA Quantum-2 InfiniBand platform. And NVIDIA BlueField-3 DPUs provide secure and intelligent data migration.
Researchers can use the new NVIDIA platform for HPC edge computing to securely communicate and collaborate on solving problems and bring their disparate devices and algorithms together to operate as one large supercomputer.
Holoscan for HPC at the Edge
Accelerated by GPU computing platforms — including NVIDIA IGX, HGX and DGX systems — NVIDIA Holoscan delivers the extreme performance required to process massive streams of data generated by the world’s scientific instruments.
NVIDIA Holoscan for HPC includes new APIs for C++ and Python that HPC researchers can use to build sensor data processing workflows that are flexible enough for non-image formats and scalable enough to translate raw data into real-time insights.
Holoscan also manages memory allocation to ensure zero-copy data exchanges, so developers can focus on the workflow logic and not worry about managing file and memory I/O.
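To make the idea concrete, here is a minimal sketch of what such a streaming sensor pipeline can look like in Python. The `Pipeline` class and stage functions below are illustrative placeholders, not the actual Holoscan API; they simply show stages handing buffers directly to one another, the way a zero-copy pipeline would.

```python
# Illustrative sketch of a streaming sensor pipeline (placeholder classes,
# not the actual Holoscan API). Each stage hands its output array directly
# to the next stage, mimicking the zero-copy exchange described above.
from dataclasses import dataclass, field
from typing import Callable, List

import numpy as np


@dataclass
class Pipeline:
    """Chains processing stages over a stream of sensor frames."""
    stages: List[Callable[[np.ndarray], np.ndarray]] = field(default_factory=list)

    def add_stage(self, fn: Callable[[np.ndarray], np.ndarray]) -> "Pipeline":
        self.stages.append(fn)
        return self

    def run(self, frames) -> None:
        for frame in frames:
            for stage in self.stages:
                frame = stage(frame)  # output of one stage feeds the next


def remove_background(frame: np.ndarray) -> np.ndarray:
    return frame - frame.mean()        # toy preprocessing step


def detect_peaks(frame: np.ndarray) -> np.ndarray:
    print("peak value:", float(frame.max()))
    return frame


if __name__ == "__main__":
    # Simulated detector frames standing in for a live instrument feed.
    stream = (np.random.rand(512, 512).astype(np.float32) for _ in range(3))
    Pipeline().add_stage(remove_background).add_stage(detect_peaks).run(stream)
```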
The new features in Holoscan will be available to all HPC developers next month. Sign up to be notified of early access to the Holoscan 0.4 SDK.
MetroX-3 Goes the Distance
The NVIDIA MetroX-3 long-haul system, available next month, extends the latest cloud-native capabilities of the NVIDIA Quantum-2 InfiniBand platform from the edge to the HPC data center core. It enables GPUs between sites to securely share data over the InfiniBand network up to 25 miles (40km) away.
Taking advantage of native remote direct memory access (RDMA), users can easily migrate data and compute jobs from one InfiniBand-connected mini-cluster to the main data center, or combine geographically dispersed compute clusters for higher overall performance and scalability.
Data center operators can efficiently provision, monitor and operate across all the InfiniBand-connected data center networks by using the NVIDIA Unified Fabric Manager to manage their MetroX-3 systems.
BlueField for Secure, Efficient HPC
NVIDIA BlueField data processing units offload, accelerate and isolate advanced networking, storage and security services to boost performance and efficiency for modern HPC.
During SC22, system software company Zettar is demonstrating its data migration and storage offload solution based on BlueField-3. Zettar software can consolidate data migration tasks to a data center footprint of 4U rack space, which today requires 13U with x86-based solutions.
The technologies powering the world’s 7 million data centers are changing rapidly. The latest have allowed IT organizations to reduce costs even while dealing with exponential data growth.
Simulation and digital twins can help data center designers, builders and operators create highly efficient and performant facilities. But building a digital twin that can accurately represent all components of an AI supercomputing facility is a massive, complex undertaking.
The NVIDIA Omniverse simulation platform helps address this challenge by streamlining the process for collaborative virtual design. An Omniverse demo at SC22 showcased how the people behind data centers can use this open development platform to enhance the design and development of complex supercomputing facilities.
Omniverse, for the first time, lets data center operators aggregate real-time data inputs from their core third-party computer-aided design, simulation and monitoring applications so they can see and work with their complete datasets in real time.
The demo shows how Omniverse allows users to tap into the power of accelerated computing, simulation and operational digital twins connected to real-time monitoring and AI. This enables teams to streamline facility design, accelerate construction and deployment, and optimize ongoing operations.
The demo also highlighted NVIDIA Air, a data center simulation platform designed to work in conjunction with Omniverse to simulate the network — the central nervous system of the data center. With NVIDIA Air, teams can model the entire network stack, allowing them to automate and validate network hardware and software prior to bring-up.
Creating Digital Twins to Elevate Design and Simulation
In planning and constructing one of NVIDIA’s latest AI supercomputers, multiple engineering CAD datasets were collected from third-party industry tools such as Autodesk Revit, PTC Creo and Trimble SketchUp. This allowed designers and engineers to view the Universal Scene Description-based model in full fidelity and collaboratively iterate on the design in real time.
PATCH MANAGER is an enterprise software application for planning cabling, assets and physical layer point-to-point connectivity in network domains. With PATCH MANAGER connected to Omniverse, the complex topology of port-to-port connections, rack and node layouts, and cabling can be integrated directly into the live model. This enables data center engineers to see the full view of the model and its dependencies.
To predict airflow and heat transfer, engineers used Cadence 6SigmaDCX, software for computational fluid dynamics. Engineers can also use AI surrogates trained with NVIDIA Modulus for “what-if” analysis in near real time, letting teams simulate changes to complex thermals and cooling and see the results almost instantly.
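As a rough illustration of that kind of “what-if” loop, the sketch below sweeps a hypothetical cooling setpoint through a stand-in surrogate model. The `surrogate_peak_temp` function is a dummy placeholder, not a trained Modulus surrogate; a real model would be trained on CFD results from the digital twin.

```python
# Toy "what-if" sweep against a stand-in thermal surrogate.
# surrogate_peak_temp() is a hypothetical placeholder for a trained
# physics-ML model that maps cooling setpoints and IT load to a predicted
# peak rack temperature; a real surrogate would be fit to CFD results.
import numpy as np


def surrogate_peak_temp(setpoint_c: float, it_load_kw: float) -> float:
    # Dummy response surface: warmer supply air and higher load raise peak temp.
    return 25.0 + 0.8 * setpoint_c + 0.02 * it_load_kw


if __name__ == "__main__":
    load_kw = 900.0
    for setpoint in np.arange(16.0, 28.0, 2.0):
        peak = surrogate_peak_temp(float(setpoint), load_kw)
        flag = "OK" if peak < 45.0 else "over limit"
        print(f"supply air {setpoint:4.1f} C -> predicted peak {peak:5.1f} C ({flag})")
```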
And with NVIDIA Air, the exact network topology — including protocols, monitoring and automation — can be simulated and prevalidated.
Once construction of a data center is complete, its sensors, control system and telemetry can be connected to the digital twin inside Omniverse, enabling real-time monitoring of operations.
With a perfectly synchronized digital twin, engineers can simulate common dangers such as power peaking or cooling system failures. Operators can benefit from AI-recommended changes that optimize for key priorities like boosting energy efficiency and reducing carbon footprint. The digital twin also allows them to test and validate software and component upgrades before deploying to the physical data center.
Whether focused on tiny atoms or the immensity of outer space, supercomputing workloads benefit from the flexibility that the largest systems provide scientists and researchers.
To meet the needs of organizations with such large AI and high performance computing (HPC) workloads, Dell Technologies today unveiled the Dell PowerEdge XE9680 system — its first system with eight NVIDIA GPUs interconnected with NVIDIA NVLink — at SC22, an international supercomputing conference running through Friday.
The Dell PowerEdge XE9680 system is built on the NVIDIA HGX H100 architecture and packs eight NVIDIA H100 Tensor Core GPUs to serve the growing demand for large-scale AI and HPC workflows.
These include large language models for communications, chemistry and biology, as well as simulation and research in industries spanning aerospace, agriculture, climate, energy and manufacturing.
The XE9680 system is arriving alongside other new Dell servers announced today with NVIDIA Hopper architecture GPUs, including the Dell PowerEdge XE8640.
“Organizations working on advanced research and development need both speed and efficiency to accelerate discovery,” said Ian Buck, vice president of Hyperscale and High Performance Computing, NVIDIA. “Whether researchers are building more efficient rockets or investigating the behavior of molecules, Dell Technologies’ new PowerEdge systems provide the compute power and efficiency needed for massive AI and HPC workloads.”
“Dell Technologies and NVIDIA have been working together to serve customers for decades,” said Rajesh Pohani, vice president of portfolio and product management for PowerEdge, HPC and Core Compute at Dell Technologies. “As enterprise needs have grown, the forthcoming Dell PowerEdge servers with NVIDIA Hopper Tensor Core GPUs provide leaps in performance, scalability and security to accelerate the largest workloads.”
NVIDIA H100 to Turbocharge Dell Customer Data Centers
Fresh off setting world records in the MLPerf AI training benchmarks earlier this month, NVIDIA H100 is the world’s most advanced GPU. It’s packed with 80 billion transistors and features major advances to accelerate AI, HPC, memory bandwidth and interconnects at data center scale.
H100 is the engine of AI factories that organizations use to process and refine large datasets to produce intelligence and accelerate their AI-driven businesses. It features a dedicated Transformer Engine and fourth generation NVIDIA NVLink interconnect to accelerate exascale workloads.
Each system built on the NVIDIA HGX H100 platform features four or eight Hopper GPUs to deliver the highest AI performance with 3.5x more energy efficiency compared with the prior generation, saving development costs while accelerating discoveries.
Powerful Performance and Customer Options for AI, HPC Workloads
Dell systems power the work of leading organizations, and the forthcoming Hopper-based systems will broaden Dell’s portfolio of solutions for its customers around the world.
With its enhanced, air-cooled design and support for eight NVIDIA H100 GPUs with built-in NVLink connectivity, the PowerEdge XE9680 is purpose-built for optimal performance, helping customers modernize operations and infrastructure to drive their AI initiatives.
The PowerEdge XE8640, Dell’s new HGX H100 system with four Hopper GPUs, enables businesses to develop, train and deploy AI and machine learning models. A 4U rack system, the XE8640 delivers faster AI training performance and increased core capabilities with up to four PCIe Gen5 slots, NVIDIA Multi-Instance GPU (MIG) technology and NVIDIA GPUDirect Storage support.
Availability
The Dell PowerEdge XE9680 and XE8640 will be available from Dell starting in the first half of 2023.
Customers can now try NVIDIA H100 GPUs on Dell PowerEdge servers on NVIDIA LaunchPad, which provides free hands-on experiences and gives companies access to the latest hardware and NVIDIA AI software.
To take a first look at Dell’s new servers with NVIDIA H100 GPUs at SC22, visit Dell in booth 2443.
The holiday season is approaching, and GeForce NOW has everyone covered. This GFN Thursday brings an easy way to give the gift of gaming with GeForce NOW gift cards, for yourself or for a gamer in your life.
Plus, stream 10 new games from the cloud this week, including the first story downloadable content (DLC) for Dying Light 2.
No Time Like the Present
For those seeking the best present to give any gamer, look no further than a GeForce NOW membership.
With digital gift cards, NVIDIA makes it easy for anyone to give an upgrade to GeForce PC performance in the cloud at any time of the year. And just in time for the holidays, physical gift cards will be available as well. For a limited time, these new $50 physical gift cards will ship with a special GeForce NOW holiday gift box at no additional cost, perfect to put in someone’s stocking.
Powerful PC gaming, perfectly packaged.
These new gift cards can be redeemed for the membership level of preference, whether for three months of an RTX 3080 membership or six months of a Priority membership. Both let PC gamers stream over 1,400 games from popular digital gaming stores like Steam, Epic Games Store, Ubisoft Connect, Origin and GOG.com, all from GeForce-powered PCs in the cloud.
That means high-performance streaming on nearly any device, including PCs, Macs, Android mobile devices, iOS devices, SHIELD TV and Samsung and LG TVs. GeForce NOW is the only way to play Genshin Impact, one of the 100 free-to-play games in the GeForce NOW library, on Macs.
Stream across nearly any device.
RTX 3080 members get extra gaming goodness with dedicated access to the highest-performance servers, eight-hour gaming sessions and the ability to stream up to 4K at 60 frames per second or 1440p at 120 FPS, all at ultra-low latency.
Gift cards can be redeemed with an active GFN membership. Gift one to yourself or a buddy for hours of fun cloud gaming.
Dying Light 2’s “Bloody Ties” DLC is available now, and GeForce NOW members can stream it today.
Become a parkour champion to survive in this survival horror game.
Embark on a new story adventure and gain access to “The Carnage Hall” — an old opera building full of challenges and quests — including surprising new weapon types, character interactions and more discoveries to uncover.
Priority and RTX 3080 members can explore Villedor with NVIDIA DLSS and RTX ON for cinematic, real-time ray tracing — all while keeping an eye on their meter to avoid becoming infected themselves.
Put a Bow on It
Be a fearsome Necromancer in the dark world of The Unliving.
There’s always a new adventure streaming from the cloud. Here are the 10 titles joining the GeForce NOW library this week:
Anyone who’s taken a photo with a digital camera is likely familiar with a “noisy” image: discolored spots that make the photo lose clarity and sharpness.
Many photographers have tips and tricks to reduce noise in images, including fixing the settings on the camera lens or taking photos in different lighting. But it isn’t just photographs that can look discolored — noise is common in computer graphics, too.
Noise refers to the random variations of brightness and color that aren’t part of the original image. Removing noise from imagery — which is becoming more common in the field of image processing and computer vision — is known as denoising.
Image denoising uses advanced algorithms to remove noise from graphics and renders, making a huge difference to the quality of images. Photorealistic visuals and immersive renders would not be possible without denoising technology.
What Is Denoising?
In computer graphics, images can be made up of both useful information and noise. The latter reduces clarity. The ideal end product of denoising would be a crisp image that only preserves the useful information. When denoising an image, it’s also important to keep visual details and components such as edges, corners, textures and other sharp structures.
To reduce noise without affecting the visual details, three types of signals in an image must be targeted by denoising:
Diffuse — scattered lighting reflected in all directions;
Specular or reflections — lighting reflected in a particular direction; and
Infinite light-source shadows — sunlight, shadows and any other visible light source.
To create the clearest image, a user must cast thousands of rays in directions following the diffuse and specular signals. Often in real-time ray tracing, however, only one ray per pixel, or even fewer, is used.
Denoising is necessary in real-time ray tracing because ray counts must stay relatively low to maintain interactive performance.
Noisy image with one ray per pixel.
How Does Denoising Work?
Image denoising is commonly based on three techniques: spatial filtering, temporal accumulation, and machine learning and deep learning reconstruction.
Example of a spatially and temporally denoised final image.
Spatial filtering selectively alters parts of an image by reusing similar neighboring pixels. The advantage of spatial filtering is that it doesn’t produce temporal lag, which is the inability to respond immediately to changing conditions in the scene. However, spatial filtering introduces blurriness and muddiness, as well as temporal instability, which refers to flickering and visual imperfections in the image.
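As a minimal sketch of the idea (illustrative only, not NRD’s actual filter), the snippet below averages each pixel with its neighbors while down-weighting neighbors whose values differ sharply, which limits blurring across edges:

```python
# Minimal edge-aware spatial filter sketch (illustrative, not NRD's filter).
# Each pixel is blended with its neighbors; neighbors whose values differ a
# lot get small weights, which limits blurring across edges.
import numpy as np


def spatial_filter(img: np.ndarray, radius: int = 2, sigma: float = 0.1) -> np.ndarray:
    """Edge-aware neighborhood average (range weights only, for brevity)."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    accum = np.zeros_like(img)
    wsum = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = padded[radius + dy : radius + dy + h, radius + dx : radius + dx + w]
            weight = np.exp(-((nb - img) ** 2) / (2.0 * sigma**2))
            accum += weight * nb
            wsum += weight
    return accum / wsum


if __name__ == "__main__":
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # smooth ramp image
    noisy = clean + 0.05 * np.random.randn(64, 64)         # add synthetic noise
    print("noise std before:", round(float((noisy - clean).std()), 4))
    print("noise std after: ", round(float((spatial_filter(noisy) - clean).std()), 4))
```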
Temporal accumulation reuses data from the previous frame to determine if there are any artifacts — or visual anomalies — in the current frame that can be corrected. Although temporal accumulation introduces temporal lag, it doesn’t produce blurriness. Instead, it adds temporal stability to reduce flickering and artifacts over multiple frames.
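A bare-bones form of temporal accumulation is an exponential moving average over frames, as sketched below. This is purely illustrative; production denoisers also reproject the history buffer using motion vectors so that moving objects don’t smear.

```python
# Bare-bones temporal accumulation: blend each new noisy frame into a running
# history. Real denoisers also reproject the history with motion vectors;
# that step is omitted here for brevity.
import numpy as np


def accumulate(frames, alpha: float = 0.2):
    """Yield the temporally accumulated image after each frame."""
    history = None
    for frame in frames:
        history = frame if history is None else alpha * frame + (1.0 - alpha) * history
        yield history


if __name__ == "__main__":
    clean = np.full((64, 64), 0.5, dtype=np.float32)              # static scene
    noisy_frames = (clean + 0.1 * np.random.randn(64, 64) for _ in range(20))
    for i, acc in enumerate(accumulate(noisy_frames), start=1):
        if i in (1, 5, 20):
            print(f"frame {i:2d}: residual noise std = {float((acc - clean).std()):.4f}")
```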
Example of temporal accumulation at 20 frames.
Machine learning and deep learning reconstruction uses a neural network to reconstruct the signal. The neural network is trained using various noisy and reference signals. Though the reconstructed signal for a single frame can look complete, it can become temporally unstable over time, so a form of temporal stabilization is needed.
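The sketch below shows the general shape of such a learned reconstruction step: a tiny convolutional network trained on pairs of noisy and reference images. It is purely illustrative and far smaller than any production denoising network; real training data would come from rendered noisy/reference pairs rather than random tensors.

```python
# Tiny learned denoiser sketch (illustrative only; production networks are far
# larger and are trained on rendered noisy/reference image pairs).
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    reference = torch.rand(8, 1, 32, 32)                  # stand-in clean images
    noisy = reference + 0.1 * torch.randn_like(reference)
    for step in range(50):                                # brief training loop
        loss = nn.functional.mse_loss(model(noisy), reference)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final training loss:", float(loss))
```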
Denoising in Images
Denoising provides users with immediate visual feedback, so they can see and interact with graphics and designs. This allows them to experiment with variables like light, materials, viewing angle and shadows.
Solutions like NVIDIA Real-Time Denoisers (NRD) make denoising techniques more accessible for developers to integrate into their pipelines. NRD is a spatio-temporal denoising library that’s agnostic to application programming interfaces and designed to work with low per-pixel ray counts.
NRD uses input signals and environmental conditions to deliver results comparable to ground-truth images. See NRD in action below:
With NRD, developers can achieve real-time results using a limited budget of rays per pixel. In the video above, viewers can see the heavy lifting that NRD does in real time to resolve image noise.
Popular games such as Dying Light 2 and Hitman III use NRD for denoising.
NRD highlighted in Techland’s Dying Light 2 Stay Human.
NRD supports the denoising of diffuse, specular or reflections, and shadow signals. The denoisers included in NRD are:
ReBLUR — based on the idea of self-stabilizing, recurrent blurring. It’s designed to work with diffuse and specular signals generated with low ray budgets.
SIGMA — a fast shadow denoiser. It supports shadows from any type of light source, like the sun and local lights.
ReLAX — preserves lighting details produced by NVIDIA RTX Direct Illumination, a framework that enables developers to render scenes with millions of dynamic area lights in real time. ReLAX also yields better temporal stability and remains responsive to changing lighting conditions.
Just like many businesses, the world of industrial scientific computing has a data problem.
Solving seemingly intractable challenges — from developing new energy sources and creating new modes of transportation, to addressing mission-critical issues such as driving operational efficiencies and improving customer support — requires massive amounts of high performance computing.
Instead of having to architect, engineer and build ever-more supercomputers, companies such as Electrolux, Denso, Samsung and Virgin Orbit are embracing the benefits of Rescale’s cloud platform, which makes it possible to scale their accelerated computing in an energy-efficient way and speed their innovation.
Addressing the industrial scientific community’s rising demand for AI in the cloud, NVIDIA founder and CEO Jensen Huang joined Rescale founder and CEO Joris Poort at the Rescale Big Compute virtual conference, where they announced that Rescale is adopting the NVIDIA AI software portfolio.
NVIDIA AI will bring new capabilities to Rescale’s HPC-as-a-service offerings, which include simulation and engineering software used by hundreds of customers across industries. NVIDIA is also accelerating the Rescale Compute Recommendation Engine announced today, which enables customers to identify the right infrastructure options to optimize cost and speed objectives.
“Fusing principled and data-driven methods, physics-ML AI models let us explore our design space at speeds and scales many orders of magnitude greater than ever before,” Huang said. “Rescale is at the intersection of these major trends. NVIDIA’s accelerated and AI computing platform perfectly complements Rescale to advance industrial scientific computing.”
“Engineers and scientists working on breakthrough innovations need integrated cloud platforms that put R&D software and accelerated computing at their fingertips,” said Poort. “We’ve helped customers speed discoveries and save costs with NVIDIA-accelerated HPC, and adding NVIDIA AI Enterprise to the Rescale platform will bring together the most advanced computing capabilities with the best of AI, and support an even broader range of AI-powered workflows R&D leaders can run on any cloud of their choice.”
Expanding HPC to New Horizons in the Cloud With NVIDIA AI
The companies announced that they are working to bring NVIDIA AI Enterprise to Rescale, broadening the cloud platform’s offerings to include NVIDIA-supported AI workflows and processing engines. Once it’s available, customers will be able to develop AI applications in any leading cloud, with support from NVIDIA.
NVIDIA AI Enterprise, the globally adopted software of the NVIDIA AI platform, includes essential processing engines for each step of the AI workflow, from data processing and AI model training to simulation and large-scale deployment.
NVIDIA AI enables organizations to develop predictive models to complement and expand industrial HPC research and development with applications such as computer vision, route and supply chain optimization, robotics simulations and more.
The Rescale software catalog provides access to hundreds of NVIDIA-accelerated containerized applications and pretrained AI models on NVIDIA NGC, and allows customers to run simulations on demand and scale up or down as needed.
NVIDIA Modulus to Speed Physics-Based Machine Learning
Rescale now offers the NVIDIA Modulus framework for developing physics-based machine learning neural network models that support a broad range of engineering use cases.
Modulus blends the power of physics with data to build high-fidelity models that enable near-real-time simulations. With just a few clicks on the Rescale platform, Modulus will allow customers to run their entire AI-driven simulation workflow, from data pre-processing and model training to inference and model deployment.
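As a rough illustration of what blending physics with data means in practice (not the Modulus API itself), the sketch below trains a small network whose loss combines a data-fit term with the residual of a simple differential equation, the basic idea behind physics-informed models:

```python
# Minimal physics-informed training sketch (illustrative; not the Modulus API).
# The loss combines a data-fit term with the residual of a simple ODE,
# du/dx = -u with u(0) = 1, whose exact solution is exp(-x).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_data = torch.tensor([[0.0]])                 # boundary condition u(0) = 1
u_data = torch.tensor([[1.0]])

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)  # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()   # ODE residual
    data_loss = ((net(x_data) - u_data) ** 2).mean()
    loss = physics_loss + data_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("u(1) predicted:", float(net(torch.tensor([[1.0]]))))
print("u(1) exact:    ", float(torch.exp(torch.tensor(-1.0))))
```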
On-Prem to Cloud Workflow Orchestration Expands Flexibility
Rescale is additionally integrating NVIDIA Base Command Platform, AI developer workflow management software that can orchestrate workloads across clouds and to on-premises NVIDIA DGX systems.
Rescale’s HPC-as-a-service platform is accelerated by NVIDIA on leading cloud service provider platforms, including Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure. Rescale is a member of the NVIDIA Inception program.
Two months after their debut sweeping MLPerf inference benchmarks, NVIDIA H100 Tensor Core GPUs set world records across enterprise AI workloads in the industry group’s latest tests of AI training.
Together, the results show H100 is the best choice for users who demand utmost performance when creating and deploying advanced AI models.
MLPerf is the industry standard for measuring AI performance. It’s backed by a broad group that includes Amazon, Arm, Baidu, Google, Harvard University, Intel, Meta, Microsoft, Stanford University and the University of Toronto.
NVIDIA H100 GPUs were up to 6.7x faster than A100 GPUs when they were first submitted for MLPerf Training.
H100 GPUs (aka Hopper) raised the bar in per-accelerator performance in MLPerf Training, delivering up to 6.7x more performance than previous-generation GPUs when those were first submitted for MLPerf training. By the same comparison, today’s A100 GPUs pack 2.5x more muscle than at their debut, thanks to advances in software.
Due in part to its Transformer Engine, Hopper excelled in training the popular BERT model for natural language processing. It’s among the largest and most performance-hungry of the MLPerf AI models.
MLPerf gives users the confidence to make informed buying decisions because the benchmarks cover today’s most popular AI workloads — computer vision, natural language processing, recommendation systems, reinforcement learning and more. The tests are peer reviewed, so users can rely on their results.
A100 GPUs Hit New Peak in HPC
In the separate suite of MLPerf HPC benchmarks, A100 GPUs swept all tests of training AI models in demanding scientific workloads run on supercomputers. The results show the NVIDIA AI platform’s ability to scale to the world’s toughest technical challenges.
For example, A100 GPUs trained AI models in the CosmoFlow test for astrophysics 9x faster than the best results two years ago in the first round of MLPerf HPC. In that same workload, the A100 also delivered up to a whopping 66x more throughput per chip than an alternative offering.
The HPC benchmarks train models for work in astrophysics, weather forecasting and molecular dynamics. They are among many technical fields, like drug discovery, adopting AI to advance science.
In tests around the globe, A100 GPUs led in both speed and throughput of training.
Supercomputer centers in Asia, Europe and the U.S. participated in the latest round of the MLPerf HPC tests. In its debut on the DeepCAM benchmarks, Dell Technologies showed strong results using NVIDIA A100 GPUs.
An Unparalleled Ecosystem
In the enterprise AI training benchmarks, a total of 11 companies, including the Microsoft Azure cloud service, made submissions using NVIDIA A100, A30 and A40 GPUs. System makers including ASUS, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro used a total of nine NVIDIA-Certified Systems for their submissions.
In the latest round, at least three companies joined NVIDIA in submitting results on all eight MLPerf training workloads. That versatility is important because real-world applications often require a suite of diverse AI models.
NVIDIA partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI platforms and vendors.
Under the Hood
The NVIDIA AI platform provides a full stack from chips to systems, software and services. That enables continuous performance improvements over time.
For example, submissions in the latest HPC tests applied a suite of software optimizations and techniques described in a technical article. Together, they cut runtime on one benchmark by nearly 5x, from 101 minutes to just 22 minutes.
A second article describes how NVIDIA optimized its platform for the enterprise AI benchmarks. For example, we used NVIDIA DALI to efficiently load and pre-process data for a computer vision benchmark.
All the software used in the tests is available from the MLPerf repository, so anyone can get these world-class results. NVIDIA continuously folds these optimizations into containers available on NGC, a software hub for GPU applications.
Volvo Cars unveiled the Volvo EX90 SUV today in Stockholm, marking the beginning of a new era of electrification, technology and safety for the automaker. The flagship vehicle is redesigned from tip to tail — with a new powertrain, branding and software-defined AI compute — powered by the centralized NVIDIA DRIVE Orin platform.
The Volvo EX90 silhouette is in line with Volvo Cars’ design principle of form following function — and looks good at the same time.
Under the hood, it’s filled with cutting-edge technology for new advances in electrification, connectivity, core computing, safety and infotainment. The EX90 is the first Volvo car that is hardware-ready to deliver unsupervised autonomous driving.
These features come together to deliver an SUV that cements Volvo Cars in the next generation of software-defined vehicles.
“We used technology to reimagine the entire car,” said Volvo Cars CEO Jim Rowan. “The Volvo EX90 is the safest car that Volvo has ever produced.”
Computer on Wheels
The Volvo EX90 looks smart and has the brains to back it up.
Volvo Cars’ proprietary software runs on NVIDIA DRIVE Orin to operate most of the core functions inside the car, including safety, infotainment and battery management. This intelligent architecture is designed to deliver a highly responsive and enjoyable experience for every passenger in the car.
The DRIVE Orin system-on-a-chip delivers 254 trillion operations per second — ample compute headroom for a software-defined architecture. It’s designed to handle the large number of applications and deep neural networks needed to achieve systematic safety standards such as ISO 26262 ASIL-D.
The Volvo EX90 isn’t just a new car. It’s a highly advanced computer on wheels, designed to improve over time as Volvo Cars adds more software features.
Just Getting Started
The Volvo EX90 is just the beginning of Volvo Cars’ plans for the software-defined future.
The automaker plans to launch a new EV every year through 2025, with the end goal of having a purely electric, software-defined lineup by 2030.
The new flagship SUV is available for preorder in select markets, launching the next phase in Volvo Cars’ leadership in premium design and safety.
Call it the ultimate example of a job that’s sometimes best done remotely. Wildlife researchers say rhinos are magnificent beasts, but they like to be left alone, especially when they’re with their young.
In the latest example of how researchers are using new technologies to track animals less invasively, a team of researchers has proposed harnessing high-flying, AI-equipped drones to track the endangered black rhino through the wilds of Namibia.
While drones — and technology of just about every kind — have been harnessed to track African wildlife, the proposal promises to help gamekeepers move faster to protect rhinos and other megafauna from poachers.
AI Podcast host Noah Kravitz spoke to two of the authors of the paper.
Zoey Jewell is co-founder and president of WildTrack.org, a global network of biologists and conservationists dedicated to non-invasive wildlife monitoring techniques. And Alice Hua is a recent graduate of the School of Information at UC Berkeley in California, and an ML platform engineer at CrowdStrike.
It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.
Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.
Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.
Subscribe to the AI Podcast: Now Available on Amazon Music
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
The warm, friendly animation Mushroom Spirit is featured In the NVIDIA Studio this week, modeled by talented 3D illustrator Julie Greenberg, aka Juliestrator.
In addition, NVIDIA Omniverse, an open platform for virtual collaboration and real-time photorealistic simulation, just dropped a beta release for 3D artists.
And with the approaching winter season comes the next NVIDIA Studio community challenge. Join the #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on the NVIDIA Studio social media channels. Be sure to tag #WinterArtChallenge to enter.
Winter is coming and so is our next Studio Community Challenge!
Join our Nov-Dec #WinterArtChallenge by sharing winter art that you’ve made, like this great one from @pra5han for a chance to be featured on our channels.
With new support for GeForce RTX 40 Series GPUs, NVIDIA Omniverse is faster, more accessible and more flexible than ever for collaborative 3D workflows across apps.
An example of what’s possible when talented 3D artists collaborate in Omniverse: a scene from the ‘NVIDIA Racer RTX’ demo.
NVIDIA DLSS 3, powered by the GeForce RTX 40 Series, is now available in Omniverse, enabling complete real-time ray-tracing workflows within the platform. The NVIDIA Ada Lovelace GPU architecture delivers a generational leap in performance and power that enables users to work in large-scale, virtual worlds with true interactivity — so creators can navigate viewports at full fidelity in real time.
The Omniverse Create app has new large world-authoring and animation improvements.
In Omniverse Machinima, creators gain AI superpowers with Audio2Gesture — an AI-powered tool that creates lifelike body movements based on an audio file.
PhysX 5, the technology behind Omniverse’s hyperrealistic physics simulation, features built-in audio for collisions, as well as improved cloth and deformable body simulations. Newly available as open source software, PhysX 5 enables artists and developers to modify, build and distribute custom physics engines.
The Omniverse Connect library has received updates to Omniverse Connectors, including Autodesk 3ds Max, Autodesk Maya, Autodesk Revit, Epic Games Unreal Engine, McNeel Rhino, Trimble SketchUp and Graphisoft Archicad. Connectors for Autodesk Alias and PTC Creo are also now available.
The updated Reallusion iClone 8.1.0 live-sync Connector allows for seamless character interactions between iClone and Omniverse apps. And OTOY’s OctaneRender Hydra render delegate enables Omniverse users to access OctaneRender directly in Omniverse apps.
Juliestrator’s artistic inspiration comes from the examination of the different worlds that people create. “No matter if it’s the latest Netflix show or an artwork I see on Twitter, I love when a piece of art leaves space for my own imagination to fill in the gaps and come up with my own stories,” she said.
Mushroom Spirit was conceived as a sketch for last year’s Inktober challenge, which had the prompt “spirit.” Rather than creating a ghost like many others, Juliestrator took a different approach: Mushroom Spirit was born as a cute nature spirit lurking in a forest, like the Kodama creatures from the film Princess Mononoke, from which she drew inspiration.
Juliestrator gathered reference material using Pinterest. She then used PureRef’s overlay feature to help position reference imagery while modeling in Blender software. Though it’s rare for Juliestrator to sketch in 2D for 3D projects, she said Mushroom Spirit called for a more personal touch, so she generated a quick scribble in Procreate.
The origins of ‘Mushroom Spirit.’
Using Blender, she then entered the block-out phase — creating a rough-draft level built using simple 3D shapes, without details or polished art assets. This helped to keep base meshes clean, eliminating the need to create new meshes in the next round, which required only minor edits.
Getting the basic shapes down by blocking out ‘Mushroom Spirit’ in Blender.
At this point, many artists would typically start to model detailed scene elements, but Juliestrator prioritizes coloring. “I’ve noticed how much color influences the compositions and mood of the artwork, so I try to make this important decision as early as possible,” the artist said.
Color modifications in Adobe Substance 3D Painter.
She used Adobe Substance 3D Painter software to apply a myriad of colors and experimental textures to her models. On her NVIDIA Studio laptop, the Razer Blade 15 Studio equipped with an NVIDIA Quadro RTX 5000 GPU, Juliestrator used RTX-accelerated light and ambient occlusion to bake assets in mere seconds.
She then refined the existing models in Blender. “This is where powerful hardware helps a lot,” she said. “The NVIDIA OptiX AI-accelerated denoiser helps me preview any changes I make in Blender almost instantly, which lets me test more ideas at the same time and as a result get better finished renders.”
Tinkering and tweaking color palettes in Blender.
Though she enjoys the modeling stage, Juliestrator said that the desire to refine an endless number of details can be overwhelming. As such, she deploys an “80/20 rule,” dedicating no more than 20% of the entire project’s timeline to detailed modeling. “That’s the magic of the 80/20 rule: tackle the correct 20%, and the other 80% often falls into place,” she said.
Finally, Juliestrator adjusted the composition in 3D — manipulating the light objects, rotating the camera and adding animations. She completed all of this quickly with an assist from RTX-accelerated OptiX ray tracing in the Blender viewport, using Blender Cycles for the fastest frame renders.
Animations in Blender during the final stage.
Blender is Juliestrator’s preferred 3D modeling app, she said, due to its ease of use and powerful AI features, as well as its accessibility. “I truly appreciate the efforts of the Blender Foundation and all of its partners in keeping Blender free and available to people from all over the world, to enhance anyone’s creativity,” she said.
Juliestrator chose to use an NVIDIA Studio laptop, a “porta-bella” system for efficiency and convenience, she said. “I needed a powerful computer that would let me use both Blender and a game engine like Unity or Unreal Engine 5, while staying mobile and on the go,” the artist added.
For more direction and inspiration for building 3D worlds, check out Juliestrator’s five-part tutorial, Modeling 3D New York Diorama, which covers the critical stages in 3D workflows: sketching composition, modeling details and more. The tutorials can be found on the NVIDIA Studio YouTube channel, which posts new videos every week.
And don’t forget to enter the NVIDIA Studio #WinterArtChallenge on Instagram, Twitter or Facebook.