NVIDIA and Dell Technologies Deliver AI and HPC Performance in Leaps and Bounds With Hopper, at SC22

Whether focused on tiny atoms or the immensity of outer space, supercomputing workloads benefit from the flexibility that the largest systems provide scientists and researchers.

To meet the needs of organizations with such large AI and high performance computing (HPC) workloads, Dell Technologies today unveiled the Dell PowerEdge XE9680 system — its first system with eight NVIDIA GPUs interconnected with NVIDIA NVLink — at SC22, an international supercomputing conference running through Friday.

The Dell PowerEdge XE9680 system is built on the NVIDIA HGX H100 architecture and packs eight NVIDIA H100 Tensor Core GPUs to serve the growing demand for large-scale AI and HPC workflows.

These include large language models for communications, chemistry and biology, as well as simulation and research in industries spanning aerospace, agriculture, climate, energy and manufacturing.

The XE9680 system is arriving alongside other new Dell servers announced today with NVIDIA Hopper architecture GPUs, including the Dell PowerEdge XE8640.

“Organizations working on advanced research and development need both speed and efficiency to accelerate discovery,” said Ian Buck, vice president of Hyperscale and High Performance Computing, NVIDIA. “Whether researchers are building more efficient rockets or investigating the behavior of molecules, Dell Technologies’ new PowerEdge systems provide the compute power and efficiency needed for massive AI and HPC workloads.”

“Dell Technologies and NVIDIA have been working together to serve customers for decades,” said Rajesh Pohani, vice president of portfolio and product management for PowerEdge, HPC and Core Compute at Dell Technologies. “As enterprise needs have grown, the forthcoming Dell PowerEdge servers with NVIDIA Hopper Tensor Core GPUs provide leaps in performance, scalability and security to accelerate the largest workloads.”

NVIDIA H100 to Turbocharge Dell Customer Data Centers

Fresh off setting world records in the MLPerf AI training benchmarks earlier this month, NVIDIA H100 is the world’s most advanced GPU. It’s packed with 80 billion transistors and features major advances to accelerate AI, HPC, memory bandwidth and interconnects at data center scale.

H100 is the engine of AI factories that organizations use to process and refine large datasets to produce intelligence and accelerate their AI-driven businesses. It features a dedicated Transformer Engine and fourth generation NVIDIA NVLink interconnect to accelerate exascale workloads.

Each system built on the NVIDIA HGX H100 platform features four or eight Hopper GPUs to deliver the highest AI performance with 3.5x more energy efficiency compared with the prior generation, saving development costs while accelerating discoveries.

Powerful Performance and Customer Options for AI, HPC Workloads

Dell systems power the work of leading organizations, and the forthcoming Hopper-based systems will broaden Dell’s portfolio of solutions for its customers around the world.

With its enhanced, air-cooled design and support for eight NVIDIA H100 GPUs with built-in NVLink connectivity, the PowerEdge XE9680 is purpose-built for optimal performance to help modernize operations and infrastructure to drive AI initiatives.

The PowerEdge XE8640, Dell’s new HGX H100 system with four Hopper GPUs, enables businesses to develop, train and deploy AI and machine learning models. A 4U rack system, the XE8640 delivers faster AI training performance and increased core capabilities with up to four PCIe Gen5 slots, NVIDIA Multi-Instance GPU (MIG) technology and NVIDIA GPUDirect Storage support.

Availability

The Dell PowerEdge XE9680 and XE8640 will be available from Dell starting in the first half of 2023.

Customers can now try NVIDIA H100 GPUs on Dell PowerEdge servers on NVIDIA LaunchPad, which provides free hands-on experiences and gives companies access to the latest hardware and NVIDIA AI software.

To take a first look at Dell’s new servers with NVIDIA H100 GPUs at SC22, visit Dell in booth 2443.

The post NVIDIA and Dell Technologies Deliver AI and HPC Performance in Leaps and Bounds With Hopper, at SC22 appeared first on NVIDIA Blog.

Give the Gift of Gaming With GeForce NOW Gift Cards

The holiday season is approaching, and GeForce NOW has everyone covered. This GFN Thursday brings an easy way to give the gift of gaming with GeForce NOW gift cards, for yourself or for a gamer in your life.

Plus, stream 10 new games from the cloud this week, including the first story downloadable content (DLC) for Dying Light 2.

No Time Like the Present

For those seeking the best present to give any gamer, look no further than a GeForce NOW membership.

With digital gift cards, NVIDIA makes it easy for anyone to give an upgrade to GeForce PC performance in the cloud at any time of the year. And just in time for the holidays, physical gift cards will be available as well. For a limited time, these new $50 physical gift cards will ship with a special GeForce NOW holiday gift box at no additional cost, perfect to put in someone’s stocking.

Powerful PC gaming, perfectly packaged.

These new gift cards can be redeemed for the membership level of preference, whether for three months of an RTX 3080 membership or six months of a Priority membership. Both let PC gamers stream over 1,400 games from popular digital gaming stores like Steam, Epic Games Store, Ubisoft Connect, Origin and GOG.com, all from GeForce-powered PCs in the cloud.

That means high-performance streaming on nearly any device, including PCs, Macs, Android mobile devices, iOS devices, SHIELD TV and Samsung and LG TVs. GeForce NOW is the only way to play Genshin Impact on Macs, one of more than 100 free-to-play games in the GeForce NOW library.

GeForce NOW Devices
Stream across nearly any device.

RTX 3080 members get extra gaming goodness with dedicated access to the highest-performance servers, eight-hour gaming sessions and the ability to stream up to 4K at 60 frames per second or 1440p at 120 FPS, all at ultra-low latency.

Gift cards can be redeemed with an active GFN membership. Gift one to yourself or a buddy for hours of fun cloud gaming.

Learn more about GeForce NOW gift cards and get started with gift giving today.

Stayin’ Alive

Dying Light 2’s “Bloody Ties” DLC is available now, and GeForce NOW members can stream it today.

Dying Light 2 on GeForce NOW
Become a Parkour champion to survive in this horror survival game.

Embark on a new story adventure and gain access to “The Carnage Hall” — an old opera building full of challenges and quests — including surprising new weapon types, character interactions and more discoveries to uncover.

Priority and RTX 3080 members can explore Villedor with NVIDIA DLSS and RTX ON for cinematic, real-time ray tracing — all while keeping an eye on their meter to avoid becoming infected themselves.

Put a Bow on It

The Unliving on GeForce NOW
Be a fearsome Necromancer in the dark world of The Unliving.

There’s always a new adventure streaming from the cloud. Here are the 10 titles joining the GeForce NOW library this week:

  • The Unliving (New release on Steam)
  • A Little to the Left (New release on Steam)
  • Alba: A Wildlife Adventure (Free on Epic Games from Nov. 10-17)
  • Shadow Tactics: Blades of the Shogun (Free on Epic Games from Nov. 10-17)
  • Yum Yum Cookstar (New release on Steam, Nov. 11)
  • Guns, Gore and Cannoli 2 (Steam)
  • Heads Will Roll: Downfall (Steam)
  • Hidden Through Time (Steam)
  • The Legend of Tianding (Steam)
  • Railgrade (Epic Games)

Members can still upgrade to a six-month Priority membership for 40% off the normal price. Better hurry though, as this offer ends on Sunday, Nov. 20.

Before we wrap up this GFN Thursday, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.

What Is Denoising?

Anyone who’s taken a photo with a digital camera is likely familiar with a “noisy” image: discolored spots that make the photo lose clarity and sharpness.

Many photographers have tips and tricks to reduce noise in images, including fixing the settings on the camera lens or taking photos in different lighting. But it isn’t just photographs that can look discolored — noise is common in computer graphics, too.

Noise refers to the random variations of brightness and color that aren’t part of the original image. Removing noise from imagery — which is becoming more common in the field of image processing and computer vision — is known as denoising.

Image denoising uses advanced algorithms to remove noise from graphics and renders, making a huge difference to the quality of images. Photorealistic visuals and immersive renders would not be possible without denoising technology.

What Is Denoising?

In computer graphics, images can be made up of both useful information and noise. The latter reduces clarity. The ideal end product of denoising would be a crisp image that only preserves the useful information. When denoising an image, it’s also important to keep visual details and components such as edges, corners, textures and other sharp structures.

To reduce noise without affecting the visual details, three types of signals in an image must be targeted by denoising:

  • Diffuse — scattered lighting reflected in all directions;
  • Specular or reflections — lighting reflected in a particular direction; and
  • Infinite light-source shadows — sunlight, shadows and any other visible light source.

To create the clearest image, a renderer must cast thousands of rays in directions following the diffuse and specular signals. In real-time ray tracing, however, often only one ray per pixel, or even fewer, is used.

Denoising is necessary in real-time ray tracing because ray counts must be kept relatively low to maintain interactive performance.

Noisy image with one ray per pixel.
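The relationship between rays per pixel and noise can be illustrated with a toy Monte Carlo estimator in plain Python. The "scene" here is hypothetical, purely for illustration: each ray returns a random amount of light, and averaging more rays drives the per-pixel estimate toward the true value.

```python
import random
import statistics

def shade_pixel(samples: int, rng: random.Random) -> float:
    """Monte Carlo estimate of one pixel's brightness.

    Hypothetical scene: each ray returns a random amount of light whose
    true mean is 0.5. Fewer rays per pixel means a noisier estimate.
    """
    return sum(rng.random() for _ in range(samples)) / samples

rng = random.Random(42)
# Per-pixel noise (standard deviation across 1,000 pixels) at
# 1 ray per pixel vs. 1,024 rays per pixel.
one_ray = [shade_pixel(1, rng) for _ in range(1000)]
many_rays = [shade_pixel(1024, rng) for _ in range(1000)]
print(statistics.stdev(one_ray) > 10 * statistics.stdev(many_rays))  # True
```

Noise shrinks with the square root of the sample count, which is why a single ray per pixel produces the speckled images that denoisers must clean up.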

How Does Denoising Work?

Image denoising is commonly based on three techniques: spatial filtering, temporal accumulation, and machine learning and deep learning reconstruction.

Example of a spatially and temporally denoised final image.

Spatial filtering selectively alters parts of an image by reusing similar neighboring pixels. The advantage of spatial filtering is that it doesn’t produce temporal lag, the inability to respond immediately to changing scene conditions. However, spatial filtering introduces blurriness and muddiness, as well as temporal instability, which refers to flickering and visual imperfections in the image.
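As a rough sketch of the idea, not how production denoisers implement it, a minimal spatial filter that averages each pixel with its neighbors can be written in plain Python:

```python
def box_filter(img, radius=1):
    """Spatial filter: replace each pixel with the mean of its neighborhood.

    `img` is a list of rows of floats. Averaging suppresses per-pixel
    noise but also blurs sharp edges, which is the trade-off noted above.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A single bright "noise" pixel in a dark image is spread out and dimmed.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
smoothed = box_filter(img)
print(smoothed[2][2])  # 1/9, about 0.111
```

The lone bright pixel is smeared across its 3x3 neighborhood and dimmed to one ninth of its value: exactly the blurring behavior described above, which is why real denoisers weight neighbors more carefully than a plain box filter.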

Temporal accumulation reuses data from the previous frame to determine if there are any artifacts — or visual anomalies — in the current frame that can be corrected. Although temporal accumulation introduces temporal lag, it doesn’t produce blurriness. Instead, it adds temporal stability to reduce flickering and artifacts over multiple frames.
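A minimal sketch of temporal accumulation, using an exponential moving average (a common formulation; real denoisers also reproject pixels to follow motion between frames):

```python
import random

def temporal_accumulate(history, current, alpha=0.1):
    """Blend the current frame into an accumulated history buffer.

    A small `alpha` favors history (more stability, but temporal lag);
    a large `alpha` favors the new frame (responsive, but noisier).
    """
    return [[(1 - alpha) * h + alpha * c for h, c in zip(hrow, crow)]
            for hrow, crow in zip(history, current)]

# Feed a constant 1x1 "frame": the history converges geometrically toward it,
# showing the lag between a scene change and the accumulated result.
history = [[0.0]]
for _ in range(50):
    history = temporal_accumulate(history, [[1.0]])
print(history[0][0] > 0.99)  # True

# With noisy frames around a true value of 0.5, the same blend
# averages the noise away over successive frames.
rng = random.Random(0)
noisy_history = [[0.0]]
for _ in range(200):
    frame = [[0.5 + rng.uniform(-0.4, 0.4)]]
    noisy_history = temporal_accumulate(noisy_history, frame)
```

The `alpha` parameter makes the stability-versus-lag trade-off described above explicit: history-heavy blending is stable but slow to react, frame-heavy blending reacts instantly but flickers.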

Example of temporal accumulation at 20 frames.

Machine learning and deep learning reconstruction uses a neural network to reconstruct the signal. The neural network is trained using various noisy and reference signals. Though the reconstructed signal for a single frame can look complete, it can become temporally unstable over time, so a form of temporal stabilization is needed.

Denoising in Images

Denoising provides users with immediate visual feedback, so they can see and interact with graphics and designs. This allows them to experiment with variables like light, materials, viewing angle and shadows.

Solutions like NVIDIA Real-Time Denoisers (NRD) make denoising techniques more accessible for developers to integrate into pipelines. NRD is a spatio-temporal denoising library that’s agnostic to application programming interfaces and designed to work with low rays per pixel.

NRD uses input signals and environmental conditions to deliver results comparable to ground-truth images. See NRD in action below:

With NRD, developers can achieve real-time results using a limited budget of rays per pixel. In the video above, viewers can see the heavy lifting that NRD does in real time to resolve image noise.

Popular games such as Dying Light 2 and Hitman III use NRD for denoising.

NRD highlighted in Techland’s Dying Light 2 Stay Human.

NRD supports the denoising of diffuse, specular or reflections, and shadow signals. The denoisers included in NRD are:

  • ReBLUR — based on the idea of self-stabilizing, recurrent blurring. It’s designed to work with diffuse and specular signals generated with low ray budgets.
  • SIGMA — a fast shadow denoiser. It supports shadows from any type of light source, like the sun and local lights.
  • ReLAX — preserves lighting details produced by NVIDIA RTX Direct Illumination, a framework that enables developers to render scenes with millions of dynamic area lights in real time. ReLAX also yields better temporal stability and remains responsive to changing lighting conditions.

See NRD in action with Hitman 3:

Learn about more technologies in game development.

NVIDIA AI Turbocharges Industrial Research, Scientific Discovery in the Cloud on Rescale HPC-as-a-Service Platform

Just like many businesses, the world of industrial scientific computing has a data problem.

Solving seemingly intractable challenges — from developing new energy sources and creating new modes of transportation, to addressing mission-critical issues such as driving operational efficiencies and improving customer support — requires massive amounts of high performance computing.

Instead of having to architect, engineer and build ever-more supercomputers, companies such as Electrolux, Denso, Samsung and Virgin Orbit are embracing benefits offered by Rescale’s cloud platform. This makes it possible to scale their accelerated computing in an energy-efficient way and to speed their innovation.

Addressing the industrial scientific community’s rising demand for AI in the cloud, NVIDIA founder and CEO Jensen Huang joined Rescale founder and CEO Joris Poort at the Rescale Big Compute virtual conference, where they announced that Rescale is adopting the NVIDIA AI software portfolio.

NVIDIA AI will bring new capabilities to Rescale’s HPC-as-a-service offerings, which include simulation and engineering software used by hundreds of customers across industries. NVIDIA is also accelerating the Rescale Compute Recommendation Engine announced today, which enables customers to identify the right infrastructure options to optimize cost and speed objectives.

“Fusing principled and data-driven methods, physics-ML AI models let us explore our design space at speeds and scales many orders of magnitude greater than ever before,” Huang said. “Rescale is at the intersection of these major trends. NVIDIA’s accelerated and AI computing platform perfectly complements Rescale to advance industrial scientific computing.”

“Engineers and scientists working on breakthrough innovations need integrated cloud platforms that put R&D software and accelerated computing at their fingertips,” said Poort. “We’ve helped customers speed discoveries and save costs with NVIDIA-accelerated HPC, and adding NVIDIA AI Enterprise to the Rescale platform will bring together the most advanced computing capabilities with the best of AI, and support an even broader range of AI-powered workflows R&D leaders can run on any cloud of their choice.”

Expanding HPC to New Horizons in the Cloud With NVIDIA AI

The companies announced that they are working to bring NVIDIA AI Enterprise to Rescale, broadening the cloud platform’s offerings to include NVIDIA-supported AI workflows and processing engines. Once it’s available, customers will be able to develop AI applications in any leading cloud, with support from NVIDIA.

The globally adopted software of the NVIDIA AI platform, NVIDIA AI Enterprise includes essential processing engines for each step of the AI workflow, from data processing and AI model training to simulation and large-scale deployment.

NVIDIA AI enables organizations to develop predictive models to complement and expand industrial HPC research and development with applications such as computer vision, route and supply chain optimization, robotics simulations and more.

The Rescale software catalog provides access to hundreds of NVIDIA-accelerated containerized applications and pretrained AI models on NVIDIA NGC, and allows customers to run simulations on demand and scale up or down as needed.

NVIDIA Modulus to Speed Physics-Based Machine Learning

Rescale now offers the NVIDIA Modulus framework for developing physics machine learning neural network models to support a broad range of engineering use cases.

Modulus blends the power of physics with data to build high-fidelity models that enable near-real-time simulations. With just a few clicks on the Rescale platform, Modulus will allow customers to run their entire AI-driven simulation workflow, from data pre-processing and model training to inference and model deployment.
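As a conceptual sketch only (the actual Modulus API differs and uses automatic differentiation rather than finite differences), a physics-informed loss combines a data-fitting term with a physics-residual term. The ODE u'(x) = u(x) used here is a hypothetical example:

```python
import math

def physics_informed_loss(model, data_pts, collocation_pts, dx=1e-4):
    """Loss combining observed data with a physics residual.

    Hypothetical setup: the model u(x) should both match measured data
    points and satisfy the ODE u'(x) = u(x), checked here by central
    finite differences at collocation points. Frameworks like Modulus
    use automatic differentiation instead of finite differences.
    """
    data_loss = sum((model(x) - y) ** 2 for x, y in data_pts) / len(data_pts)
    physics_loss = sum(
        ((model(x + dx) - model(x - dx)) / (2 * dx) - model(x)) ** 2
        for x in collocation_pts) / len(collocation_pts)
    return data_loss + physics_loss

# exp(x) satisfies u' = u exactly, so both loss terms are essentially zero.
loss = physics_informed_loss(math.exp, [(0.0, 1.0)], [0.1, 0.5, 1.0])
print(loss < 1e-6)  # True
```

Minimizing a loss of this shape is what lets physics-ML models stay consistent with governing equations even in regions where no training data exists.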

On-Prem to Cloud Workflow Orchestration Expands Flexibility

Rescale is additionally integrating the NVIDIA Base Command Platform AI developer workflow management software, which can orchestrate workloads across clouds to on-premises NVIDIA DGX systems.

Rescale’s HPC-as-a-service platform is accelerated by NVIDIA on leading cloud service provider platforms, including Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure. Rescale is a member of the NVIDIA Inception program.

To learn more, watch Huang and Poort discuss the news in the replay of the Big Compute keynote address.

NVIDIA Hopper, Ampere GPUs Sweep Benchmarks in AI Training

Two months after their debut sweeping MLPerf inference benchmarks, NVIDIA H100 Tensor Core GPUs set world records across enterprise AI workloads in the industry group’s latest tests of AI training.

Together, the results show H100 is the best choice for users who demand utmost performance when creating and deploying advanced AI models.

MLPerf is the industry standard for measuring AI performance. It’s backed by a broad group that includes Amazon, Arm, Baidu, Google, Harvard University, Intel, Meta, Microsoft, Stanford University and the University of Toronto.

In a related MLPerf benchmark also released today, NVIDIA A100 Tensor Core GPUs raised the bar they set last year in high performance computing (HPC).

Hopper sweeps MLPerf for AI Training
NVIDIA H100 GPUs were up to 6.7x faster than A100 GPUs when they were first submitted for MLPerf Training.

H100 GPUs (aka Hopper) raised the bar in per-accelerator performance in MLPerf Training. They delivered up to 6.7x more performance than previous-generation GPUs when first submitted for MLPerf Training. By the same comparison, today’s A100 GPUs pack 2.5x more muscle, thanks to advances in software.

Due in part to its Transformer Engine, Hopper excelled in training the popular BERT model for natural language processing. It’s among the largest and most performance-hungry of the MLPerf AI models.

MLPerf gives users the confidence to make informed buying decisions because the benchmarks cover today’s most popular AI workloads — computer vision, natural language processing, recommendation systems, reinforcement learning and more. The tests are peer reviewed, so users can rely on their results.

A100 GPUs Hit New Peak in HPC

In the separate suite of MLPerf HPC benchmarks, A100 GPUs swept all tests of training AI models in demanding scientific workloads run on supercomputers. The results show the NVIDIA AI platform’s ability to scale to the world’s toughest technical challenges.

For example, A100 GPUs trained AI models in the CosmoFlow test for astrophysics 9x faster than the best results two years ago in the first round of MLPerf HPC. In that same workload, the A100 also delivered up to a whopping 66x more throughput per chip than an alternative offering.

The HPC benchmarks train models for work in astrophysics, weather forecasting and molecular dynamics. They are among many technical fields, like drug discovery, adopting AI to advance science.

A100 leads in MLPerf HPC
In tests around the globe, A100 GPUs led in both speed and throughput of training.

Supercomputer centers in Asia, Europe and the U.S. participated in the latest round of the MLPerf HPC tests. In its debut on the DeepCAM benchmarks, Dell Technologies showed strong results using NVIDIA A100 GPUs.

An Unparalleled Ecosystem

In the enterprise AI training benchmarks, a total of 11 companies, including the Microsoft Azure cloud service, made submissions using NVIDIA A100, A30 and A40 GPUs. System makers including ASUS, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro used a total of nine NVIDIA-Certified Systems for their submissions.

In the latest round, at least three companies joined NVIDIA in submitting results on all eight MLPerf training workloads. That versatility is important because real-world applications often require a suite of diverse AI models.

NVIDIA partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI platforms and vendors.

Under the Hood

The NVIDIA AI platform provides a full stack from chips to systems, software and services. That enables continuous performance improvements over time.

For example, submissions in the latest HPC tests applied a suite of software optimizations and techniques described in a technical article. Together they slashed runtime on one benchmark by nearly 5x, to just 22 minutes from 101 minutes.

A second article describes how NVIDIA optimized its platform for the enterprise AI benchmarks. For example, we used NVIDIA DALI to efficiently load and pre-process data for a computer vision benchmark.

All the software used in the tests is available from the MLPerf repository, so anyone can get these world-class results. NVIDIA continuously folds these optimizations into containers available on NGC, a software hub for GPU applications.

New Volvo EX90 SUV Heralds AI Era for Swedish Automaker, Built on NVIDIA DRIVE

It’s a new age for safety.

Volvo Cars unveiled the Volvo EX90 SUV today in Stockholm, marking the beginning of a new era of electrification, technology and safety for the automaker. The flagship vehicle is redesigned from tip to tail — with a new powertrain, branding and software-defined AI compute — powered by the centralized NVIDIA DRIVE Orin platform.

The Volvo EX90 silhouette is in line with Volvo Cars’ design principle of form following function — and looks good at the same time.

Under the hood, it’s filled with cutting-edge technology for new advances in electrification, connectivity, core computing, safety and infotainment. The EX90 is the first Volvo car that is hardware-ready to deliver unsupervised autonomous driving.

These features come together to deliver an SUV that cements Volvo Cars in the next generation of software-defined vehicles.

“We used technology to reimagine the entire car,” said Volvo Cars CEO Jim Rowan. “The Volvo EX90 is the safest that Volvo has ever produced.”

Computer on Wheels

The Volvo EX90 looks smart and has the brains to back it up.

Volvo Cars’ proprietary software runs on NVIDIA DRIVE Orin to operate most of the core functions inside the car, including safety, infotainment and battery management. This intelligent architecture is designed to deliver a highly responsive and enjoyable experience for every passenger in the car.

The DRIVE Orin system-on-a-chip delivers 254 trillion operations per second — ample compute headroom for a software-defined architecture. It’s designed to handle the large number of applications and deep neural networks needed to achieve systematic safety standards such as ISO 26262 ASIL-D.

The Volvo EX90 isn’t just a new car. It’s a highly advanced computer on wheels, designed to improve over time as Volvo Cars adds more software features.

Just Getting Started

The Volvo EX90 is just the beginning of Volvo Cars’ plans for the software-defined future.

The automaker plans to launch a new EV every year through 2025, with the end goal of having a purely electric, software-defined lineup by 2030.

The new flagship SUV is available for preorder in select markets, launching the next phase in Volvo Cars’ leadership in premium design and safety.

HORN Free! Roaming Rhinos Could Be Guarded by AI Drones

Call it the ultimate example of a job that’s sometimes best done remotely. Wildlife researchers say rhinos are magnificent beasts, but they like to be left alone, especially when they’re with their young.

In the latest example of how researchers are using the latest technologies to track animals less invasively, a team of researchers has proposed harnessing high-flying AI-equipped drones to track the endangered black rhino through the wilds of Namibia.

In a paper published earlier this year in the journal PeerJ, the researchers show the potential of drone-based AI to identify animals in even the remotest areas and provide real-time updates on their status from the air.

While drones — and technology of just about every kind — have been harnessed to track African wildlife, the proposal promises to help gamekeepers move faster to protect rhinos and other megafauna from poachers.

AI Podcast host Noah Kravitz spoke to two of the authors of the paper.

Zoey Jewell is co-founder and president of WildTrack.org, a global network of biologists and conservationists dedicated to non-invasive wildlife monitoring techniques. Alice Hua is a recent graduate of the School of Information at UC Berkeley in California, and an ML platform engineer at CrowdStrike.

And for more, read the full paper at https://peerj.com/articles/13779/.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species With NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

3D Illustrator Juliestrator Makes Marvelous Mushroom Magic This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The warm, friendly animation Mushroom Spirit is featured In the NVIDIA Studio this week, modeled by talented 3D illustrator Julie Greenberg, aka Juliestrator.

In addition, NVIDIA Omniverse, an open platform for virtual collaboration and real-time photorealistic simulation, just dropped a beta release for 3D artists.

And with the approaching winter season comes the next NVIDIA Studio community challenge. Join the #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on the NVIDIA Studio social media channels. Be sure to tag #WinterArtChallenge to enter.

New in NVIDIA Omniverse

With new support for GeForce RTX 40 Series GPUs, NVIDIA Omniverse is faster, more accessible and more flexible than ever for collaborative 3D workflows across apps.

An example of what’s possible when talented 3D artists collaborate in Omniverse: a scene from the ‘NVIDIA Racer RTX’ demo.

NVIDIA DLSS 3, powered by the GeForce RTX 40 Series, is now available in Omniverse, enabling complete real-time ray-tracing workflows within the platform. The NVIDIA Ada Lovelace GPU architecture delivers a generational leap in performance and power that enables users to work in large-scale, virtual worlds with true interactivity — so creators can navigate viewports at full fidelity in real time.

The Omniverse Create app has new large world-authoring and animation improvements.

In Omniverse Machinima, creators gain AI superpowers with Audio2Gesture — an AI-powered tool that creates lifelike body movements based on an audio file.

PhysX 5, the technology behind Omniverse’s hyperrealistic physics simulation, features built-in audio for collisions, as well as improved cloth and deformable body simulations. Newly available as open source software, PhysX 5 enables artists and developers to modify, build and distribute custom physics engines.

The Omniverse Connect library has received updates to Omniverse Connectors, including Autodesk 3ds Max, Autodesk Maya, Autodesk Revit, Epic Games Unreal Engine, McNeel Rhino, Trimble SketchUp and Graphisoft Archicad. Connectors for Autodesk Alias and PTC Creo are also now available.

The updated Reallusion iClone 8.1.0 live-sync Connector allows for seamless character interactions between iClone and Omniverse apps. And OTOY’s OctaneRender Hydra render delegate enables Omniverse users to access OctaneRender directly in Omniverse apps.

Learn more about the Omniverse release and tune into the Twitch livestream detailing announcements on Wednesday, Nov. 9. Download Omniverse, which is free for NVIDIA RTX and GeForce RTX GPU owners.

Featuring a Fun-gi 

Juliestrator’s artistic inspiration comes from the examination of the different worlds that people create. “No matter if it’s the latest Netflix show or an artwork I see on Twitter, I love when a piece of art leaves space for my own imagination to fill in the gaps and come up with my own stories,” she said.

Mushroom Spirit was conceived as a sketch for last year’s Inktober challenge, which had the prompt “spirit.” Rather than creating a ghost like many others, Juliestrator took a different approach: Mushroom Spirit was born as a cute nature spirit lurking in a forest, inspired by the Kodama creatures from the film Princess Mononoke.

Juliestrator gathered reference material using Pinterest. She then used PureRef’s overlay feature to help position reference imagery while modeling in Blender software. Though it’s rare for Juliestrator to sketch in 2D for 3D projects, she said Mushroom Spirit called for a more personal touch, so she generated a quick scribble in Procreate.

The origins of ‘Mushroom Spirit.’

Using Blender, she then entered the block-out phase — creating a rough draft of the scene from simple 3D shapes, without details or polished art assets. This kept her base meshes clean, so the next round required only minor edits rather than entirely new meshes.

Getting the basic shapes down by blocking out ‘Mushroom Spirit’ in Blender.

At this point, many artists would typically start to model detailed scene elements, but Juliestrator prioritizes coloring. “I’ve noticed how much color influences the compositions and mood of the artwork, so I try to make this important decision as early as possible,” the artist said.

Color modifications in Adobe Substance 3D Painter.

She used Adobe Substance 3D Painter software to apply myriad colors and experimental textures to her models. On her NVIDIA Studio laptop, a Razer Blade 15 Studio equipped with an NVIDIA Quadro RTX 5000 GPU, Juliestrator used RTX-accelerated light and ambient occlusion to bake assets in mere seconds.

She then refined the existing models in Blender. “This is where powerful hardware helps a lot,” she said. “The NVIDIA OptiX AI-accelerated denoiser helps me preview any changes I make in Blender almost instantly, which lets me test more ideas at the same time and as a result get better finished renders.”

Tinkering and tweaking color palettes in Blender.

Though she enjoys the modeling stage, Juliestrator said that the desire to refine an endless number of details can be overwhelming. As such, she deploys an “80/20 rule,” dedicating no more than 20% of the entire project’s timeline to detailed modeling. “That’s the magic of the 80/20 rule: tackle the correct 20%, and the other 80% often falls into place,” she said.

Finally, Juliestrator adjusted the composition in 3D — manipulating the light objects, rotating the camera and adding animations. She completed all of this quickly with an assist from RTX-accelerated OptiX ray tracing in the Blender viewport, using Blender Cycles for the fastest frame renders.

Animations in Blender during the final stage.

Blender is Juliestrator’s preferred 3D modeling app, she said, due to its ease of use and powerful AI features, as well as its accessibility. “I truly appreciate the efforts of the Blender Foundation and all of its partners in keeping Blender free and available to people from all over the world, to enhance anyone’s creativity,” she said.

Juliestrator chose to use an NVIDIA Studio laptop, a “porta-bella” system for efficiency and convenience, she said. “I needed a powerful computer that would let me use both Blender and a game engine like Unity or Unreal Engine 5, while staying mobile and on the go,” the artist added.

Illustrator Julie Greenberg, aka Juliestrator.

Check out Juliestrator’s portfolio and social media links.

For more direction and inspiration for building 3D worlds, check out Juliestrator’s five-part tutorial, Modeling 3D New York Diorama, which covers the critical stages in 3D workflows: sketching composition, modeling details and more. The tutorials can be found on the NVIDIA Studio YouTube channel, which posts new videos every week.

And don’t forget to enter the NVIDIA Studio #WinterArtChallenge on Instagram, Twitter or Facebook.

The post 3D Illustrator Juliestrator Makes Marvelous Mushroom Magic This Week ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.

Tiny Computer, Huge Learnings: Students at SMU Build Baby Supercomputer With NVIDIA Jetson Edge AI Platform

“DIY” and “supercomputer” aren’t words typically used together.

But a do-it-yourself supercomputer is exactly what students built at Southern Methodist University, in Dallas, using 16 NVIDIA Jetson Nano modules, four power supplies, more than 60 handmade wires, a network switch and some cooling fans.

The project, dubbed SMU’s “baby supercomputer,” aims to offer hands-on experience to those who may never get near a full-sized supercomputer, which can fill a warehouse or sit locked away in a data center or the cloud.

Instead, this mini supercomputer fits comfortably on a desk, allowing students to tinker with it and learn about what makes up a cluster. A touch screen displays a dashboard with the status of all of its nodes.
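A node-status dashboard like the one SMU describes can be sketched with Python’s standard library alone. The sketch below is a hypothetical stand-in, not SMU’s actual code: the `nanoXX` hostnames are assumptions, and a node’s health is approximated by probing its SSH port.

```python
import socket

def node_is_up(host, port=22, timeout=0.5):
    """Approximate node health by trying to open a TCP connection to its SSH port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def format_status(statuses):
    """Render a simple text dashboard: one line per node, name then UP/DOWN."""
    return "\n".join(
        f"{name:<8} {'UP' if up else 'DOWN'}" for name, up in statuses
    )

if __name__ == "__main__":
    # Hypothetical hostnames for a 16-node Jetson Nano cluster
    nodes = [f"nano{i:02d}" for i in range(16)]
    statuses = [(name, node_is_up(name)) for name in nodes]
    print(format_status(statuses))
```

A production dashboard would typically poll richer per-node metrics — temperature, load, memory — but a reachability table like this is enough to show students what “cluster state” means.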

“We started this project to demonstrate the nuts and bolts of what goes into a computer cluster,” said Eric Godat, team lead for research and data science in the internal IT organization at SMU.

Next week, the baby supercomputer will be on display at SC22, a supercomputing conference taking place in Dallas, just down the highway from SMU.

The SMU team will host a booth to talk to researchers, vendors and students about the university’s high-performance computing programs and the recent deployment of its NVIDIA DGX SuperPOD for AI-accelerated research.

Plus, in collaboration with Mark III Systems — a member of the NVIDIA Partner Network — the SMU Office of Information Technology will provide conference attendees with a tour of the campus data center to showcase the DGX SuperPOD in action. Learn details at SMU’s booth #3834.

“We’re bringing the baby supercomputer to the conference to get people to stop by and ask, ‘Oh, what’s that?’” said Godat, who served as a mentor for Conner Ozenne, a senior computer science major at SMU and one of the brains behind the cluster.

“I started studying computer science in high school because programming fulfilled the foreign language requirement,” said Ozenne, who now aims to integrate AI and machine learning with web design for his career. “Doing those first projects as a high school freshman, I immediately knew this is what I wanted to do for the rest of my life.”

Ozenne is a STAR at SMU — a Student Technology Associate in Residence. He first pitched the design and budget for the baby supercomputer to Godat’s team two summers ago. With a grant of a couple thousand dollars and a whole lot of enthusiasm, he got to work.

Birth of a Baby Supercomputer

Ozenne, in collaboration with another student, built the baby supercomputer from scratch.

“They had to learn how to strip wires and not shock themselves — they put together everything from the power supplies to the networking all by themselves,” Godat said. With a smile, he added, “We only started one small fire.”

The first iteration was a mess of wires on a table connecting the NVIDIA Jetson Nano developer kits, with cardboard boxes as heatsinks, Ozenne said.

“We chose to use NVIDIA Jetson modules because no other small compute devices have onboard GPUs, which would let us tackle more AI and machine learning problems,” he added.

Soon Ozenne gave the baby supercomputer case upgrades: from cardboard to foam to acrylic plates, which he laser cut from 3D vector files in SMU’s innovation gym, a makerspace for students.

“It was my first time doing all of this, and it was a great learning experience, with lots of fun nights in the lab,” Ozenne said.

A Work in Progress

In just four months, the project went from nothing to something that resembled a supercomputer, according to Ozenne. But the project is ongoing.

The team is now developing the mini cluster’s software stack, with the help of the NVIDIA JetPack software development kit, and prepping it to accomplish some small-scale machine learning tasks. Plus, the baby supercomputer could level up with the recently announced NVIDIA Jetson Orin Nano modules.
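The sort of small-scale, data-parallel task such a cluster can teach is easy to sketch with Python’s standard library; in this hypothetical example — not part of SMU’s software stack — `multiprocessing` worker processes stand in for the cluster’s 16 Jetson nodes, each computing a partial result over its slice of the data.

```python
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    """The work each 'node' performs on its slice of the dataset."""
    return sum(x * x for x in chunk)

def split(data, n_workers):
    """Scatter the dataset into roughly equal chunks, one per worker."""
    k = max(1, len(data) // n_workers)
    return [data[i:i + k] for i in range(0, len(data), k)]

if __name__ == "__main__":
    data = list(range(1000))
    chunks = split(data, n_workers=16)  # 16 chunks, like the 16 Jetson Nanos
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)
    # Combining the partial results gives the same answer as a serial pass
    print(sum(partials))
```

On the real cluster the scatter and gather would happen over the network rather than between local processes, but the map-then-reduce shape of the computation is the same.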

“Our NVIDIA DGX SuperPOD just opened up on campus, so we don’t really need this baby supercomputer to be an actual compute environment,” Godat said. “But the mini cluster is an effective teaching tool for how all this stuff really works — it lets students experiment with stripping the wires, managing a parallel file system, reimaging cards and deploying cluster software.”

SMU’s NVIDIA DGX SuperPOD, which includes 160 NVIDIA A100 Tensor Core GPUs, is in an alpha-rollout phase for faculty, who are using it to train AI models for molecular dynamics, computational chemistry, astrophysics, quantum mechanics and a slew of other research topics.

Godat collaborates with the NVIDIA DGX team to flexibly configure the DGX SuperPOD to support dozens of different AI, machine learning, data processing and HPC projects.

“I love it, because every day is different — I could be working on an AI-related project in the school of the arts, and the next day I’m in the law school, and the next I’m in the particle physics department,” said Godat, who himself has a Ph.D. in theoretical particle physics from SMU.

“There are applications for AI everywhere,” Ozenne agreed.

Learn more from Godat and other experts on designing an AI Center of Excellence in this NVIDIA GTC session available on demand.

Join NVIDIA at SC22 to explore partner booths on the show floor and engage with virtual content all week — including a special address, demos and other sessions.

Meet the Omnivore: Indie Showrunner Transforms Napkin Doodles Into Animated Shorts With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Rafi Nizam

3D artist Rafi Nizam has worn many hats since starting his career as a web designer more than two decades ago, back when “designing for the web was still wild,” as he put it.

He’s now becoming a leader in the next wave of creation — using extended reality and virtual production — with the help of NVIDIA Omniverse, a platform for building and connecting custom 3D pipelines.

The London-based showrunner, creative consultant and entertainment executive previously worked at advertising agencies and led creative teams at Sony Pictures, BBC and NBCUniversal.

In addition to being an award-winning independent animator, director, character designer and storyteller who serves as chief creative officer at Masterpiece Studio, he’s head of story at game developer Opis Group, and showrunner at Lunar-X, a next-gen entertainment company.

Plus, in recent years, he’s taken on what he considers his most important role of all — being a father. And his art is now often inspired by family.

“Being present in the moment with my children and observing the world without preconceptions often sparks ideas for me,” Nizam said.

His animated shorts have so far focused on themes of self care and finding stillness amidst chaos. He’s at work on a new computer-graphics-animated series, ArtSquad, in which fun-loving, vibrant 3D characters form a band, playing instruments made of classroom objects and solving problems through the power of art.

“The myriad of 3D apps in my animation pipeline can sync and come together in Omniverse using the Universal Scene Description framework,” he said. “This interoperability allows me to be 10x more productive when visualizing my show concepts — and I’ve cut my outsourcing costs by 50%, as Omniverse enables me to render, lookdev, lay out scenes and manipulate cameras by myself.”

From Concept to Creation

Nizam said he often starts his projects with “good ol’ pencil and paper on a Post-it note or napkin, whenever inspiration strikes.”

He then takes his ideas to a drawing desk, where he creates a simple sketch before moving into pre-production using digital content-creation apps like Adobe Illustrator, Adobe Photoshop and Procreate.

Nizam next creates 3D production assets from his 2D sketches, manipulating them in virtual reality using Adobe Substance 3D Modeler software.

“Things start to move pretty rapidly from here,” he said, “because VR is such an intuitive way to make 3D assets. Plus, rigging and texturing in the Masterpiece Studio creative suite and Adobe Substance 3D can be near automatic.”

The artist uses the Omniverse Create XR spatial computing app to lay out his scenes in VR. He blocks out character actions, designs sets and finalizes textures using Unreal Engine 5, Autodesk Maya and Blender software.

Performance capture through Perception Neuron Studio quickly gets Nizam close to final animation. And with the easily extensible USD framework, Nizam brings his 3D assets into the Omniverse Create app for rapid look development. Here he enhances character animation with built-in hyperrealistic physics and renders final shots in real time.

“Omniverse offers me an easy entry point to USD-based workflows, live collaboration across disciplines, rapid visualization, real-time rendering, an accessible physics engine and the easy modification of preset simulations,” Nizam said. “I can’t wait to get back in and try out more ideas.”

At home, Nizam uses an NVIDIA Studio workstation powered by an NVIDIA RTX A6000 GPU. To create on the go, the artist turns to his NVIDIA Studio laptop from ASUS, equipped with a GeForce RTX 3060 GPU.

In addition, his entire workflow is accelerated by NVIDIA Studio, a platform of NVIDIA RTX and AI-accelerated creator apps, Studio Drivers and a suite of exclusive creative tools.

When not creating transmedia projects and franchises for his clients, Nizam can be found mentoring young creators for Sony Talent League, playing make believe with his children or chilling with his two cats, Hamlet and Omelette.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.
