3D Illustrator Juliestrator Makes Marvelous Mushroom Magic This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The warm, friendly animation Mushroom Spirit, modeled by talented 3D illustrator Julie Greenberg, aka Juliestrator, is featured In the NVIDIA Studio this week.

In addition, NVIDIA Omniverse, an open platform for virtual collaboration and real-time photorealistic simulation, just dropped a beta release for 3D artists.

And with the approaching winter season comes the next NVIDIA Studio community challenge. Join the #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on the NVIDIA Studio social media channels. Be sure to tag #WinterArtChallenge to enter.

New in NVIDIA Omniverse

With new support for GeForce RTX 40 Series GPUs, NVIDIA Omniverse is faster, more accessible and more flexible than ever for collaborative 3D workflows across apps.

An example of what’s possible when talented 3D artists collaborate in Omniverse: a scene from the ‘NVIDIA Racer RTX’ demo.

NVIDIA DLSS 3, powered by the GeForce RTX 40 Series, is now available in Omniverse, enabling complete real-time ray-tracing workflows within the platform. The NVIDIA Ada Lovelace GPU architecture delivers a generational leap in performance and power that enables users to work in large-scale, virtual worlds with true interactivity — so creators can navigate viewports at full fidelity in real time.

The Omniverse Create app has new large world-authoring and animation improvements.

In Omniverse Machinima, creators gain AI superpowers with Audio2Gesture — an AI-powered tool that creates lifelike body movements based on an audio file.

PhysX 5, the technology behind Omniverse’s hyperrealistic physics simulation, features built-in audio for collisions, as well as improved cloth and deformable body simulations. Newly available as open source software, PhysX 5 enables artists and developers to modify, build and distribute custom physics engines.

The Omniverse Connect library has received updates to Omniverse Connectors, including Autodesk 3ds Max, Autodesk Maya, Autodesk Revit, Epic Games Unreal Engine, McNeel Rhino, Trimble SketchUp and Graphisoft Archicad. Connectors for Autodesk Alias and PTC Creo are also now available.

The updated Reallusion iClone 8.1.0’s live-sync Connector allows for seamless character interactions between iClone and Omniverse apps. And OTOY’s OctaneRender Hydra Render Delegate enables Omniverse users to access OctaneRender directly in Omniverse apps.

Learn more about the Omniverse release and tune into the Twitch livestream detailing announcements on Wednesday, Nov. 9. Download Omniverse, which is free for NVIDIA RTX and GeForce RTX GPU owners.

Featuring a Fun-gi 

Juliestrator’s artistic inspiration comes from the examination of the different worlds that people create. “No matter if it’s the latest Netflix show or an artwork I see on Twitter, I love when a piece of art leaves space for my own imagination to fill in the gaps and come up with my own stories,” she said.

Mushroom Spirit was conceived as a sketch for last year’s Inktober challenge, which had the prompt of “spirit.” Rather than creating a ghost like many others, Juliestrator took a different approach: Mushroom Spirit was born as a cute nature spirit lurking in a forest, like the Kodama creatures from the film Princess Mononoke, from which she drew inspiration.

Juliestrator gathered reference material using Pinterest. She then used PureRef’s overlay feature to help position reference imagery while modeling in Blender software. Though it’s rare for Juliestrator to sketch in 2D for 3D projects, she said Mushroom Spirit called for a more personal touch, so she generated a quick scribble in Procreate.

The origins of ‘Mushroom Spirit.’

Using Blender, she then entered the block-out phase — creating a rough draft of the scene from simple 3D shapes, without details or polished art assets. Blocking out kept her base meshes clean, so the next round needed only minor edits rather than entirely new meshes.

Getting the basic shapes down by blocking out ‘Mushroom Spirit’ in Blender.
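For readers who want to try a block-out of their own, here is a minimal sketch using bpy, Blender’s built-in Python API. The object names and proportions are illustrative assumptions, not taken from Juliestrator’s scene.

```python
# Minimal block-out sketch in Blender's bpy API. Names and dimensions are
# hypothetical stand-ins for a mushroom-shaped character.
import bpy

# Stem: a plain cylinder, no detail yet
bpy.ops.mesh.primitive_cylinder_add(radius=0.3, depth=1.2, location=(0, 0, 0.6))
bpy.context.object.name = "stem_blockout"

# Cap: a UV sphere squashed into a cap silhouette
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.8, location=(0, 0, 1.4))
cap = bpy.context.object
cap.name = "cap_blockout"
cap.scale = (1.0, 1.0, 0.6)
```

Because the block-out uses only primitives, the silhouette can be judged and revised in seconds before any sculpting begins.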

At this point, many artists would typically start to model detailed scene elements, but Juliestrator prioritizes coloring. “I’ve noticed how much color influences the compositions and mood of the artwork, so I try to make this important decision as early as possible,” the artist said.

Color modifications in Adobe Substance 3D Painter.

She used Adobe Substance 3D Painter software to apply a myriad of colors and experimental textures to her models. On her NVIDIA Studio laptop, the Razer Blade 15 Studio equipped with an NVIDIA Quadro RTX 5000 GPU, Juliestrator used RTX-accelerated light and ambient occlusion to bake assets in mere seconds.

She then refined the existing models in Blender. “This is where powerful hardware helps a lot,” she said. “The NVIDIA OptiX AI-accelerated denoiser helps me preview any changes I make in Blender almost instantly, which lets me test more ideas at the same time and as a result get better finished renders.”

Tinkering and tweaking color palettes in Blender.
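For those curious how this looks in practice, below is a minimal sketch of enabling GPU rendering and the OptiX viewport denoiser through Blender’s Python API; the property names follow recent Blender 3.x releases and may differ in other versions.

```python
# Hedged sketch: switch Cycles to the GPU via OptiX and denoise the viewport.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"      # route Cycles through the RTX GPU

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.device = "GPU"

# Denoise the interactive preview so edits resolve almost instantly
scene.cycles.use_preview_denoising = True
scene.cycles.preview_denoiser = "OPTIX"
```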

Though she enjoys the modeling stage, Juliestrator said that the desire to refine an endless number of details can be overwhelming. As such, she deploys an “80/20 rule,” dedicating no more than 20% of the entire project’s timeline to detailed modeling. “That’s the magic of the 80/20 rule: tackle the correct 20%, and the other 80% often falls into place,” she said.

Finally, Juliestrator adjusted the composition in 3D — manipulating the light objects, rotating the camera and adding animations. She completed all of this quickly with an assist from RTX-accelerated OptiX ray tracing in the Blender viewport, using Blender Cycles for the fastest frame renders.

Animations in Blender during the final stage.

Blender is Juliestrator’s preferred 3D modeling app, she said, due to its ease of use and powerful AI features, as well as its accessibility. “I truly appreciate the efforts of the Blender Foundation and all of its partners in keeping Blender free and available to people from all over the world, to enhance anyone’s creativity,” she said.


Juliestrator chose to use an NVIDIA Studio laptop, a “porta-bella” system for efficiency and convenience, she said. “I needed a powerful computer that would let me use both Blender and a game engine like Unity or Unreal Engine 5, while staying mobile and on the go,” the artist added.

Illustrator Julie Greenberg, aka Juliestrator.

Check out Juliestrator’s portfolio and social media links.

For more direction and inspiration for building 3D worlds, check out Juliestrator’s five-part tutorial, Modeling 3D New York Diorama, which covers the critical stages in 3D workflows: sketching composition, modeling details and more. The tutorials can be found on the NVIDIA Studio YouTube channel, which posts new videos every week.

And don’t forget to enter the NVIDIA Studio #WinterArtChallenge on Instagram, Twitter or Facebook.


Tiny Computer, Huge Learnings: Students at SMU Build Baby Supercomputer With NVIDIA Jetson Edge AI Platform

“DIY” and “supercomputer” aren’t words typically used together.

But a do-it-yourself supercomputer is exactly what students built at Southern Methodist University, in Dallas, using 16 NVIDIA Jetson Nano modules, four power supplies, more than 60 handmade wires, a network switch and some cooling fans.

The project, dubbed SMU’s “baby supercomputer,” aims to help educate those who may never get hands-on with a normal-sized supercomputer, which can sometimes fill a warehouse, or be locked in a data center or in the cloud.

Instead, this mini supercomputer fits comfortably on a desk, allowing students to tinker with it and learn about what makes up a cluster. A touch screen displays a dashboard with the status of all of its nodes.

“We started this project to demonstrate the nuts and bolts of what goes into a computer cluster,” said Eric Godat, team lead for research and data science in the internal IT organization at SMU.

Next week, the baby supercomputer will be on display at SC22, a supercomputing conference taking place in Dallas, just down the highway from SMU.

The SMU team will host a booth to talk to researchers, vendors and students about the university’s high-performance computing programs and the recent deployment of its NVIDIA DGX SuperPOD for AI-accelerated research.

Plus, in collaboration with Mark III Systems — a member of the NVIDIA Partner Network — the SMU Office of Information Technology will provide conference attendees with a tour of the campus data center to showcase the DGX SuperPOD in action. Learn details at SMU’s booth #3834.

“We’re bringing the baby supercomputer to the conference to get people to stop by and ask, ‘Oh, what’s that?’” said Godat, who served as a mentor for Conner Ozenne, a senior computer science major at SMU and one of the brains behind the cluster.

“I started studying computer science in high school because programming fulfilled the foreign language requirement,” said Ozenne, who now aims to integrate AI and machine learning with web design for his career. “Doing those first projects as a high school freshman, I immediately knew this is what I wanted to do for the rest of my life.”

Ozenne is a STAR at SMU — a Student Technology Associate in Residence. He first pitched the design and budget for the baby supercomputer to Godat’s team two summers ago. With a grant of a couple thousand dollars and a whole lot of enthusiasm, he got to work.

Birth of a Baby Supercomputer

Ozenne, in collaboration with another student, built the baby supercomputer from scratch.

“They had to learn how to strip wires and not shock themselves — they put together everything from the power supplies to the networking all by themselves,” Godat said. With a smile, he added, “We only started one small fire.”

The first iteration was a mess of wires on a table connecting the NVIDIA Jetson Nano developer kits, with cardboard boxes as heatsinks, Ozenne said.

“We chose to use NVIDIA Jetson modules because no other small compute devices have onboard GPUs, which would let us tackle more AI and machine learning problems,” he added.

Soon Ozenne gave the baby supercomputer case upgrades: from cardboard to foam to acrylic plates, which he laser cut from 3D vector files in SMU’s innovation gym, a makerspace for students.

“It was my first time doing all of this, and it was a great learning experience, with lots of fun nights in the lab,” Ozenne said.

A Work in Progress

In just four months, the project went from nothing to something that resembled a supercomputer, according to Ozenne. But the project is ongoing.

The team is now developing the mini cluster’s software stack, with the help of the NVIDIA JetPack software development kit, and prepping it to accomplish some small-scale machine learning tasks. Plus, the baby supercomputer could level up with the recently announced NVIDIA Jetson Orin Nano modules.
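As a sketch of the kind of small-scale parallel job such a cluster can teach with, the snippet below distributes a toy computation across nodes using mpi4py. It is an illustrative assumption, not the SMU team’s actual software stack.

```python
# Hypothetical teaching job for a 16-node Jetson cluster, using mpi4py.
# Run with something like: mpirun -np 16 python sum_squares.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's ID, 0..15 on a 16-node cluster
size = comm.Get_size()

# Each node computes its own slice of the work...
local = sum(i * i for i in range(rank, 1_000_000, size))

# ...and node 0 reduces the partial results into one answer.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares below 1,000,000 = {total:,}")
```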

“Our NVIDIA DGX SuperPOD just opened up on campus, so we don’t really need this baby supercomputer to be an actual compute environment,” Godat said. “But the mini cluster is an effective teaching tool for how all this stuff really works — it lets students experiment with stripping the wires, managing a parallel file system, reimaging cards and deploying cluster software.”

SMU’s NVIDIA DGX SuperPOD, which includes 160 NVIDIA A100 Tensor Core GPUs, is in an alpha-rollout phase for faculty, who are using it to train AI models for molecular dynamics, computational chemistry, astrophysics, quantum mechanics and a slew of other research topics.

Godat collaborates with the NVIDIA DGX team to flexibly configure the DGX SuperPOD to support tens of different AI, machine learning, data processing and HPC projects.

“I love it, because every day is different — I could be working on an AI-related project in the school of the arts, and the next day I’m in the law school, and the next I’m in the particle physics department,” said Godat, who himself has a Ph.D. in theoretical particle physics from SMU.

“There are applications for AI everywhere,” Ozenne agreed.

Learn more from Godat and other experts on designing an AI Center of Excellence in this NVIDIA GTC session available on demand.

Join NVIDIA at SC22 to explore partner booths on the show floor and engage with virtual content all week — including a special address, demos and other sessions.


Meet the Omnivore: Indie Showrunner Transforms Napkin Doodles Into Animated Shorts With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Rafi Nizam

3D artist Rafi Nizam has worn many hats since starting his career as a web designer more than two decades ago, back when “designing for the web was still wild,” as he put it.

He’s now becoming a leader in the next wave of creation — using extended reality and virtual production — with the help of NVIDIA Omniverse, a platform for building and connecting custom 3D pipelines.

The London-based showrunner, creative consultant and entertainment executive previously worked at advertising agencies and led creative teams at Sony Pictures, BBC and NBCUniversal.

In addition to being an award-winning independent animator, director, character designer and storyteller who serves as chief creative officer at Masterpiece Studio, he’s head of story at game developer Opis Group, and showrunner at Lunar-X, a next-gen entertainment company.

Plus, in recent years, he’s taken on what he considers his most important role of all — being a father. And his art is now often inspired by family.

“Being present in the moment with my children and observing the world without preconceptions often sparks ideas for me,” Nizam said.

His animated shorts have so far focused on themes of self care and finding stillness amidst chaos. He’s at work on a new computer-graphics-animated series, ArtSquad, in which fun-loving, vibrant 3D characters form a band, playing instruments made of classroom objects and solving problems through the power of art.

“The myriad of 3D apps in my animation pipeline can sync and come together in Omniverse using the Universal Scene Description framework,” he said. “This interoperability allows me to be 10x more productive when visualizing my show concepts — and I’ve cut my outsourcing costs by 50%, as Omniverse enables me to render, lookdev, lay out scenes and manipulate cameras by myself.”

From Concept to Creation

Nizam said he often starts his projects with “good ol’ pencil and paper on a Post-it note or napkin, whenever inspiration strikes.”

He then takes his ideas to a drawing desk, where he creates a simple sketch before homing in on pre-production using digital content-creation apps like Adobe Illustrator, Adobe Photoshop and Procreate.

Nizam next creates 3D production assets from his 2D sketches, manipulating them in virtual reality using Adobe Substance 3D Modeler software.

“Things start to move pretty rapidly from here,” he said, “because VR is such an intuitive way to make 3D assets. Plus, rigging and texturing in the Masterpiece Studio creative suite and Adobe Substance 3D can be near automatic.”

The artist uses the Omniverse Create XR spatial computing app to lay out his scenes in VR. He blocks out character actions, designs sets and finalizes textures using Unreal Engine 5, Autodesk Maya and Blender software.

Performance capture through Perception Neuron Studio quickly gets Nizam close to final animation. And with the easily extensible USD framework, Nizam brings his 3D assets into the Omniverse Create app for rapid look development. Here he enhances character animation with built-in hyperrealistic physics and renders final shots in real time.

“Omniverse offers me an easy entry point to USD-based workflows, live collaboration across disciplines, rapid visualization, real-time rendering, an accessible physics engine and the easy modification of preset simulations,” Nizam said. “I can’t wait to get back in and try out more ideas.”

At home, Nizam uses an NVIDIA Studio workstation powered by an NVIDIA RTX A6000 GPU. To create on the go, the artist turns to his NVIDIA Studio laptop from ASUS, equipped with a GeForce RTX 3060 GPU.

In addition, his entire workflow is accelerated by NVIDIA Studio, a platform of NVIDIA RTX and AI-accelerated creator apps, Studio Drivers and a suite of exclusive creative tools.

When not creating transmedia projects and franchises for his clients, Nizam can be found mentoring young creators for Sony Talent League, playing make believe with his children or chilling with his two cats, Hamlet and Omelette.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.


Take the Green Train: NVIDIA BlueField DPUs Drive Data Center Efficiency

The numbers are in, and they paint a picture of data centers going a deeper shade of green, thanks to energy-efficient networks accelerated with data processing units (DPUs).

A suite of tests run with help from Ericsson, Red Hat and VMware shows power reductions of up to 24% on servers using NVIDIA BlueField-2 DPUs. In one case, DPUs delivered 54x the performance of CPUs.

The work, described in a recent whitepaper, offloaded core networking jobs from power-hungry host processors to DPUs designed to run them more efficiently.

Accelerated computing with DPUs for networking, security and storage jobs is one of the next big steps for making data centers more power efficient. It’s the latest of a handful of optimizations, described in the whitepaper, for data centers moving into the era of green computing.

DPUs Tested on VMware vSphere

Seeing the trend toward energy-efficient networks, VMware enabled DPUs to run its virtualization software, used by thousands of companies worldwide. NVIDIA has run several tests with VMware since its vSphere 8 software release this fall.

For example, on VMware vSphere Distributed Services Engine — software that offloads and accelerates networking and security functions using DPUs — BlueField-2 delivered higher performance while freeing up 20% of the CPU resources the same work would require without DPUs.

That means users can deploy fewer servers to run the same workload, or run more applications on the same servers.

Power Costs Cut Nearly $2 Million

Few data centers face a more demanding job than those run by telecoms providers. Their networks shuttle every bit of data that smartphone users generate or request between their cellular networks and the internet.

Researchers at Ericsson tested whether operators could reduce their power consumption on this massive workload using SmartNICs, the network interface cards that handle DPU functions. Their test let CPUs slow down or sleep while an NVIDIA ConnectX SmartNIC handled the networking tasks.

The results, detailed in a recent article, were stunning.

Energy consumption of server CPUs fell 24%, from 190 to 145 watts on a fully loaded network. This single DPU application could cut power costs by nearly $2 million over three years for a large data center.

Ericsson ran the user-plane function for 5G networks on DPUs in three scenarios.
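To see how a 45-watt-per-server saving scales to that figure, here is a back-of-envelope check; the server count and electricity price are assumptions for illustration, not numbers from the study.

```python
# Rough sanity check of the three-year savings claim. Server count and
# electricity price are assumed, not taken from the Ericsson study.
watts_saved = 190 - 145                 # 45 W per fully loaded server
servers = 10_000                        # hypothetical large data center
usd_per_kwh = 0.15                      # assumed electricity price
hours = 3 * 365 * 24                    # three years of continuous operation

kwh = watts_saved * servers * hours / 1000
print(f"~${kwh * usd_per_kwh:,.0f} saved over three years")  # ~= $1.8 million
```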

In the article, Ericsson’s CTO, Erik Ekudden, underscored the importance of the work.

“There’s a growing sense of urgency among communication service providers to find and implement innovative solutions that reduce network energy consumption,” he wrote. And the DPU techniques “save energy across a wide range of traffic conditions.”

70% Less Overhead, 54x More Performance

Results were even more dramatic for tests on Red Hat OpenShift, used by half of all Fortune 500 banks, airlines and telcos to manage software containers.

In the tests, BlueField-2 DPUs handled virtualization, encryption and networking jobs needed to manage these portable packages of applications and code.

The DPUs slashed networking demands on CPUs by 70%, freeing them up to run other applications. What’s more, they accelerated networking jobs by a whopping 54x.

A technical blog provides more detail on the tests.

Speeding the Way to Zero Trust

Across every industry, businesses are embracing a philosophy of zero trust to improve network security. So, NVIDIA tested IPsec, one of the most popular data center encryption protocols, on BlueField DPUs.

The test showed data centers could improve performance and cut power consumption 21% for servers and 34% for clients on networks running IPsec on DPUs. For large data centers, that could translate to nearly $9 million in savings on electric bills over three years.

NVIDIA and its partners continue to put DPUs to the test in an expanding portfolio of use cases, but the big picture is clear.

“In a world facing rising energy costs and rising demand for green IT infrastructure, the use of DPUs will become increasingly popular,” the whitepaper concludes.

It’s good to know the numbers, but seeing is believing. So apply to run your own test of DPUs on VMware’s vSphere.


Unearthing Data: Vision AI Startup Digs Into Digital Twins for Mining and Construction

Skycatch, a San Francisco-based startup, has been helping companies mine both data and minerals for nearly a decade.

The software-maker is now digging into the creation of digital twins, with an initial focus on the mining and construction industry, using the NVIDIA Omniverse platform for connecting and building custom 3D pipelines.

SkyVerse, which is a part of Skycatch’s vision AI platform, is a combination of computer vision software and custom Omniverse extensions that enables users to enrich and animate virtual worlds of mines and other sites with near-real-time geospatial data.

“With Omniverse, we can turn massive amounts of non-visual data into dynamic visual information that’s easy to contextualize and consume,” said Christian Sanz, founder and CEO of Skycatch. “We can truly recreate the physical world.”

SkyVerse can help industrial sites simulate variables such as weather, broken machines and more up to five years into the future — while learning from happenings up to five years in the past, Sanz said.

The platform automates the entire visualization pipeline for mining and construction environments.

First, it processes data from drones, lidar and other sensors across the environment, whether at the edge using the NVIDIA Jetson platform or in the cloud.

It then creates 3D meshes from 2D images, using neural networks built from NVIDIA’s pretrained models to remove unneeded objects like dump trucks and other equipment from the visualizations.

Next, SkyVerse stitches this into a single 3D model that’s converted to the Universal Scene Description (USD) framework. The master model is then brought into Omniverse Enterprise for the creation of a digital twin that’s live-synced with real-world telemetry data.
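For a flavor of what the USD step can look like, here is a minimal sketch using Pixar’s pxr Python API; the prim paths, file names and telemetry values are hypothetical, not Skycatch’s actual schema.

```python
# Hedged sketch: assemble a USD stage for a site model with the pxr API.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("mine_site.usd")
UsdGeom.Xform.Define(stage, "/World")

# Reference the processed photogrammetry terrain from its own USD file
terrain = stage.DefinePrim("/World/Terrain")
terrain.GetReferences().AddReference("./terrain_mesh.usd")

# Telemetry-driven assets live alongside it and can be updated live
truck = UsdGeom.Xform.Define(stage, "/World/HaulTruck_01")
UsdGeom.XformCommonAPI(truck.GetPrim()).SetTranslate(Gf.Vec3d(120.0, 45.0, 0.0))

stage.GetRootLayer().Save()
```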

“The simulation of machines in the environment, different weather conditions, traffic jams — no other platform has enabled this, but all of it is possible in Omniverse with hyperreal physics and object mass, which is really groundbreaking,” Sanz said.

Skycatch is a Premier partner in NVIDIA Inception, a free, global program that nurtures startups revolutionizing industries with cutting-edge technologies. Premier partners receive additional go-to-market support, exposure to venture capital firms and technical expertise to help them scale faster.

Processing and Visualizing Data

Companies have deployed Skycatch’s fully automated technologies to gather insights from aerial data across tens of thousands of sites at several top mining companies.

The Skycatch team first determines optimal positioning of the data-collection sensors across mine vehicles using the NVIDIA Isaac Sim platform, a robotics simulation and synthetic data generation (SDG) tool for developing, testing and training AI-based robots.

“Isaac Sim has saved us a year’s worth of testing time — going into the field, placing a sensor, testing how it functions and repeating the process,” Sanz said.

The team also plans to integrate the Omniverse Replicator software development kit into SkyVerse to generate physically accurate 3D synthetic data and build SDG tools to accelerate the training of perception networks beyond the robotics domain.

Once data from a site is collected, SkyVerse uses edge devices powered by the NVIDIA Jetson Nano and Jetson AGX Xavier modules to automatically process up to terabytes of it per day and turn it into kilobyte-size analytics that can be easily transferred to frontline users.

This data processing was sped up 3x by the NVIDIA CUDA parallel computing platform, according to Sanz. The team is also looking to deploy the new Jetson Orin modules for next-level performance.
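The pattern of shrinking terabytes of imagery into kilobyte-scale analytics maps naturally to GPU array libraries. Below is a hedged sketch using CuPy; the "vegetation index" is an illustrative stand-in, not one of Skycatch’s proprietary analytics.

```python
# Hypothetical edge reduction: a burst of drone frames in, tiny stats out.
import cupy as cp
import numpy as np

def summarize(batch: np.ndarray) -> np.ndarray:
    """batch: (n, h, w, 3) uint8 frames -> (n, 2) float32 summaries."""
    imgs = cp.asarray(batch, dtype=cp.float32)      # host -> GPU
    red, green = imgs[..., 0], imgs[..., 1]
    veg = (green - red) / (green + red + 1e-6)      # toy vegetation index
    stats = cp.stack([veg.mean(axis=(1, 2)),        # kilobytes out
                      veg.std(axis=(1, 2))], axis=1)
    return cp.asnumpy(stats)                        # GPU -> host

frames = np.random.randint(0, 255, (16, 720, 1280, 3), dtype=np.uint8)
print(summarize(frames).shape)  # (16, 2)
```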

“It’s not humanly possible to go through tens of thousands of images a day and extract critical analytics from them,” Sanz said. “So we’re helping to expand human eyesight with neural networks.”

Using pretrained models from the NVIDIA TAO Toolkit, Skycatch also built neural networks that can remove extraneous objects and vehicles from the visualizations, and texturize over these spots in the 3D mesh.

The digital terrain model, which has sub-five-centimeter precision, can then be brought into Omniverse for the creation of a digital twin using the easily extensible USD framework, custom SkyVerse Omniverse extensions and NVIDIA RTX GPUs.

“It took just around three months to build the Omniverse extensions, despite the complexity of our extensions’ capabilities, thanks to access to technical experts through NVIDIA Inception,” Sanz said.

Skycatch is working with one of Canada’s leading mining companies, Teck Resources, to implement the use of Omniverse-based digital twins for its project sites.

“Teck Resources has been using Skycatch’s compute engine across all of our mine sites globally and is now expanding visualization and simulation capabilities with SkyVerse and our own digital twin strategy,” said Preston Miller, lead of technology and innovation at Teck Resources. “Delivering near-real-time visual data will allow Teck teams to quickly contextualize mine sites and make faster operational decisions on mission-critical, time-sensitive projects.”

The Omniverse extensions built by Skycatch will be available soon — learn more.

Safety and Sustainability

AI-powered data analysis and digital twins can make operational processes for mining and construction companies safer, more sustainable and more efficient.

For example, according to Sanz, mining companies need the ability to quickly locate the toe and crest (or bottom and top) of “benches,” narrow strips of land beside an open-pit mine. When a machine is automated to go in and out of a mine, it must be programmed to stay 10 meters away from the crest at all times to avoid the risk of sliding, Sanz said.

Previously, surveying and analyzing landforms to determine precise toes and crests typically took up to five days. With the help of NVIDIA AI, SkyVerse can now generate this information within minutes, Sanz said.
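As a simple illustration of that kind of geofencing check (a toy sketch, not Skycatch’s algorithm), the snippet below flags a machine that strays within 10 meters of a crest polyline; all coordinates are made up.

```python
# Toy crest-distance check with NumPy; coordinates are hypothetical.
import numpy as np

def min_distance_to_polyline(point, polyline):
    """Smallest distance from a 2D point to any segment of a polyline."""
    p = np.asarray(point, dtype=float)
    d = np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        d = min(d, np.linalg.norm(p - (a + t * ab)))
    return d

crest = [(0, 0), (50, 5), (100, 0)]   # surveyed crest line, meters
machine = (48, 13)                    # current machine position
dist = min_distance_to_polyline(machine, crest)
print(f"{dist:.1f} m from crest", "- OK" if dist >= 10 else "- TOO CLOSE")
```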

In addition, SkyVerse eliminates 10,000 open-pit interactions for customers per year, per site, Sanz said. These are situations in which humans and vehicles can intersect within a mine, posing a safety threat.

“At its core, Skycatch’s goal is to provide context and full awareness for what’s going on at a mining or construction site in near-real time — and better environmental context leads to enhanced safety for workers,” Sanz said.

Skycatch aims to boost sustainability efforts for the mining industry, too.

“In addition to mining companies, governmental organizations want visibility into how mines are operating — whether their surrounding environments are properly taken care of — and our platform offers these insights,” Sanz said.

Plus, minerals like cobalt, nickel and lithium are required for electrification and the energy transition. These all come from mine sites, Sanz said, which can become safer and more efficient with the help of SkyVerse’s digital twins and vision AI.

Dive deeper into technology for a sustainable future with Skycatch and other Inception partners in the on-demand webinar, Powering Energy Startup Success With NVIDIA Inception.

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Learn more about and apply to join NVIDIA Inception.


Check Out 26 New Games Streaming on GeForce NOW in November

It’s a brand new month, which means this GFN Thursday is all about the new games streaming from the cloud.

In November, 26 titles will join the GeForce NOW library. Kick off with 11 additions this week, like Total War: THREE KINGDOMS and new content updates for Genshin Impact and Apex Legends.

Plus, leading 5G provider Rain has announced it will be introducing “GeForce NOW powered by Rain” to South Africa early next year. Look forward to more updates to come.

And don’t miss out on the 40% discount for GeForce NOW 6-month Priority memberships. This offer is only available for a limited time.

Build Your Empire

Lead the charge this week with Creative Assembly and Sega’s Total War: THREE KINGDOMS, a turn-based, empire-building strategy game and the 13th entry in the award-winning Total War franchise. Become one of many great leaders from history and conquer enemies to build a formidable empire.

Resolve an epic conflict in ancient China to unify the country and rebuild the empire.

The game is set in ancient China, and gamers must save the country from the oppressive rule of a warlord. Choose from a cast of a dozen legendary heroic characters to unify the nation and dominate enemies. Each has their own agenda, and there are plenty of different tactics for players to employ.

Priority members can extend their campaigns with gaming sessions of up to six hours at 1080p and 60 frames per second. An RTX 3080 membership adds support for 1440p 120 FPS streaming and sessions of up to eight hours, with performance that will bring foes to their knees.

Members can find ‘Total War: THREE KINGDOMS’ and other Sega games in a dedicated row in the GeForce NOW app.

More to Explore

Alongside the 11 new games streaming this week, members can jump into updates for the hottest free-to-play titles on GeForce NOW.

Genshin Impact Version 3.2, “Akasha Pulses, the Kalpa Flame Rises,” is available to stream from the cloud. This latest update introduces the last chapter of the Sumeru Archon Quest, two new playable characters — Nahida and Layla — as well as new events and gameplay. Stream it now on PC, Mac or Chromebook, or on mobile with enhanced touch controls.

Catch the conclusion of the main storyline for Sumeru, the newest region added to ‘Genshin Impact.’

Or squad up in Apex Legends: Eclipse, available to stream now on the cloud. Season 15 brings with it the new Broken Moon map, the newest defensive Legend — Catalyst — and much more.

Don’t mess-a with Tressa.

Also, after working closely with Square Enix, we’re happy to share that members can stream STAR OCEAN THE DIVINE FORCE on GeForce NOW beginning this week.

Here’s the full list of games joining this week:

  • Against the Storm (Epic Games, and new release on Steam)
  • Horse Tales: Emerald Valley Ranch (New release on Steam, Nov. 3)
  • Space Tail: Every Journey Leads Home (New release on Steam, Nov. 3)
  • The Chant (New release on Steam, Nov. 3)
  • The Entropy Centre (New release on Steam, Nov. 3)
  • WRC Generations — The FIA WRC Official Game (New release on Steam, Nov. 3)
  • Filament (Free on Epic Games, Nov. 3-10)
  • STAR OCEAN THE DIVINE FORCE (Steam)
  • PAGUI (Steam)
  • RISK: Global Domination (Steam)
  • Total War: THREE KINGDOMS (Steam)

Arriving in November

But wait, there’s more! Among the total 26 games joining GeForce NOW in November is the highly anticipated Warhammer 40,000: Darktide, with support for NVIDIA RTX and DLSS.

Here’s a sneak peek:

  • The Unliving (New release on Steam, Nov. 7)
  • TERRACOTTA (New release on Steam and Epic Games, Nov. 7)
  • A Little to the Left (New release on Steam, Nov. 8)
  • Yum Yum Cookstar (New release on Steam, Nov. 11)
  • Nobody — The Turnaround (New release on Steam, Nov. 17)
  • Goat Simulator 3 (New release on Epic Games, Nov. 17)
  • Evil West (New release on Steam, Nov. 22)
  • Colortone: Remixed (New release on Steam, Nov. 30)
  • Warhammer 40,000: Darktide (New release on Steam, Nov. 30)
  • Heads Will Roll: Downfall (Steam)
  • Guns Gore and Cannoli 2 (Steam)
  • Hidden Through Time (Steam)
  • Cave Blazers (Steam)
  • Railgrade (Epic Games)
  • The Legend of Tianding (Steam)

While The Unliving was originally announced in October, the release date of the game shifted to Monday, Nov. 7.

Howlin’ for More

October brought more treats for members. Don’t miss the 14 extra titles added last month. 

With all of these sweet new titles coming to the cloud, getting your game on is as easy as pie. Speaking of pie, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.


Stormy Weather? Scientist Sharpens Forecasts With AI

Editor’s note: This is the first in a series of blogs on researchers advancing science in the expanding universe of high performance computing.

A perpetual shower of random raindrops falls inside a three-foot metal ring Dale Durran erected outside his front door. It’s a symbol of his passion for finding order in the seeming chaos of the planet’s weather.

A part-time sculptor and full-time professor of atmospheric science at the University of Washington, Durran has co-authored dozens of papers describing patterns in Earth’s ever-changing skies. It’s a field for those who crave a confounding challenge trying to express with math the endless dance of air and water.

Dale Durran

In 2019, Durran acquired a new tool, AI. He teamed up with a grad student and a Microsoft researcher to build the first model to demonstrate deep learning’s potential to predict the weather.

Though crude, the model outperformed the complex equations used for the first computer-based forecasts. The descendants of those equations now run on the world’s biggest supercomputers. In contrast, AI slashes the traditional load of required calculations and works faster on much smaller systems.

“It was a dramatic revelation that said we better jump into this with both feet,” Durran recalled.

Sunny Outlook for AI

Last year, the team took their work to the next level. Their latest neural network can process 320 six-week forecasts in less than a minute on the four NVIDIA A100 Tensor Core GPUs in an NVIDIA DGX Station. That’s more than 6x the 51 forecasts today’s supercomputers synthesize to make weather predictions.
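Autoregressive AI forecasting of this kind typically steps a whole ensemble forward in one GPU batch. The sketch below is a hedged illustration in PyTorch; the tiny convolutional "model," grid shape and step count are stand-ins, as the post doesn’t detail the team’s architecture.

```python
# Hedged sketch of batched, autoregressive weather rollouts in PyTorch.
import torch

model = torch.nn.Conv2d(4, 4, kernel_size=5, padding=2)  # placeholder net

@torch.no_grad()
def roll_out(net, states, steps):
    """states: (ensemble, channels, lat, lon); one call = one time step."""
    path = [states]
    for _ in range(steps):
        states = net(states)          # advance every member one step
        path.append(states)
    return torch.stack(path)

# 320 ensemble members advanced together in a single batch, as in the post
ensemble = torch.randn(320, 4, 91, 180)   # perturbed initial conditions
trajectory = roll_out(model, ensemble, steps=6)
print(trajectory.shape)  # (7, 320, 4, 91, 180)
```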

In a show of how rapidly the technology is evolving, the model was able to forecast, almost as well as traditional methods, what became the path of Hurricane Irma through the Caribbean in 2017. The same model also could crank out a week’s forecast in a tenth of a second on a single NVIDIA V100 Tensor Core GPU.

Durran’s latest work used AI to forecast Hurricane Irma’s path in Florida more efficiently and nearly as accurately as traditional methods.

Durran foresees AI crunching thousands of forecasts simultaneously to deliver a clearer statistical picture with radically fewer resources than conventional equations. Some suggest the performance advances will be measured in as many as five orders of magnitude and use a fraction of the power.

AI Ingests Satellite Data

The next big step could radically widen the lens for weather watchers.

The complex equations today’s predictions use can’t readily handle the growing wealth of satellite data on details like cloud patterns, soil moisture and drought stress in plants. Durran believes AI models can.

One of his graduate students hopes to demonstrate an AI model this winter that directly incorporates satellite data on global cloud cover. If successful, it could point the way for AI to improve forecasts using the deluge of data types now being collected from space.

In a separate effort, researchers at the University of Washington are using deep learning to apply a grid astronomers use to track stars to their work understanding the atmosphere. The novel mesh could help map out a whole new style of weather forecasting, Durran said.

Harvest of a Good Season

In nearly 40 years as an educator, Durran has mentored dozens of students and written two highly rated textbooks on fluid dynamics, the math used to understand the weather and climate.

One of his students, Gretchen Mullendore, now heads a lab at the U.S. National Center for Atmospheric Research, working with top researchers to improve weather forecasting models.

“I was lucky to work with Dale in the late 1990s and early 2000s on adapting numerical weather prediction to the latest hardware at the time,” said Mullendore. “I am so thankful to have had an advisor that showed me it’s cool to be excited by science and computers.”

Carrying on a Legacy

Durran is slated to receive in January the American Meteorological Society’s most prestigious honor, the Jule G. Charney Medal. It’s named after the scientist who worked with John von Neumann in the 1950s to develop the algorithms weather forecasters still use today.

In 1979, Charney also authored one of the earliest scientific papers on global warming. Following in his footsteps, Durran wrote two editorials last year for The Washington Post to help a broad audience understand the impacts of climate change and rising CO2 emissions.

The editorials articulate a passion he discovered at his first job in 1976, creating computer models of air pollution trends. “I decided I’d rather work on the front end of that problem,” he said of his career shift to meteorology.

It’s a field notoriously bedeviled by effects as subtle as a butterfly’s wings, and that challenge motivates his passion to advance science.


GeForce RTX 40 Series Receives Massive Creator App Benefits This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Artists deploying the critically acclaimed GeForce RTX 4090 GPUs are primed to receive significant performance boosts in key creative apps. OBS Studio and Google Chrome enabled AV1 encoding; Topaz AI-powered apps and ON1 software added Tensor Core acceleration; and VTube Studio integrated NVIDIA Broadcast augmented-reality features that enable high-quality, seamless control of avatars.

Plus, a special spook-tober edition of In the NVIDIA Studio features two talented 3D artists and their Halloween-themed creations this week.

3D and special effects artist Eric Tualle, better known as ATOM, shares his short film sequence, Mr. Pumpkin, which was created to test motion-capture techniques and the render speeds of his new GeForce RTX 4090 GPU.

NVIDIA 3D artist Sabour Amirazodi might be the biggest Halloween fan ever. Look no further than the extraordinary light show he creates for his neighborhood every year. His workflow is powered by a mobile workstation equipped with an NVIDIA RTX A5000 GPU.

Finally, check out the #From2Dto3D challenge highlights, including NVIDIA Studio’s favorite inspirational artwork brought to life in beautiful 3D.

Tricks and Treats With RTX

The new GeForce RTX 40 Series GPUs feature incredible upgrades in dedicated hardware, including third-generation RT Cores, fourth-generation Tensor Cores and an eighth-generation NVIDIA dual AV1 encoder. These advancements deliver turbocharged creative workflows and accelerate creative apps in 3D modeling, video editing and more with AI-powered features.

OBS Studio software version 28.1 added AV1 encoding support with the new NVIDIA Encoder, known as NVENC, delivering 40% better livestreaming efficiency. These livestreams will appear as if bandwidth was increased by 40% — a big boost in image quality. Plus, AV1 added high dynamic range support.

Google Chrome released an update to enable AV1 encoding on all its browser apps, also offering a 40% gain in livestreaming efficiency.
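As a taste of hardware AV1 encoding outside those apps, the sketch below drives FFmpeg’s NVENC AV1 encoder from Python. It assumes an FFmpeg build compiled with av1_nvenc support; file names and bitrate are illustrative.

```python
# Hedged sketch: hardware AV1 encode via FFmpeg's av1_nvenc (RTX 40 Series).
import subprocess

subprocess.run([
    "ffmpeg", "-i", "capture.mp4",
    "-c:v", "av1_nvenc",   # NVENC AV1 encoder on Ada-generation GPUs
    "-b:v", "6M",          # illustrative target bitrate
    "capture_av1.mp4",
], check=True)
```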


VTube Studio recently integrated the NVIDIA Broadcast app, adding AI-powered face tracking. Using the app requires only a simple webcam, which eliminates the need for expensive, specialized hardware. This gives many more artists the tools to become a VTuber and existing ones better avatars that match their expressions.
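To illustrate the webcam-only idea (though not with the NVIDIA Broadcast SDK itself), here is a stand-in sketch using the open-source MediaPipe library to pull the facial landmarks an avatar rig could be driven from.

```python
# Stand-in for the idea of webcam-only face tracking, using MediaPipe
# (not the NVIDIA Broadcast SDK). Requires opencv-python and mediapipe.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)              # any ordinary webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_face_landmarks:
        # 468 normalized landmarks a VTuber rig could consume
        nose = result.multi_face_landmarks[0].landmark[1]
        print(f"nose tip at ({nose.x:.2f}, {nose.y:.2f})")
cap.release()
```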

Topaz Labs’ Video AI v3.0 release introduced an AI stabilization model that reduces shaky camera motion by estimating camera movement and transforming frames for smoother video footage. The update also introduced an AI slow-motion model, called Apollo, which builds on past motion models by handling nonlinear motion and motion blur.

Furthermore, v3.0 added functionality that enables multiple AI models to tackle a single project simultaneously. For example, an AI model can upscale footage while enabling stabilization. These features and models run up to 1.25x faster on NVIDIA GPUs with the adoption of the NVIDIA TensorRT software development kit. The app also now supports the popular dual NVIDIA AV1 encoder, enabling users to run previews from multiple video input files and export several projects simultaneously.

NVIDIA also collaborated with photo-editing software company ON1 to bring a massive performance boost to the ON1 Resize app. Advanced effects can now be applied more than 2x faster, and additional enhancements are in the works.

Artists Give ’Em Pumpkin to Talk About

ATOM has been creating content for more than a decade. His work is influenced by his love of older TV shows, moody artwork and the darker emotions found in human nature.

“Human emotions have always inspired me in art, especially negative ones, because that’s what makes us stronger in life,” he said. “That’s why I like dark things.”

His short film Mr. Pumpkin playfully experiments with motion capture, bringing the title character and his technical tribulations to life. ATOM wanted to ensure the atmosphere was right for this film. He created the tone of a mysterious forest at night, full of volumetric light and mist. Mr. Pumpkin himself can be instantly recognized as the hero of Halloween.


Photogrammetry — a method of generating 3D models from a series of photographs — continues to be adopted as a bona fide way to create quality 3D assets quickly. It’s where ATOM’s journey began, with a real-life pumpkin.

ATOM captured video of the pumpkin within a homemade motion-square setup that rotated his prop for a complete scan. The artist then uploaded the footage to Adobe After Effects and exported the frames into an image sequence within Adobe Substance 3D Sampler before uploading them to Maxon’s Cinema 4D.


“It’s a real revolution to be able to do this kind of motion capture at home, when previously it would have required hiring a full motion-capture studio,” noted ATOM.


With a full-fidelity model, ATOM refined the pumpkin — sculpting until the shape was perfect. He then adjusted textures and colors to reach his ideal look. Even lighting the scene was quick and easy, he said, thanks to the GPU-accelerated viewport, which his GeForce RTX 4090 GPU kept smoothly interactive even with complex 3D models.


ATOM applied volumetric effects such as clouds, fog and fire with virtually no slowdown, underlining the importance of GPUs in 3D content creation.


After animating and locking out the remaining scene elements, ATOM exported files to Topaz Labs Video AI. RTX-accelerated AI enlargement of footage retained high-fidelity details and high temporal stability while up-resing to 4K resolution.

ATOM adores sharing techniques with the creative community and helping others learn. “I’m trying to transmit as much as I can about the world of 3D, cinema and everything that is visually beautiful,” he said.

For his workflow, NVIDIA Studio and RTX GPUs remain critical or, as he says, “a central element in digital creation … its place is paramount in all critical creative apps the community uses today.”

3D and special effects artist ATOM.

Check out ATOM’s tutorials and content on his YouTube channel.

The ‘Haunted Sanctuary’ Awaits

As a creative director and visual effects producer, NVIDIA artist Sabour Amirazodi brought his 16+ years of multi-platform experience in location-based entertainment and media production to his own home, creating an incredible Halloween installation. Make sure to have the volume on when watching this video showcasing his Haunted Sanctuary:

The project required projection mapping, so the artist used GPU-accelerated MadMapper software and its structured light-scan feature to map custom visuals onto the wide surface of his house.

Amirazodi accomplished this by connecting a DSLR camera to his mobile workstation powered by an NVIDIA RTX A5000 GPU. The projector cast a series of structured-light lines, the camera photographed them, and the software translated the captures into an image, from the projector’s point of view, on which to base a 3D model. Basic camera-matching tools in Cinema 4D helped recreate the scene.


Amirazodi used the lidar camera on his mobile device to scan his house while walking around it. He then created a complete 3D model for more refined mapping and exported it as an FBX file.

Amirazodi worked within Cinema 4D and OTOY OctaneRender to texture, rig, animate, light and render scenes. The GPU-accelerated viewport ensured smooth interactivity with the complex 3D models.


Amirazodi then moved to the composite stage, importing his cache of models into Adobe After Effects. With the software’s over 45 GPU-accelerated effects, his RTX A5000 GPU assisted in touching up scenes faster, especially when correcting color and reducing unwanted noise.

To make this installation possible, Amirazodi had to render a staggering 225GB worth of video files, consisting of approximately 18,000 frames in 4K resolution, using Cinema 4D with OctaneRender.

OTOY’s OctaneRender is RTX accelerated, and ray tracing delivers lightning-quick exports. “There’s no way I would have been able to render all of those frames without my RTX A5000 GPU,” the artist said.

When asked why he went through all this effort, Amirazodi gave a touching answer: “My kids,” he said. “With the pandemic, we couldn’t fulfill our tradition of attending the Disneyland haunted house, so I had to bring the haunted house back to my home.”

NVIDIA artist Sabour Amirazodi.

Amirazodi’s advice to prospective artists is simple — pick a theme and stick with it. “Gather inspiration from your favorite creative community, like TurboSquid, ArtStation or Sketchfab, then just start creating and getting things going,” he said. “Let instincts take over to more quickly discover your flow state.”

Amirazodi specializes in video editing, 3D modeling and interactive experiences. Check out the creative savant’s work on IMDb.

2D to 3D, Easy Peasy

NVIDIA Studio extends a warm thank you to all the #From2Dto3D challenge participants, including:

@AnaCarolina_Art — The alien model that helped land your first full-time industry job is simply stunning.

@yetismack3d — The union of a minion and a xenomorph may be unholy, but it’s beautiful nonetheless.

@eyedesyn — From one of our editors, “Oh my gosh, that’s adorable!” Evoking emotion through art is an artist’s dream, well done.

Follow NVIDIA Studio on Instagram, Twitter and Facebook for regular artistic inspiration, and be the first to learn more about the upcoming winter challenge.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.


Think Fast: Lotus Eletre Tops Charts in Driving and AI Compute Speeds, Powered by NVIDIA DRIVE Orin

One of the biggest names in racing is going even bigger.

Performance automaker Lotus launched its first SUV, the Eletre, earlier this week. The fully electric vehicle sacrifices little in terms of speed and outperforms when it comes to technology.

It features an immersive digital cockpit, a battery range of up to 370 miles and autonomous-driving capabilities powered by the NVIDIA DRIVE Orin system-on-a-chip.

The Eletre’s autonomous-driving system is designed for more than easier commutes. Lotus plans to train the vehicle to complete the world-famous Nürburgring racetrack in Germany entirely on its own. Working with ROBO Galaxy, Lotus Group’s autonomous-driving subsidiary, Lotus can quickly iterate on deep neural network development to get the most out of the high-performance hardware system.

With a top speed of 165 miles per hour and acceleration from 0 to 62 mph in 4.5 seconds for the base trim — as quick as 2.95 seconds for performance versions — this isn’t an average SUV.

Intelligent Performance

The Lotus Eletre thinks as fast as it drives.

It comes equipped with lidar to comprehensively perceive the surrounding environment. That driving data is processed by two DRIVE Orin systems-on-a-chip, for a total of 508 trillion operations per second of performance.

With this level of AI compute, the Eletre can run the deep neural networks and applications necessary for autonomous driving in real time, with additional headroom for new capabilities that can be added over the air.

Drivers of the performance Eletre S can sit back and enjoy the 23-speaker KEF premium audio system while the SUV’s intelligent-driving capabilities take over.

Eventually, they can fire all the proverbial cylinders of the 905-horsepower dual motor — and dual DRIVE Orin — and take the autonomous-driving system to the track.

Ahead of the Curve

Lotus is bringing its racing heritage into the software-defined era with the Eletre. This future is arriving in just months.

Customer deliveries will begin in China and Europe in the first half of next year, with expansion to North America and other global markets in 2024.


Neural NETA: Automaker Selects NVIDIA DRIVE Orin for AI-Powered Vehicles

One of China’s popular battery-electric startups now has the brains to boot.

NETA Auto, a Zhejiang-based electric automaker, this week announced it will build its future electric vehicles on the NVIDIA DRIVE Orin platform. These EVs will be software defined, with automated driving and intelligent features that will be continuously upgraded via over-the-air updates.

This extends next-generation vehicle technology to thousands of new customers. NETA has been leading battery-EV sales among new market entrants for the past three months, delivering a total of 200,000 EVs since it began production in late 2018.

NETA aims to make travel more comfortable with innovative technologies that break the norm. Hallmarks of NETA vehicles include 5G connectivity and digital assistants.

Last year, NETA Auto released the Shanhai Platform, an independently developed smart automotive architecture. The first model based on this platform, the NETA S, launched in July.

With the addition of DRIVE Orin, these vehicles will have centralized, high-performance compute to enable even greater capabilities.

Street Smarts

Traditionally, implementing the latest technology in new vehicles requires lengthy product cycles and the updating of distributed computers throughout the car.

With centralized, software-defined compute, this process has been reimagined. The vehicle’s intelligent functions run on a single, high-performance AI compute platform. When new software is developed and validated, it can be installed via over-the-air updates, even after the car leaves the dealership.

The DRIVE Orin system-on-a-chip delivers 254 trillion operations per second — ample compute headroom for a software-defined architecture. It’s designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles, while achieving systematic safety standards such as ISO 26262 ASIL-D.

Equipped with the performance of DRIVE Orin, NETA vehicles will have limitless possibilities.

Aiming Higher

In addition to designing its vehicles with DRIVE Orin, NETA is working with NVIDIA technologies to develop advanced autonomous-driving capabilities.

The companies are collaborating on the design and development of a centralized cross-domain fusion computing platform for level 4 autonomy.

Zhang Yong, NETA Auto co-founder and CEO, with Liu Tong, NVIDIA automotive general manager in China, at the October signing ceremony.

“NETA Auto is at a new stage of development and sees technological innovation as one of the biggest enablers moving this industry forward,” said Zhang Yong, co-founder and CEO of NETA. “The close cooperation with NVIDIA will give NETA Auto a strong boost in bringing intelligent, technology-rich vehicles to market worldwide.”
