Mown Away: Startup Rolls Out Autonomous Lawnmower With Cutting Edge Tech

Two years after their 3D vision startup, Replica Labs, was acquired, co-founders Jack Morrison and Isaac Roberts were restless and seeking another adventure. Then, in 2018, while Morrison was mowing his lawn, it struck him: autonomous lawn mowers.

The two, along with Davis Foster, co-founded Scythe Robotics. The company, based in Boulder, Colo., has a 40-person team working with robotics and computer vision to deliver what it believes to be the first commercial electric self-driving mower service.

Scythe’s machine, dubbed M.52, collects a dizzying array of data from eight cameras and more than a dozen other sensors, processed by NVIDIA Jetson AGX Xavier edge AI computing modules.

The company plans to rent its machines to customers, much as in a software-as-a-service model but priced by the acreage of grass cut, reducing upfront costs.

“I thought, if I didn’t enjoy mowing the lawn, what about the folks who are doing it every day? Wasn’t there something better they could do with their time?” said Morrison. “It turned out there’s a strong resounding ‘yes’ from the industry.”

The startup, a member of the NVIDIA Inception program, says it already has thousands of reservations for its on-demand robots. Meanwhile, it has a handful of pilots, including one with Clean Scapes, a large Austin-based commercial landscaping company.

Scythe’s electric machines are coming as regulatory and corporate governance concerns highlight the need for cleaner landscaping technologies.

What M.52 Can Do

Scythe’s M.52 machine is about as state of the art as it gets. Its cameras provide vision on all sides, and its more than a dozen sensors include ultrasonics, accelerometers, gyroscopes, magnetometers, GPS and wheel encoders.

To begin a job, the M.52 needs only to be driven manually around the perimeter of an area once. Scythe’s robot mower relies on its cameras, GPS and wheel encoders to plot maps of its environment using simultaneous localization and mapping, or SLAM.
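Under the hood, that one perimeter drive seeds a pose estimate the robot keeps refining as it works. Below is a minimal Python sketch of the simplest ingredient of that recipe: dead reckoning from wheel encoders with an occasional GPS correction. The function names, wheel base and blend factor are illustrative assumptions, not Scythe’s implementation; real SLAM also fuses the camera imagery.

```python
import math

# Hypothetical dead-reckoning tracker for a differential-drive mower.
# Wheel encoders report how far each wheel traveled since the last update;
# an occasional GPS fix nudges the estimate back toward ground truth.

WHEEL_BASE_M = 0.9  # illustrative track width, not a Scythe spec

def update_pose(x, y, heading, d_left, d_right):
    """Advance (x, y, heading) given left/right wheel travel in meters."""
    d_center = (d_left + d_right) / 2.0
    heading += (d_right - d_left) / WHEEL_BASE_M
    x += d_center * math.cos(heading)
    y += d_center * math.sin(heading)
    return x, y, heading

def blend_gps(x, y, gps_x, gps_y, trust=0.2):
    """Pull the dead-reckoned position toward a GPS fix (complementary filter)."""
    return x + trust * (gps_x - x), y + trust * (gps_y - y)

# Example: drive straight one meter, then arc gently to the left.
pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, d_left=1.0, d_right=1.0)
pose = update_pose(*pose, d_left=0.45, d_right=0.55)
print(pose)
```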

After that, the operator can direct the M.52 to an area and specify a direction and stripe pattern, and it completes the job unsupervised. If it encounters an obstacle that shouldn’t be there — like a bicycle on the grass — it can send alerts for an assist.

“Jetson AGX Xavier is a big enabler of all of this, as it can be used in powerful machines, brings a lot of compute, really low power, and hooks into the whole ecosystem of autonomous machine sensors,” said Morrison.

Scythe’s robo-mowers pack enough battery for eight hours of use on a charge, which can come from a standard Level 2 EV charger. And the company says that fewer moving parts than combustion-engine mowers mean less maintenance and a longer service life.

Also, the machine can go 12 miles per hour and includes a platform for operators to stand on for a ride. It’s designed for job sites where mowers may need to travel some distance to get to the area of work.

Riding the RaaS Wave

The U.S. market for landscaping services is expected to reach more than $115 billion this year, up from about $70 billion in 2012, according to IBISWorld research.

Scythe is among an emerging class of startups offering robots as a service, or RaaS, in which customers pay according to usage.

It’s also among companies operating robotics services whose systems share similarities with autonomous vehicles, Morrison said.

“An advantage of Xavier is using these automotive-grade camera standards that allow the imagery to come across with really low latency,” he said. “We are able to turn things around from photon to motion quite quickly.”

Earth-Friendly Machines

Trends toward lower carbon footprints in businesses are a catalyst for Scythe, said Morrison. LEED certification for greener buildings counts the equipment used in maintenance — such as landscaping — driving interest in electric equipment.

Legislation from California, which aims to prohibit sales of gas-driven tools by 2024, factors in as well. A gas-powered commercial lawn mower driven for one hour emits as much of certain pollutants as a passenger vehicle driven 300 miles, according to a fact sheet released by the California Air Resources Board in 2017.

“There’s a lot of excitement around what we are doing because it’s becoming a necessity, and those in landscaping businesses know that they won’t be able to secure the equipment they are used to,” said Morrison.

Learn more about Scythe Robotics in this GTC presentation.

The post Mown Away: Startup Rolls Out Autonomous Lawnmower With Cutting Edge Tech appeared first on NVIDIA Blog.

Meet the Omnivore: 3D Artist Creates Towering Work With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Edward McEvenue

Edward McEvenue grew up making claymations in LEGO towns. Now, he’s creating photorealistic animations in virtual cities, drawing on more than a decade of experience in the motion graphics industry.

The Toronto-based 3D artist learned about NVIDIA Omniverse, a 3D design collaboration and world simulation platform, a few months ago on social media. Within weeks he was using it to build virtual cities.

McEvenue used the Omniverse Create app to make a 3D representation of New York City, composed of 32 million polygons.

And in another scene, he animated a high-rise building with fracture-and-reveal visual effects.

Light reflects off the building’s glass windows photorealistically, and vehicles whiz by in a physically accurate way — thanks to ray tracing and path tracing enabled by RTX technology, as well as Omniverse’s physics-based simulation capabilities.

McEvenue’s artistic process incorporates many third-party applications, including Autodesk 3ds Max for modeling and animation; Epic Games Unreal Engine for scene layout; tyFlow for visual effects and particle simulations; Redshift and V-Ray for rendering; and Adobe After Effects for post-production compositing.

All of these tools can be connected seamlessly in Omniverse, which is built on Pixar’s Universal Scene Description, an easily extensible, open-source 3D scene description and file format.
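Because everything in Omniverse flows through USD, a few lines of Pixar’s open-source USD Python API convey the shape of what the apps exchange. Here’s a minimal sketch, assuming the pxr package from a USD build is installed; this is plain USD, not an Omniverse Connector itself:

```python
from pxr import Usd, UsdGeom

# Create a USD stage -- the shared scene container Omniverse apps read and write.
stage = Usd.Stage.CreateNew("city_block.usda")

# Group geometry under a root transform, with a cube as placeholder geometry.
UsdGeom.Xform.Define(stage, "/World")
tower = UsdGeom.Cube.Define(stage, "/World/Tower")
tower.GetSizeAttr().Set(50.0)  # a 50-unit placeholder "high-rise"

# Author a translation so downstream apps see the prim in place.
UsdGeom.XformCommonAPI(tower.GetPrim()).SetTranslate((0.0, 25.0, 0.0))

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())  # human-readable .usda text
```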

Real-Time Rendering and Multi-App Workflows

EDSTUDIOS, McEvenue’s freelance company, creates 3D animations and motion graphics for films, advertisements and other visual projects.

He says Omniverse helps shave a week off his average delivery time for projects, since it eliminates the need to send animations to a separate render farm, and allows him to bring in assets from his favorite design applications.

“It’s a huge relief to finally feel like rendering isn’t a bottleneck for creativity anymore,” McEvenue said. “With Omniverse’s real-time rendering capabilities, you get instant feedback, which is incredibly freeing for the artistic process as it allows creators to focus time and energy into making their designs rather than facing long waits for the beautiful images to display.”

A virtual representation of the ancient sculpture ‘Laocoön and His Sons’ — made by McEvenue with Autodesk 3ds Max, Unreal Engine 5 and Omniverse Create.

McEvenue says he can output more visuals, faster, on one computer running Omniverse than he previously could with nine. He uses an NVIDIA RTX 3080 Ti GPU and NVIDIA Studio Drivers to accelerate his workflow.

In addition to freelance projects, McEvenue has worked on commercial campaigns across Canada, the U.S. and Uganda as a co-founder of the film production company DAY JOB.

“Omniverse takes away a lot of the friction in dealing with exporting formats, recreating shaders or linking materials,” McEvenue said. “And it’s very intuitively designed, so it’s incredibly easy to get up and running.”

Watch a Twitch stream replay that dives deeper into McEvenue’s workflow, as well as highlights the launch of the Unreal Engine 5 Omniverse Connector.

Join in on the Creation

Creators across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Join the #MadeInMachinima contest, running through June 27, for a chance to win the latest NVIDIA Studio laptop.

Learn more about Omniverse by watching GTC sessions on demand — featuring visionaries from the Omniverse team, Adobe, Autodesk, Bentley Systems, Epic Games, Pixar, Unity, Walt Disney Studios and more.

Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.

The post Meet the Omnivore: 3D Artist Creates Towering Work With NVIDIA Omniverse appeared first on NVIDIA Blog.

How DNEG Helped Win Another Visual-Effects Oscar by Bringing ‘Dune’ to Life With NVIDIA RTX

Featuring stunning visuals from futuristic interstellar worlds, including colossal sand creatures, Dune captivated audiences around the world.

The sci-fi film picked up six Oscars last month at the 94th Academy Awards, including for Best Sound and Visual Effects. Adapted from Frank Herbert’s 1965 novel of the same name, Dune tells the story of Paul Atreides, a heroic character whose family travels to the dangerous planet of Arrakis.

To bring the story to life, the now seven-time Academy Award-winning studio DNEG used a blend of practical and visual effects, creating spectacular graphics that capture the dystopian worlds of Dune.

The film’s production visual effects supervisor, Paul Lambert, said that his focus was on seamlessly augmenting or enhancing what was already accomplished with the beautiful production design and cinematography — grounding the visual effects in reality. DNEG contributed to 28 sequences and over 1,000 VFX shots in the film, and the artists worked from multiple locations using NVIDIA RTX Virtual Workstations.

A Mix of Sand and Simulations

Sand, inevitably, was going to play a major part in Dune, and 18 tons of it were used to make the film. But the VFX team also digitally replicated every aspect of it to perfectly blend simulated sand into the shots.

“One of the things we came up against early on was getting that essence of scale,” said Paul Salvini, global CTO of DNEG. “In simulations, each grain of sand is literally the size of a pixel, which means we needed huge volumes of sand, and that turned into petabytes of data.”

All imagery courtesy of DNEG.

Beyond replicating the look and feel of sand, the team needed to realistically capture its movement. This became even more challenging when it came to finding a way to depict massive sandworms moving through the desert.

The artists spent months building, modeling and sculpting the creature into shape. They took inspiration from baleen whales — months of research revealed that when a colossal object moves through sand, the environment around it behaves like water, similar to how a whale moves through the ocean.

DNEG then simulated each sand particle to see how it would cascade off a sandworm, or how the dunes would ripple as the creature moved around. For the latter, the practical effects team created a sand-displacement effect by placing a vibrating metal plate under real sand, and the VFX team expanded it to simulate the effect on a much larger scale.

“It’s tricky to do, because it’s super complex and computationally expensive to figure out how one grain of sand is connected to another grain — and have all of this act on a massive scale,” said Salvini. “It was an iterative process, and it takes a long time to actually simulate all of these particles.”
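To see why, consider even a toy version of the problem: each grain’s motion depends on its neighbors, so a naive solver touches every pairwise neighborhood at every step. The NumPy sketch below is deliberately simplified and illustrative only; production sand solvers use far more sophisticated physics plus spatial acceleration structures to tame the quadratic cost.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000                       # toy grain count; film shots need vastly more
pos = rng.uniform(0.0, 1.0, (n, 3))
vel = np.zeros((n, 3))
dt, radius = 0.01, 0.05

def step(pos, vel):
    vel = vel.copy()
    vel[:, 2] -= 9.8 * dt       # gravity pulls every grain down

    # Naive O(n^2) coupling: grains within `radius` exchange momentum.
    # This quadratic blowup is what makes grain-to-grain sand so expensive.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    near = (dist < radius) & (dist > 0.0)
    counts = near.sum(axis=1, keepdims=True)
    avg_vel = np.where(counts > 0, near @ vel / np.maximum(counts, 1), vel)

    vel = 0.5 * vel + 0.5 * avg_vel    # crude momentum diffusion
    return pos + vel * dt, vel

pos, vel = step(pos, vel)
print(pos.shape, vel.mean(axis=0))
```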

DNEG used a combination of Dell Precision workstations and Dell PowerEdge R740 servers with NVIDIA RTX and server GPUs to iterate quickly and make changes, ensuring the simulations with the sandworm looked realistic.

To add more realism to the creature, the artists looked to the bristly teeth of baleen whales. The VFX team modeled different versions of the teeth and used a scattering system in the Houdini app, which allowed them to populate the worm’s mouth at render time.

Using Isotropix Clarisse and NVIDIA RTX, DNEG artists rendered graphics in hours instead of days. This allowed them to receive feedback on the visuals nearly instantly. It also helped increase their number of iterations, enabling final shots and high-quality images at a much quicker pace.

Enhancing Production Workflows With Virtualization

DNEG was one of the first studios to implement NVIDIA virtual GPUs at scale with NVIDIA RTX Virtual Workstation software. NVIDIA RTX-powered virtual workstations deliver incredible flexibility, allowing DNEG to adjust the number of users on a particular server based on the current workload.

Virtual machines are also cost effective. As newer GPUs and expanded software packages enter the data center, DNEG can deploy these to its users to maintain optimal performance for each artist.

“To give our artists more compute power, we can easily increase NVIDIA vGPU profile sizes and reduce the number of users we put on each server,” said Daire Byrne, global head of systems at DNEG. “We don’t need to replace any equipment to keep working with maximum performance.”

And because creators can securely log into RTX-powered virtual workstations from anywhere in the world, DNEG artists can work remotely, while still maintaining high productivity.

“Every show we get is different from the last, and NVIDIA RTX Virtual Workstations let us scale the memory and performance characteristics up or down to meet the needs of our artists,” said Byrne.

Living in the Future of Virtual Worlds

DNEG continues its pioneering work with NVIDIA Omniverse Enterprise as the studio looks to the future of connected, collaborative workflows.

“The virtual world is where filmmaking is going,” said Salvini. “We now have advanced tools and technologies that are capable of delivering photorealistic virtual environments and digital characters, allowing us to create incredible, beautiful stylized worlds.”

With the shift toward real-time technology and more seamless, collaborative content creation pipelines, DNEG sees greater opportunities to interact with the filmmakers and art teams across the globe. This will allow for many more iterations to accomplish artistic goals in a fraction of the time.

DNEG uses Omniverse Enterprise with Dell Precision workstations with NVIDIA RTX A6000 GPUs, and Dell PowerEdge R7525 servers with NVIDIA A40 GPUs.

Learn more about how DNEG is transforming global film production workflows in the studio’s GTC session, now available on demand.

The post How DNEG Helped Win Another Visual-Effects Oscar by Bringing ‘Dune’ to Life With NVIDIA RTX appeared first on NVIDIA Blog.

Your Odyssey Awaits: Stream ‘Lost Ark’ to Nearly Any Device This GFN Thursday

It’s a jam-packed GFN Thursday.

This week brings the popular, free-to-play, action role-playing game Lost Ark to gamers across nearly all their devices, streaming on GeForce NOW. And that’s not all.

GFN Thursday also delivers an upgraded experience in the 2.0.40 update. M1-based MacBooks, iMacs and Mac Minis are now supported natively.

Plus, membership gift cards can now be redeemed for RTX 3080 memberships, there’s a membership reward for the “Heroic Edition” of Guild Wars 2, and 14 new games are joining the GeForce NOW library this week.

Embark on an Odyssey

Visit Arkesia, the vast and vibrant world of Lost Ark, now streaming to PC, Mac, Chromebook and more with GeForce NOW.

Explore new lands and encounter vibrant cultures, strange creatures and unexpected marvels waiting to be discovered. Seek out lost treasures in dungeons, test your mettle on epic quests, show your strength in intense raids against enemies and go head-to-head in expert PvP duels. Do it all solo, grouped with friends or matched with other players in this massive open world.

Forge your own destiny across almost all devices. Play to your fighting style with iconic character classes — each with their own distinct abilities — at up to 1440p or 1600p and 120 frames per second on PC and up to 4K on SHIELD TV with an RTX 3080 membership.

Choose weapons and gear to assist you on your journey while gaming on the go with a mobile phone. Dive deep into non-combat skills, crafting, guilds, social systems and other rich features that bring the world alive, streaming even on a MacBook at up to 1600p or an iMac at up to 1440p.

The adventure is yours. Do it all, streaming on GeForce NOW.

Level Up With the GeForce NOW 2.0.40 Update

The newest update to the cloud enables the GeForce NOW macOS app to natively support the Apple M1 chip. This update provides lower power consumption, faster app startup times and an overall elevated GeForce NOW experience on M1-based MacBooks, iMacs and Mac Minis.

Streaming Statistics Overlay on GeForce NOW
Enjoy PC gaming across nearly all devices.

The upgrade brings some additional benefits to members.

Members can more easily discover new games to play in the app with the added Genre row at the bottom of the Games menu. Useful sorting options include the ability to see All Games available in specific regions and by device type, and multiple filters can help narrow down the list.

Finally, members can enjoy an improved Streaming Statistics Overlay that now includes server-side rendering frame rates. The overlay quickly toggles through Standard/Compact/Off using the hotkey Ctrl+N. Members can complete their whole log-in process on play.geforcenow.com within the same browser tab.

Give the Gift of Gaming

What could be as good as playing with the power of a GeForce RTX 3080? Try giving that power to a loved one, streaming from the cloud.

Gift Cards on GeForce NOW
It’s the gift that keeps on giving for gamers.

GeForce NOW RTX 3080 memberships are now available as digital gift cards with two-, three- and six-month options. GeForce NOW membership gift cards can be used to redeem an RTX 3080 membership or a Priority membership, depending on the recipient’s preference, so you can’t go wrong.

Spoil a special gamer in your life by giving them access to the cloud across their devices or bring a buddy onto the service to party up and play.

Gift cards can be added to an existing GeForce NOW account or redeemed on a new one. Existing Founders, Priority and RTX 3080 members will have the number of months added to their accounts. For more information, visit the GeForce NOW website.

Rewards of Heroic Quality

This week, members can receive the “Heroic Edition” of ArenaNet’s critically acclaimed free-to-play massively multiplayer online role-playing game Guild Wars 2. The Guild Wars 2 “Heroic Edition” comes with a full treasure trove of goodies, including the Suit of Legacy Armor, an 18-slot inventory expansion and four Heroic Boosters.

Guild Wars 2 Heroic Edition on GeForce NOW
You bring the heroics, we’ll bring the rewards in Guild Wars 2.

Getting membership rewards for streaming games on the cloud is easy. Log in to your NVIDIA account and select “GEFORCE NOW” from the header, scroll down to “REWARDS” and click the “UPDATE REWARDS SETTINGS” button. Check the box in the dialogue window that pops up to start receiving special offers and in-game goodies.

Sign up for the GeForce NOW newsletter, including notifications for when rewards are available, by logging into your NVIDIA account and selecting “PREFERENCES” from the header. Check the “Gaming & Entertainment” box, and “GeForce NOW” under topic preferences.

No Time Like Playtime

Dune Spice Wars on GeForce NOW
Lead your faction and battle for control over the harsh desert planet of Arrakis.

To cap it all off, GFN Thursday has new games to play, as always. Members can look for the following 14 games ready to stream this week:

  • Dune: Spice Wars (New release on Steam)
  • Holomento (New release on Steam)
  • Prehistoric Kingdom (New release on Steam and Epic Games Store) 
  • Romans: Age of Caesar (New release on Steam)
  • Sea of Craft (New release on Steam)
  • Trigon: Space Story (New release on Steam)
  • Vampire: The Masquerade – Bloodhunt (New release on Steam)
  • Conan Exiles (Epic Games Store)
  • Crawl (Steam)
  • Flashing Lights – Police, Firefighting, Emergency Services Simulator (Steam)
  • Galactic Civilizations II: Ultimate Edition (Steam)
  • Jupiter Hell (Steam)
  • Lost Ark (Steam)
  • SOL CRESTA (Steam)

Finally, we’ve got a question for you this week and are only accepting wrong answers. Let us know what you think on Twitter or in the comments below.

The post Your Odyssey Awaits: Stream ‘Lost Ark’ to Nearly Any Device This GFN Thursday appeared first on NVIDIA Blog.

Answers Blowin’ in the Wind: HPC Code Gives Renewable Energy a Lift

A hundred and forty turbines in the North Sea — and some GPUs in the cloud — pumped wind under the wings of David Standingford and Jamil Appa’s dream.

As colleagues at a British aerospace firm, they shared a vision of starting a company to apply their expertise in high performance computing across many industries.

They formed Zenotech and coded what they learned about computational fluid dynamics into a program called zCFD. They also built a tool, EPIC, that simplified running it and other HPC jobs on the latest hardware in public clouds.

But getting visibility beyond their Bristol, U.K. home was a challenge, given the lack of large, open datasets to show what their tools could do.

Harvesting a Wind Farm’s Data

Their problem dovetailed with one in the wind energy sector.

As government subsidies for wind farms declined, investors demanded a deeper analysis of a project’s likely return on investment, something traditional tools didn’t deliver. A U.K. government project teamed Zenotech with consulting firms in renewable energy and SSE, a large British utility willing to share data on its North Sea wind farm, one of the largest in the world.

Zenotech simulated SSE’s Greater Gabbard wind farm, one of the largest of many in the North Sea.

Using zCFD, Zenotech simulated the likely energy output of the farm’s 140 turbines. The program accounted for dozens of wind speeds and directions. This included key but previously untracked phenomena in front of and behind a turbine, like the combined effects the turbines had on each other.

“These so-called ‘wake’ and ‘blockage’ effects around a wind farm can even impact atmospheric currents,” said Standingford, a director and co-founder of Zenotech.

The program can also track small but significant terrain effects on the wind, such as when trees in a nearby forest lose their leaves.

SSE validated that the final simulation came within 2 percent of the utility’s measured data, giving zCFD a stellar reference.

Accelerated in the Cloud

Icing the cake, Zenotech showed cloud GPUs delivered results fast and cost effectively.

For example, the program ran 43x faster on NVIDIA A100 Tensor Core GPUs than on CPUs, at a quarter of the CPUs’ costs, the company reported in a recent GTC session (viewable on demand). NVIDIA NCCL libraries that speed communications between GPU systems further boosted results by up to 15 percent.
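Taken at face value, those numbers compound: 43x the speed at one-quarter the cost works out to roughly 43 × 4, or about 170x, better price-performance for this workload.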

Zenotech GTC results
Zenotech benchmarked zCFD on NVIDIA GPUs (green on the left chart) versus CPUs (blue) as well as GPU clusters with and without NCCL libraries (right chart).

As a result, work that took more than five hours on CPUs ran in less than 50 minutes on GPUs. The ability to analyze nuanced wind effects in detail and finish a report in a day “got people’s attention,” Standingford said.

“The tools and computing power to perform high-fidelity simulations of wind projects are now affordable and accessible to the wider industry,” he concluded in a report on the project.

Lowering Carbon, Easing Climate Change

Wind energy is among the largest, most cost-effective contributors to lowering carbon emissions, notes Farah Hariri, technical lead for the team helping NVIDIA’s customers manage their transitions to net-zero emissions.

“By modeling both wake interactions and the blockage effects, zCFD helps wind farms extract the maximum energy for the minimum installation costs,” she said.

This kind of fast yet detailed analysis lowers risk for investors, making wind farms more economically attractive than traditional energy sources, said Richard Whiting, a partner at Everose, one of the consultants who worked on the project with Zenotech.

Looking forward, Whiting estimates more than 2,100 gigawatts of wind farms could be in operation worldwide by 2030, up 3x from 2020. It’s a growing opportunity on many levels.

“In future, we expect projects will use larger arrays of larger turbines so the modeling challenge will only get bigger,” he added.

Less Climate Change, More Business

Helping renewable projects get off the ground also put wind in Zenotech’s sails.

Since the SSE analysis, the company has helped design wind farms or turbines in at least 10 other projects across Europe and Asia. And half of Zenotech’s business is now outside the U.K.

As the company expands, it’s also revisiting its roots in aerospace, lifting the prospects of new kinds of businesses for drones and air taxis.

A Parallel Opportunity

For example, Cardiff Airport is sponsoring live trials where emerging companies use zCFD on cloud GPUs to predict wind shifts in urban environments so they can map safe, efficient routes.

“It’s a forward-thinking way to use today’s managed airspace to plan future services like air taxis and automated airport inspections,” said Standingford.

“We’re seeing a lot of innovation in small aircraft platforms, and we’re working with top platform makers and key sites in the U.K.”

It’s one more way the company is keeping its finger to the wind.

The post Answers Blowin’ in the Wind: HPC Code Gives Renewable Energy a Lift appeared first on NVIDIA Blog.

What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains

Entrepreneur Jason Mars calls conversation our “first technology.”

Before humans invented the wheel, crafted a spear or tamed fire, we mastered the superpower of talking to one another.

That makes conversation an incredibly important tool.

But if you’ve dealt with the automated chatbots deployed by the customer service arms of just about any big organization lately — whether banks or airlines — you also know how hard it can be to get it right.

Deep learning AI and new techniques such as zero-shot learning promise to change that.

On this episode of NVIDIA’s AI Podcast, host Noah Kravitz — whose intelligence is anything but artificial — spoke with Mars about how the latest AI techniques intersect with the very ancient art of conversation.

In addition to being an entrepreneur and CEO of several startups, including ZeroShot Bot, Mars is an associate professor of computer science at the University of Michigan and the author of “Breaking Bots: Inventing a New Voice in the AI Revolution” (ForbesBooks, 2021).

You Might Also Like

NVIDIA’s Liila Torabi Talks the New Era of Robotics Through Isaac Sim

Robots aren’t limited to the assembly line. Liila Torabi, senior product manager for Isaac Sim, a robotics and AI simulation platform powered by NVIDIA Omniverse, talks about where the field’s headed.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, a neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

The Driving Force: How Ford Uses AI to Create Diverse Driving Data

The neural networks powering autonomous vehicles require petabytes of driving data to learn how to operate. Nikita Jaipuria and Rohan Bhasin from Ford Motor Company explain how they use generative adversarial networks (GANs) to fill in the gaps of real-world data used in AV training.

Subscribe to the AI Podcast: Now available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast Better: Have a few minutes to spare? Fill out our listener survey.

The post What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains appeared first on NVIDIA Blog.

In the NVIDIA Studio: April Driver Launches Alongside New NVIDIA Studio Laptops and Featured 3D Artist

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

This week In the NVIDIA Studio, we’re launching the April NVIDIA Studio Driver with optimizations for the most popular 3D apps, including Unreal Engine 5, Cinema 4D and Chaos Vantage. The driver also supports the new NVIDIA Omniverse Connectors for Blender and Redshift.

Digital da Vincis looking to upgrade their canvases can learn more about the newly announced Lenovo ThinkPad P1 NVIDIA Studio laptop, or pick up the Asus ProArt Studiobook 16, MSI Creator Pro Z16 and Z17 — all available now.

These updates are part of the NVIDIA Studio advantage: dramatically accelerated 3D creative workflows that are essential to this week’s featured In the NVIDIA Studio creator, Andrew Averkin, lead 3D environment artist at NVIDIA.

April Showers Bring Studio Driver Powers

The April NVIDIA Studio Driver supports many of the latest creative app updates, starting with the highly anticipated launch of Unreal Engine 5.

Unreal Engine 5’s Lumen on full display.

NVIDIA RTX GPUs support Lumen — UE5’s new, fully dynamic global illumination system — for both software and hardware ray tracing. Together with Nanite, it empowers developers to create games and apps that contain massive amounts of geometric detail, all lit with fully dynamic global illumination. At GTC last month, NVIDIA introduced an updated Omniverse Connector that can export the source geometry of Nanite meshes from Unreal Engine 5.

The City Sample is a free downloadable sample project that reveals how the city scene from ‘The Matrix Awakens: An Unreal Engine 5 Experience’ was built.

NVIDIA has collaborated with Epic Games to integrate a host of key RTX technologies with Unreal Engine 5. These plugins are also available on the Unreal Engine Marketplace. RTX-accelerated ray tracing and NVIDIA DLSS in the viewport make iterating on and refining new ideas simpler and faster. For the finished product, those same technologies power beautifully ray-traced graphics while AI enables higher frame rates.

With NVIDIA Reflex — a standard feature in UE5 that does not require a separate plugin or download — PC games running on RTX GPUs experience unimaginably low latency.

NVIDIA real-time denoisers clean up ray-traced images on the fly, increasing the efficiency of art pipelines. RTX global illumination produces realistic bounce lighting in real time, giving artists instant feedback in the Unreal Editor viewport. With the processing power of RTX GPUs, a suite of high-quality RTX UE plugins and the next-generation UE5, there’s no limit to creation.

Maxon’s Cinema 4D version S26 includes all-new cloth and rope dynamics, accelerated by NVIDIA RTX GPUs, allowing artists to model digital subjects with increased realism, faster.

Rendering “The City” scene on a CPU alone took an hour longer than with an NVIDIA GeForce RTX 3090 GPU!

Performance testing conducted by NVIDIA in April 2022 with AMD Ryzen Threadripper 3990X 64-Core Processor, 2,895 MHz, 128GB RAM. NVIDIA Driver 512.58.

Additional features include OpenColorIO adoption, a new camera and enhanced modeling tools.

Chaos Vantage, aided by real-time ray tracing exclusive to NVIDIA RTX GPUs, adds normal map support, a new feature for converting textured models to clay renders to focus on lighting, and ambient occlusion for shadows.

NVIDIA Omniverse Connector updates are giving real-time workflows new features and options. Blender adds new blend shape I/O support to ensure detailed, multifaceted subjects automate and animate correctly. Plus, new USD scale maps unlock large-scale cinematic visualization.

Rendered in Redshift with the NVIDIA RTX A6000 GPU in mere minutes; CPU alone would take over an hour.

Blender and Redshift have added Hydra-based rendering. Artists can now use their renderer of choice within the viewport of all Omniverse apps.

New Studio Driver, Meet New Studio Laptops

April also brings a new Lenovo mobile workstation to the NVIDIA Studio lineup, plus the availability of three more.

The Lenovo ThinkPad P1 features a thin and light design, 16-inch panel and impressive performance powered by GeForce RTX and NVIDIA RTX professional graphics, equipped with up to the new NVIDIA RTX A5500 Laptop GPU.

Dolby Vision, HDR400 and a 165Hz display make the Lenovo ThinkPad P1 a great device for creators.

Studio laptops from other partners include the recently announced Asus ProArt Studiobook 16, MSI Creator Pro Z16 and Z17, all now available for purchase.

Walk Down Memory Lane With Andrew Averkin

Andrew Averkin is a Ukraine-based lead 3D environment artist at NVIDIA. He specializes in creating photorealistic 3D scenes, focused on realism that intentionally evokes warm feelings of nostalgia.

‘Boyhood’ by Andrew Averkin.

Averkin leads with empathy, a critical component of his flow state, saying he aims to create “artwork that combines warm feelings that fuel my passion and artistic determination.”

He created the piece below, called When We Were Kids, using the NVIDIA Omniverse Create app and Autodesk 3ds Max, accelerated by an NVIDIA RTX A6000 GPU.

Multiple light sources provide depth and shadows, adding to the realism.

Here, Averkin harkens back to the pre-digital days of playing with toys and letting one’s imagination do the work.

Averkin first modeled When We Were Kids in Autodesk 3ds Max.

Closeups from the scene show exquisite attention to detail.

The RTX GPU-accelerated viewport and RTX-accelerated AI denoising in Autodesk 3ds Max enable fluid interactivity despite the massive file size.

Omniverse Create lets users assemble complex and physically accurate simulations and 3D scenes in real time.

Averkin then brought When We Were Kids into Omniverse Create to light, simulate and render his 3D scene in real time.

Omniverse allows 3D artists, like Averkin, to connect their favorite design tools to a single scene and simultaneously create and edit between the apps. The “Getting Started in NVIDIA Omniverse” series on the NVIDIA Studio YouTube channel is a great place to learn more.

“Most of the assets were taken from the Epic marketplace,” Averkin said. “My main goal was in playing with lighting scenarios, composition and moods.”

Averkin’s focus on nostalgia and photorealism help viewers feel the rawrs of yesteryear.

In Omniverse Create, Averkin used specialized lighting tools for his artwork. The original Autodesk 3ds Max file updated automatically, with no need for messy, time-consuming file conversions and uploads, and the final files rendered at lightspeed on his RTX GPU.

Previously, Averkin worked at Axis Animation, Blur Studio and elsewhere. View his portfolio and favorite projects on ArtStation.

Dive Deeper In the NVIDIA Studio

Tons of resources are available to creators who want to learn more about the apps used by this week’s featured artist, and how RTX and GeForce RTX GPUs help accelerate creative workflows.

Take a behind-the-scenes look at The Storyteller, built in Omniverse and showcasing a stunning, photorealistic retro-style writer’s room.

Check out this tutorial from 3D artist Sir Wade Neistadt, who shows how to bring multi-app workflows into Omniverse using USD files, setting up Nucleus for live-linking tools.

View curated playlists on the Studio YouTube channel, plus hundreds more on the Omniverse YouTube channel. Follow NVIDIA Studio on Facebook, Twitter and Instagram, and get updates directly in your inbox by joining the NVIDIA Studio newsletter.

The post In the NVIDIA Studio: April Driver Launches Alongside New NVIDIA Studio Laptops and Featured 3D Artist appeared first on NVIDIA Blog.

Let Me Shoyu How It’s Done: Creating the NVIDIA Omniverse Ramen Shop

When brainstorming a scene to best showcase the groundbreaking capabilities of the Omniverse platform, some NVIDIA artists turned to a cherished memory: enjoying ramen together in a mom-and-pop shop down a side street in Tokyo.

Simmering pots of noodles, steaming dumplings, buzzing kitchen appliances, warm ambient lighting and glistening black leather stools. These were all simulated in a true-to-reality virtual world by nearly two dozen NVIDIA artists and freelancers across the globe using NVIDIA Omniverse, a 3D design collaboration and world simulation platform.

The final scene — consisting of over 22 million triangles, 350 unique textured models and 3,000 4K-resolution texture maps — welcomes viewers into a virtual ramen shop featured in last month’s GTC keynote address by NVIDIA founder and CEO Jensen Huang.

The mouth-watering demo was created to highlight the NVIDIA RTX-powered real-time rendering and physics simulation capabilities of Omniverse, which scales performance and speed when running on multiple GPUs.

It’s a feast for the eyes, as all of the demo’s parts are physically accurate and photorealistic, from the kitchen appliances and order slips; to the shoyu ramen and chashu pork; to the stains on the pots and pans.

“Our team members were hungry just looking at the renders,” said Andrew Averkin, senior art manager and lead environment artist at NVIDIA, in a GTC session offering a behind-the-scenes look at the making of the Omniverse ramen shop.

The session — presented by Averkin and Gabriele Leone, senior art director at NVIDIA — is now available on demand.

Gathering the Ingredients for Reference

The team’s first step was to gather the artistic ingredients: visual references on which to base the 3D models and props for the scene.

An NVIDIA artist traveled to a real ramen restaurant in Tokyo and collected over 2,000 high-resolution reference images and videos, each capturing aspects from the kitchen’s distinct areas for cooking, cleaning, food preparation and storage.

Then, props artists modeled and textured 3D assets for all of the shop’s items, from the stoves and fridges to gas pipes and power plugs. Even the nutrition labels on bottled drinks and the buttons for the ticket machine from which visitors order meals were precisely replicated.

Drinks in a fridge at the virtual ramen shop, made using Omniverse Create, Adobe Substance 3D Painter, Autodesk 3ds Max, Blender, Maxon Cinema 4D, and RizomUV.

In just two months, NVIDIA artists across the world modeled 350 unique props for the scene, using a range of design software including Autodesk Maya, Autodesk 3ds Max, Blender, Maxon Cinema 4D and Pixologic Zbrush. Omniverse Connectors and Pixar’s Universal Scene Description format enabled the models to be seamlessly brought into the Omniverse Create app.

“The best way to think about Omniverse Create is to consider it a world-building tool,” Leone said. “It works with Omniverse Connectors, which allow artists to use whichever third-party apps they’re familiar with and connect their work seamlessly in Omniverse — taking creativity and experimentation to new levels.”

Adding Lighting and Texture Garnishes 

Artists then used Adobe Substance 3D Painter to texture the materials. To make the props look used on a daily basis, the team whipped up details like dents on wooden counters, stickers peeling off appliances and sauce stains on pots.

“Some of our artists went as far as cooking some of the recipes themselves and taking references of their own pots to get a good idea of how sauce or burn stains might accumulate,” Averkin said.

Omniverse’s simulation capabilities enable light to reflect off of glass and other materials with true-to-reality physical accuracy. Plus, real-time photorealistic lighting rendered in 4K resolution created an orange warmth inside the cozy virtual shop, contrasting the rainy atmosphere that can be seen through the windows.

Artists used Omniverse Flow, a fluid simulation Omniverse Extension for smoke and fire, to bring the restaurant’s burning stoves and steaming plates to life. SideFX Houdini software helped to animate the boiling water, which was eventually brought into the virtual kitchen using an Omniverse Connector.

Broth boils in the virtual kitchen using visual effects offered by Houdini software.

And Omniverse Create’s camera animation feature allowed the artists to capture the final path-traced scene in real time, exactly as observed through the viewport.

Photorealistic lighting illuminates the virtual ramen shop, enabled by NVIDIA RTX-based ray tracing and path tracing.

Learn more about Omniverse by watching additional GTC sessions on demand — featuring visionaries from the Omniverse team, Adobe, Autodesk, Epic Games, Pixar, Unity and Walt Disney Studios.

Join in on the Creation

Creators across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Join the #MadeInMachinima contest, running through June 27, for a chance to win the latest NVIDIA Studio laptop.

Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.

The post Let Me Shoyu How It’s Done: Creating the NVIDIA Omniverse Ramen Shop appeared first on NVIDIA Blog.

Stellar Weather: Researchers Describe the Skies of Exoplanets

A paper released today describes in the greatest detail to date the atmospheres of distant planets.

Seeking the origins of what’s in and beyond the Milky Way, researchers surveyed 25 exoplanets, bodies that orbit stars far beyond our solar system. Specifically, they studied hot Jupiters, the largest and thus easiest to detect exoplanets, many sweltering at temperatures over 3,000 degrees Fahrenheit.

Their analysis of these torrid atmospheres used high performance computing with NVIDIA GPUs to advance understanding of all planets, including our own.

Hot Jupiters Shine New Lights

Hot Jupiters “offer an incredible opportunity to study physics in environmental conditions nearly impossible to reproduce on Earth,” said Quentin Changeat, lead author of the paper and a research fellow at University College London (UCL).

By analyzing trends across a large group of exoplanets, they shine new light on big questions.

“This work can help make better models of how the Earth and other planets came to be,” said Ahmed F. Al-Refaie, a co-author of the paper and head of numerical methods at the UCL Centre for Space Exochemistry Data.

Parsing Hubble’s Big Data

They used the most data ever employed in a survey of exoplanets — 1,000 hours of archival observations, mainly from the Hubble Space Telescope.

The hardest and, for Changeat, the most fascinating part of the process was determining what small set of models to run in a consistent way against data from all 25 exoplanets to get the most reliable and revealing results.

“There was an amazing period of exploration — I was finding all kinds of sometimes weird solutions — but it was really fast to get the answers using NVIDIA GPUs,” he said.

Millions of Calculations

Their overall results required heady math. Each of about 20 models had to run 250,000 times for all 25 exoplanets.
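Back of the envelope, that’s on the order of 20 × 250,000, or about 5 million, model evaluations for the survey.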

They used the Wilkes3 supercomputer at the University of Cambridge, which packs 320 NVIDIA A100 Tensor Core GPUs on an NVIDIA Quantum InfiniBand network.

“I expected the A100s might be double the performance of V100s and P100s I used previously, but honestly it was like an order of magnitude difference,” said Al-Refaie.

Orders of Magnitude Gains

A single A100 GPU gave a 200x performance boost compared to a CPU.

Packing 32 processes on each GPU, the team got the equivalent of a 6,400x speedup compared to a CPU. With its four A100s, each node on Wilkes3 delivered the equivalent of up to 25,600 CPU cores, he said.

The speedups are high because their application is amazingly parallel. It simulates on GPUs how hundreds of thousands of light wavelengths would travel through an exoplanet’s atmosphere.
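That structure is easy to picture in code: each wavelength’s trip through the atmosphere is independent of every other, so the whole grid can be computed in one vectorized shot, the same shape of work GPUs devour. Here’s a toy NumPy sketch using a simple Beer-Lambert absorption law; it illustrates the parallel pattern only and is not the team’s radiative-transfer code:

```python
import numpy as np

# Toy transmission spectrum: intensity falls off as exp(-tau), per Beer-Lambert.
# Every wavelength is independent, so the full grid computes in one shot --
# the same structure that maps so naturally onto thousands of GPU threads.

n_wavelengths = 200_000
wavelengths_um = np.linspace(0.5, 5.0, n_wavelengths)

# Illustrative optical depth: a smooth continuum plus a fake absorption
# feature near 1.4 microns (roughly where a water band would sit).
tau = (0.1 + 0.05 * wavelengths_um
       + 2.0 * np.exp(-((wavelengths_um - 1.4) / 0.05) ** 2))

transmission = np.exp(-tau)   # fraction of starlight that survives the trip
print(transmission.min(), transmission.max())
```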

On A100s, their models complete in minutes work that would require weeks on CPUs.

The GPUs ran the complex physics models so fast that their bottleneck became a CPU-based system handling a much simpler task of determining statistically where to explore next.

“It was a little funny, and somewhat astounding, that simulating the atmosphere was not the hard part — that gave us an ability to really see what was in the data,” he said.

A Wealth of Software

Al-Refaie employed CUDA profilers to optimize jobs, PyCUDA to optimize the team’s code and cuBLAS to speed up some math routines.

“With all the NVIDIA software available, there’s a wealth of things you can exploit, so the team is starting to spit out papers quickly now because we have the right tools,” he said.
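As a flavor of that tooling, here’s a minimal PyCUDA example of the general pattern: compile a small CUDA kernel from Python and launch it across an array. It’s a generic illustration, not code from the survey:

```python
import numpy as np
import pycuda.autoinit            # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# A trivial elementwise kernel: y[i] = a * x[i].
mod = SourceModule("""
__global__ void scale(float *y, const float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i];
}
""")
scale = mod.get_function("scale")

n = 1 << 20
x = np.random.randn(n).astype(np.float32)
y = np.empty_like(x)

# drv.In/drv.Out handle the host-to-device and device-to-host copies.
scale(drv.Out(y), drv.In(x), np.float32(2.0), np.int32(n),
      block=(256, 1, 1), grid=((n + 255) // 256, 1))

assert np.allclose(y, 2.0 * x)
```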

They will need all the help they can get, as the work is poised to get much more challenging.

Getting a Better Telescope

The James Webb Space Telescope comes online in June. Unlike Hubble and all previous instruments, it’s specifically geared to observe exoplanets.

The team is already developing ways to work at higher resolutions to accommodate the expected data. For example, instead of using one-dimensional models, they will use two- or three-dimensional ones and account for more parameters like changes over time.

“If a planet has a storm, for example, we may not be able to see it with current data, but with the next generation data, we think we will,” said Changeat.

Exploring HPC+AI

The rising tide of data opens a door to apply deep learning, something the group’s AI experts are exploring.

It’s an exciting time, said Changeat, who’s joining the Space Telescope Science Institute in Baltimore as an ESA fellow to work directly with experts and engineers there.

“It’s really fun working with experts from many fields. We had space observers, data analysts, machine-learning and software experts on this team — that’s what made this paper possible,” Changeat said.

Learn more about the paper here.

Image at top courtesy of ESA/Hubble, N. Bartmann

The post Stellar Weather: Researchers Describe the Skies of Exoplanets appeared first on NVIDIA Blog.

By Land, Sea and Space: How 5 Startups Are Using AI to Help Save the Planet

Different parts of the globe are experiencing distinct climate challenges — severe drought, dangerous flooding, reduced biodiversity or dense air pollution.

The challenges are so great that no country can solve them on its own. But innovative startups worldwide are lighting the way, demonstrating how these daunting challenges can be better understood and addressed with AI.

Here’s how five — all among the 10,000+ members of NVIDIA Inception, a program designed to nurture cutting-edge startups — are looking out for the environment using NVIDIA-accelerated applications:

Blue Sky Analytics Builds Cloud Platform for Climate Action

India-based Blue Sky Analytics is building a geospatial intelligence platform that harnesses satellite data for environmental monitoring and climate risk assessment. The company provides developers with climate datasets to analyze air quality and estimate greenhouse gas emissions from fires  — with additional datasets in the works to forecast future biomass fires and monitor water capacity in lakes, rivers and glacial melts.

The company uses cloud-based NVIDIA GPUs to power its work. It’s a founding member of Climate TRACE, a global coalition led by Al Gore that aims to provide high-resolution global greenhouse gas emissions data in near real time. The startup leads Climate TRACE’s work examining how land use and land cover change due to fires.

Rhions Lab Protects Wildlife With Computer Vision

Kenya-based Rhions Lab uses AI to tackle challenges to biodiversity, including human-wildlife conflict, poaching and illegal wildlife trafficking. The company is adopting NVIDIA Jetson Nano modules for AI at the edge to support its conservation projects.

One of the company’s projects, Xoome, is an AI-powered camera trap that identifies wild animals, vehicles and civilians — sending alerts of poaching threats to on-duty wildlife rangers. Another initiative monitors beekeepers’ colonies with a range of sensors that capture acoustic data, vibrations, temperature and humidity within beehives. The platform can help beekeepers monitor bee colony health and fend off threats from thieves, whether honey badgers or humans.

TrueOcean Predicts Undersea Carbon Capture and Storage

German startup TrueOcean analyzes global-scale maritime data to inform innovation around natural ocean carbon sinks, renewable energy and shipping route optimization. The company is using AI to predict and quantify carbon absorption and storage in seagrass meadows and subsea geology, which could make it possible to greatly increase the carbon storage potential of Earth’s oceans.

TrueOcean uses AI solutions, including federated learning accelerated on NVIDIA DGX A100 systems, to help scientists predict, monitor and manage these sequestration efforts.
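The idea behind federated learning is that each data holder trains locally and only model updates travel to a central server, a good fit for sensitive or widely distributed datasets. Here’s a minimal sketch of federated averaging over NumPy weight vectors, offered as a generic illustration of the technique rather than TrueOcean’s system:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (100, 300):                       # two clients with unequal data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                        # federated rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)   # approaches true_w without ever pooling the raw data
```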

ASTERRA Saves Water With GPU-Accelerated Leak Detection

ASTERRA, based in Israel, has developed AI models that analyze satellite images to answer critical questions around water infrastructure. It’s equipping maintenance workers and engineers with the insights needed to find deficient water pipelines, assess underground moisture and locate leaks. The company uses NVIDIA GPUs through Amazon Web Services to develop and run its machine learning algorithms.

Since deploying its leak detection solution in 2016, ASTERRA has helped the water industry identify tens of thousands of leaks, conserving billions of gallons of drinkable water each year. Stopping leaks prevents ground pollution, reduces water wastage and even saves power. The company estimates its solution has reduced the water industry’s energy use by more than 420,000 megawatt hours since its launch.

Neu.ro Launches Renewable Energy-Powered AI Cloud

Another way to make a difference is by decreasing the carbon footprint of training AI models.

To help address this challenge, San Francisco-based Inception startup Neu.ro launched an NVIDIA DGX A100-based AI cloud that runs entirely on geothermal and hydropower, with free-air cooling. Located in Iceland, the data center is being used for AI applications in telecommunications, retail, finance and healthcare.

The company has also developed a Green AI suite to help businesses monitor the environmental impact of AI projects, allowing developer teams to optimize compute usage to balance performance with carbon footprint.
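The arithmetic behind that kind of monitoring is simple to sketch: estimated emissions are energy drawn times the grid’s carbon intensity. A back-of-the-envelope example with purely illustrative numbers (none of these values are Neu.ro’s):

```python
# Estimated training emissions = power draw x hours x PUE x grid intensity.
# Every number below is an illustrative placeholder.

server_power_kw = 6.5        # e.g., one heavily loaded multi-GPU server
train_hours = 72.0
pue = 1.1                    # data center overhead; free-air cooling keeps it low
grid_kgco2_per_kwh = 0.4     # typical fossil-heavy grid

energy_kwh = server_power_kw * train_hours * pue
emissions_kg = energy_kwh * grid_kgco2_per_kwh
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.0f} kg CO2e")

# On a geothermal/hydro grid the intensity term falls toward zero,
# which is exactly the lever a renewable-powered AI cloud pulls.
```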

Learn more about how GPU technology drives applications with social impact, including environmental projects. AI, data science and HPC startups can apply to join NVIDIA Inception.

The post By Land, Sea and Space: How 5 Startups Are Using AI to Help Save the Planet appeared first on NVIDIA Blog.
