This spook-tacular Halloween edition of GFN Thursday features a special treat: 40% off a six-month GeForce NOW Priority Membership — get it for just $29.99 for a limited time.
Creatures of the night can now stream vampire survival game V Rising from the cloud. The fang-tastic fun arrives just in time to get started with the game’s “Bloodfeast” Halloween event and free weekend.
It leads the pack of 12 games streaming this week, including new releases like Victoria 3.
Elevate Your Gaming to Priority
Through Sunday, Nov. 20, upgrade to a six-month Priority Membership for just $29.99, 40% off the standard price of $49.99.
Power up devices compatible with GeForce NOW with the boost of a full gaming rig in the cloud. Get into games faster with priority access to gaming servers. Enjoy extended play with six-hour gaming sessions. And take supported games to the next level with RTX ON for beautifully ray-traced graphics.
This limited-time offer is valid for new users and existing ones upgrading from a free or one-month Priority Membership. It isn’t valid for members on an active promotion or gift card.
Awaken as a vampire after centuries of slumber and survive in a vast world teeming with mythical horrors and danger by streaming V Rising on GeForce NOW.
Raise a castle, gather valuable resources and weapons, develop dark powers and convert humans into loyal servants in the quest to raise a vampire empire. Make allies or enemies online, or play solo in the game of blood, power and betrayal.
The game arrives just in time for members to join in on the Bloodfeast, where all creatures of the night are invited to play for free from Oct. 28-Nov. 1. V Rising players will be able to claim the free “Halloween Haunted Nights Castle DLC Pack” through Monday, Nov. 7.
Rule the night playing V Rising across your devices, even on a mobile phone. RTX 3080 members can even stream at 4K resolution on the PC and Mac apps.
Something Wicked Awesome This Way Comes
Gamers can get right into the frightful fun by checking out the horror and thriller titles included in the Halloween games row in the GeForce NOW app.
If games with a bit of a bite aren’t your thing, that’s okay. There’s something for everyone on the cloud.
Look out for the 12 games available to stream today, including three new releases like Victoria 3.
There’s a new sidewalk-savvy robot, and it’s delivering coffee, grub and a taste of fun.
The bot is garnering interest for Oakland, Calif., startup Cartken. The company, founded in 2019, has rapidly deployed robots for a handful of customer applications, including for Starbucks and Grubhub deliveries.
Cartken CEO Chris Bersch said that he and co-founders Jonas Witt, Jake Stelman and Anjali Jindal Naik got excited about the prospect for robots because of the technology’s readiness and affordability. The four Google alumni decided the timing was right to take the leap to start a company together.
“What we saw was a technological inflection point where we could make small self-driving vehicles work on the street,” said Bersch. “Because it doesn’t make sense to build a $20,000 robot that can deliver burritos.”
New and established companies are seeking business efficiencies as well as labor support amid ongoing shortages in the post-COVID era, driving market demand.
Revenue from robotic last-mile deliveries is expected to grow more than 9x to $670 million in 2030, up from $70 million in 2022, according to ABI Research.
Jetson Drives Robots as a Service
Cartken offers robots as a service (RaaS) to customers in a pay-for-usage model. As a white-label technology provider, Cartken enables companies to customize the robots for their particular brand appearance and specific application features.
Much of this is made possible with the powerful NVIDIA Jetson embedded computing modules, which can handle a multitude of sensors and cameras.
“Cartken chose the Jetson edge AI platform because it offers superior embedded computational performance, which is needed to run Cartken’s advanced AI algorithms. In addition, the low energy consumption allows Cartken’s robots to run a whole day on a single battery charge,” said Bersch.
The company relies on the NVIDIA Jetson AGX Orin to run six cameras that aid in mapping and navigation, as well as wheel odometry to measure the distance each robot travels.
Harnessing Jetson, Cartken’s robots run simultaneous localization and mapping, or SLAM, to automatically build maps of their surroundings for navigation. “They are basically level-4 autonomy — it’s based on visual processing, so we can map out a whole area,” Bersch said.
“The nice thing about our navigation is that it works both indoors and outdoors, so GPS is optional — we can localize based on purely visual features,” he said.
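Wheel odometry, one input to that navigation stack, boils down to dead reckoning: integrating how far each wheel has rolled into an updated position and heading. Below is a minimal Python sketch of the idea for a differential-drive robot; the function and numbers are illustrative, not Cartken’s code.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckon a differential-drive robot's pose from wheel travel.

    d_left / d_right: distance each wheel rolled since the last update
    (meters), typically derived from encoder ticks. wheel_base: distance
    between the two wheels (meters).
    """
    d_center = (d_left + d_right) / 2.0        # distance the robot's center moved
    d_theta = (d_right - d_left) / wheel_base  # change in heading (radians)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Example: two updates while the robot drives a gentle arc.
pose = (0.0, 0.0, 0.0)
for d_l, d_r in [(0.50, 0.52), (0.50, 0.52)]:
    pose = update_pose(*pose, d_l, d_r, wheel_base=0.4)
print(pose)  # x and y in meters, heading in radians
```

In practice, encoder drift accumulates, which is why systems like Cartken’s fuse odometry with visual SLAM rather than relying on dead reckoning alone.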
Cartken is a member of NVIDIA Inception, a program that helps startups with GPU technologies, software and business development support.
Serving Grubhub and Starbucks
Cartken’s robots are serving Grubhub deliveries at the University of Arizona and Ohio State. Grubhub users can order on the app as they normally would and get a tracking link to follow their order’s progress. They’re informed that their delivery will arrive by robot, and can use the app to unlock the robot’s lid to grab their grub and go.
Some might wonder if the delivery fee for such entertaining delivery technology is the same. “I believe it’s the same, but you don’t have to tip,” Bersch said with a grin.
Mitsubishi Electric is a distributor for Cartken in Japan. It relies on Cartken’s robots for deployments in AEON Malls in Tokoname and Toki for deliveries of Starbucks coffee and food.
The companies are also testing a “smart city” concept for outdoor deliveries of Starbucks goods within the neighboring parks, apartments and homes. In addition, Mitsubishi, Cartken and others are working on deliveries inside a multilevel office building.
Looking ahead, Cartken’s CEO says the next big challenge is scaling up robot manufacturing to keep pace with orders. The company has strong demand from partners, including Grubhub, Mitsubishi and U.K. delivery company DPD.
Cartken in September announced a partnership with Magna International, a global automotive supplier, to help scale up manufacturing of its robots. The agreement covers production of thousands of autonomous mobile robots (AMRs), as well as development of additional robot models for different use cases.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll be deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
This week In the NVIDIA Studio, we’re highlighting 3D and motion graphics artist SouthernShotty — and scenes from his soon-to-be released short film, Watermelon Girl.
“The theme of the film is that it’s more rewarding to give to others than to receive yourself,” said the artist. Watermelon Girl aims to create joy and evoke youth, he said, inspiring artists and viewers to raise each other’s spirits and be a positive force in the world.
“I really hope it encourages people to reach out and help each other through hard times,” SouthernShotty said.
SouthernShotty learned to model in 3D as a faster alternative to his favorite childhood art form, claymation.
“Growing up, I did a lot of arts and crafts with my mom and dad, so I loved creating little worlds,” he said.
SouthernShotty brainstormed characters using the mood board app Milanote, which allows users to drag and reposition cards to organize out-of-order thoughts. He also experimented with AI image generators to develop ideas and create reference material for his own artwork.
Once his vision was set, SouthernShotty began creating characters and scenes in Blender. Using an NVIDIA Studio laptop housing a GeForce RTX 3080 GPU, he deployed Blender’s Cycles renderer with RTX-accelerated OptiX ray tracing in the viewport, unlocking interactive photorealistic rendering for modeling.
To make volume rendering more GPU memory-efficient, SouthernShotty took advantage of the baked-in NVIDIA NanoVDB technology, allowing him to quickly adjust large, complex scenes with smooth interactivity. He then added animations to his characters and scenes before exporting renders at lightning speed using Blender Cycles.
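For artists who script Blender, the Cycles-plus-OptiX setup described above can be sketched in a few lines of Python using Blender’s bpy API. This is a generic illustration; exact preference properties vary across Blender versions.

```python
import bpy

# Render with Cycles on the GPU.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

# Pick the OptiX backend (NVIDIA RTX GPUs) and enable every matching device.
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPTIX'
for device in prefs.get_devices_for_type('OPTIX'):
    device.use = True

# Use the OptiX AI denoiser for cleaner interactive previews.
scene.cycles.denoiser = 'OPTIX'
```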
Next, the artist moved into Substance 3D Painter to build textures characteristic of his custom look, which, he said, is “a tactile vibe that conveys an interesting mix of unconventional materials.”
NVIDIA Iray technology and the RTX GPU played a critical role, with RTX-accelerated light and ambient occlusion baking photorealistic textures in mere seconds.
SouthernShotty then imported renders into Substance 3D Stager to apply textures and experiment with colors. Substance 3D Stager’s latest update added SBSAR support, enabling faster exports and custom textures that are easy to plug and play, along with new options to apply blending modes and opacity.
Preset lighting options helped him light the scene with ease. With RTX-accelerated denoising, SouthernShotty could tweak and tinker with the scene in a highly interactive viewport with virtually no slowdown — allowing him to focus on creating without the waiting.
He quickly exported final passes in Blender before reaching the compositing stage, where he applied various GPU-accelerated effects in Adobe Photoshop, After Effects, Illustrator and Premiere Pro.
“GeForce RTX GPUs revolutionized the way I work. I no longer spend hours optimizing my scenes, waiting on preview renders, or packaging files for an expensive online render farm,” SouthernShotty said.
As SouthernShotty continues to refine Watermelon Girl, he’ll now have the powerful GeForce RTX 4090 at his disposal, the same GPU that TechRadar said “is more powerful than we even thought possible.”
When it’s time to export the final film, the RTX 40 Series’ dual AV1 encoders, available via the popular Voukoder plugin for Adobe Premiere Pro, will slash his export times and reduce his file sizes.
SouthernShotty recently tested the GeForce RTX 4090 GPU to see if it’s the best card for Blender and 3D.
In his testing, render speeds in Blender were 70% faster than on the previous generation.
Check out SouthernShotty’s linktree for Blender tutorials, social media links and more.
Join the #From2Dto3D Challenge
NVIDIA Studio wants to see your 2D to 3D progress.
Join the #From2Dto3D challenge this month for a chance to be featured on NVIDIA Studio’s social media channels, like @Rik_Vasquez.
Entering is easy: Simply post a piece of 2D art next to a corresponding 3D rendition on Instagram, Twitter or Facebook — and be sure to tag #From2Dto3D.
Sensor AI solutions specialist SenSen has turned to the NVIDIA Jetson edge AI platform to help regulators track heavy vehicles moving across Australia.
Australia’s National Heavy Vehicle Regulator, or NHVR, has a big job — ensuring the safety of truck drivers across some of the world’s most sparsely populated regions.
The regulator is now harnessing AI to improve safety and operational efficiency for the trucking industry — even as the trucks are on the move — using drones as well as compact, portable solar-powered trailers and vehicle-mounted automatic number-plate recognition cameras.
That’s a big change from current systems, which gather information after the fact, when it’s too late to use it to disrupt high-risk journeys and direct on-the-road compliance in real time.
Current license plate recognition systems are often fixed in place and can’t be moved to areas with the most traffic.
NHVR is developing and deploying real-time mobile cameras on multiple platforms to address this challenge, including vehicle-mounted, drone-mounted and roadside trailer-mounted systems.
The regulator turned to Australia-based SenSen, an NVIDIA Metropolis partner, to build these systems for the pilot program, including two trailers, a pair of vehicles and a drone.
“SenSen technology helps the NHVR support affordable, adaptable and accurate road safety in Australia,” said Nathan Rogers, director of smart city solutions for Asia Pacific at SenSen.
NVIDIA Jetson helps SenSen create lightweight systems that have low energy needs and a small footprint, while being able to handle multiple camera streams integrated with lidar and inertial sensors. These systems operate solely on solar and battery power and are rapidly deployable.
NVIDIA technologies also play a vital role in the systems’ ability to intelligently analyze data fused from multiple cameras and sensors.
To train the AI application, SenSen relies on NVIDIA GPUs and the NVIDIA TAO Toolkit, fast-tracking model development with transfer learning to refine the accuracy and optimize the performance of its object-detection models.
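The TAO Toolkit has its own training workflow, but the underlying transfer-learning idea can be sketched generically in PyTorch: start from pretrained weights, swap in a task-specific head and fine-tune it. The two-class roadside task below is hypothetical, not SenSen’s actual pipeline.

```python
import torch
from torch import nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Swap in a new head for a hypothetical two-class roadside task.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch; a real pipeline iterates a DataLoader
# of labeled camera frames.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small new head trains while the backbone’s learned features are reused, far less labeled data and compute are needed than when training from scratch.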
To run the AI app, SenSen relies on the NVIDIA DeepStream software development kit for highly optimized video analysis in real time on NVIDIA Jetson Nano- and AGX Xavier-based systems.
These mobile systems promise to help safety and compliance officers identify and disrupt high-risk journeys in real time.
This allows clients to get accurate data reliably, quickly identify operators who obey road rules and help policymakers make better decisions about road safety over the long term.
“Using this solution to obtain real-time heavy vehicle sightings from any location in Australia allows us to further digitize our operations and create a more efficient and safer heavy-vehicle industry in Australia,” said Paul Simionato, director of the southern region at NHVR.
The ultimate goal: waste less time tracking repeat compliant vehicles, present clearer information on vehicles and loads, and use vehicles as a mobile intelligence tool.
And perhaps best of all, operators who are consistently compliant can expect to be less regularly intercepted, creating a strong incentive for the industry to increase compliance.
When two technologies converge, they can create something new and wonderful — as when cellphones and browsers were fused to forge smartphones.
Today, developers are applying AI’s ability to find patterns to massive graph databases that store information about relationships among data points of all sorts. Together, they produce a powerful new class of tools called graph neural networks.
What Are Graph Neural Networks?
Graph neural networks apply the predictive power of deep learning to rich data structures that depict objects and their relationships as points connected by lines in a graph.
In GNNs, data points are called nodes and the lines linking them are called edges. Both are expressed mathematically so machine learning algorithms can make useful predictions at the level of nodes, edges or entire graphs.
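As a rough illustration of that data structure, here is a toy graph in Python: node features as an array, edges as index pairs. The sizes and values are arbitrary.

```python
import numpy as np

# Four nodes, each carrying a 3-dim feature vector (e.g., user attributes).
node_features = np.random.rand(4, 3)

# Edges as (source, target) index pairs; an undirected link is stored twice.
edges = np.array([[0, 1], [1, 0], [1, 2], [2, 1], [2, 3], [3, 2]])

# Optional features on the edges themselves (e.g., a transaction amount).
edge_features = np.random.rand(len(edges), 1)

# A GNN learns functions over this structure to output a label per node,
# a score per edge, or a single prediction for the whole graph.
```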
What Can GNNs Do?
An expanding list of companies is applying GNNs to improve drug discovery, fraud detection and recommendation systems. These applications and many more rely on finding patterns in relationships among data points.
Researchers are exploring use cases for GNNs in computer graphics, cybersecurity, genomics and materials science. A recent paper reported how GNNs used transportation maps as graphs to improve predictions of arrival time.
Many branches of science and industry already store valuable data in graph databases. With deep learning, they can train predictive models that unearth fresh insights from their graphs.
“GNNs are one of the hottest areas of deep learning research, and we see an increasing number of applications take advantage of GNNs to improve their performance,” said George Karypis, a senior principal scientist at AWS, in a talk earlier this year.
Others agree. GNNs are “catching fire because of their flexibility to model complex relationships, something traditional neural networks cannot do,” said Jure Leskovec, an associate professor at Stanford, speaking in a recent talk in which he showed a chart of AI papers that mention them.
Who Uses Graph Neural Networks?
Amazon reported in 2017 on its work using GNNs to detect fraud. In 2020, it rolled out a public GNN service that others could use for fraud detection, recommendation systems and other applications.
To maintain its customers’ high level of trust, Amazon Search employs GNNs to detect malicious sellers, buyers and products. Using NVIDIA GPUs, it’s able to explore graphs with tens of millions of nodes and hundreds of millions of edges while reducing training time from 24 hours to five.
For its part, biopharma company GSK maintains a knowledge graph with nearly 500 billion nodes that is used in many of its machine learning models, said Kim Branson, the company’s global head of AI, speaking on a panel at a GNN workshop.
LinkedIn uses GNNs to make social recommendations and understand the relationships between people’s skills and their job titles, said Jaewon Yang, a senior staff software engineer at the company, speaking on another panel at the workshop.
“GNNs are general-purpose tools, and every year we discover a bunch of new apps for them,” said Joe Eaton, a distinguished engineer at NVIDIA who is leading a team applying accelerated computing to GNNs. “We haven’t even scratched the surface of what GNNs can do.”
In yet another sign of the interest in GNNs, videos of a course on them that Leskovec teaches at Stanford have received more than 700,000 views.
How Do GNNs Work?
To date, deep learning has mainly focused on images and text, types of structured data that can be described as sequences of words or grids of pixels. Graphs, by contrast, are unstructured. They can take any shape or size and contain any kind of data, including images and text.
Using a process called message passing, GNNs organize graphs so machine learning algorithms can use them.
Message passing embeds into each node information about its neighbors. AI models employ the embedded information to find patterns and make predictions.
For example, recommendation systems use a form of node embedding in GNNs to match customers with products. Fraud detection systems use edge embeddings to find suspicious transactions, and drug discovery models compare entire graphs of molecules to find out how they react to each other.
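A minimal sketch of one message-passing round, using the same kind of toy graph as above, might look like the following. Real GNN layers replace the fixed averaging and mixing below with learned weight matrices and nonlinearities.

```python
import numpy as np

# Toy graph: 4 nodes with 3-dim embeddings, directed edge pairs.
h = np.random.rand(4, 3)
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]

def message_passing_step(h, edges):
    """One round of message passing with mean aggregation.

    Every node receives its neighbors' current embeddings as messages,
    averages them, and mixes the result into its own embedding.
    """
    agg = np.zeros_like(h)
    counts = np.zeros(len(h))
    for src, dst in edges:
        agg[dst] += h[src]   # the message from src flows along the edge to dst
        counts[dst] += 1
    neighbor_mean = agg / np.maximum(counts, 1)[:, None]
    return 0.5 * h + 0.5 * neighbor_mean  # fixed mix; real GNNs learn this step

# After k rounds, each node's embedding summarizes its k-hop neighborhood.
for _ in range(2):
    h = message_passing_step(h, edges)
```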
GNNs are unique in two other ways: They use sparse math, and the models typically only have two or three layers. Other AI models generally use dense math and have hundreds of neural-network layers.
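That sparsity is why graph frameworks store adjacency in compressed sparse formats rather than dense matrices. A quick sketch with SciPy shows the difference in scale; the numbers below are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix

# A graph with 100,000 nodes and ~10 edges per node is almost all zeros
# when written as an adjacency matrix, so only nonzeros are stored.
num_nodes = 100_000
rows = np.random.randint(0, num_nodes, size=10 * num_nodes)
cols = np.random.randint(0, num_nodes, size=10 * num_nodes)
adj = csr_matrix((np.ones(len(rows)), (rows, cols)),
                 shape=(num_nodes, num_nodes))

print(f"stored nonzeros: {adj.nnz:,}")  # about 1 million entries
# A dense float32 equivalent would need 100,000^2 values, roughly 40 GB.

# One neighborhood-aggregation pass is just a sparse-dense matrix multiply.
features = np.random.rand(num_nodes, 16).astype(np.float32)
aggregated = adj @ features
```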
What’s the History of GNNs?
A 2009 paper from researchers in Italy was the first to give graph neural networks their name. But it took eight years before two researchers in Amsterdam demonstrated their power with a variant they called a graph convolutional network (GCN), which is one of the most popular GNNs today.
The GCN work inspired Leskovec and two of his Stanford grad students to create GraphSage, a GNN that showed new ways the message-passing function could work. He put it to the test in the summer of 2017 at Pinterest, where he served as chief scientist.
Their implementation, PinSage, was a recommendation system that packed 3 billion nodes and 18 billion edges to outperform other AI models at that time.
Pinterest applies it today on more than 100 use cases across the company. “Without GNNs, Pinterest would not be as engaging as it is today,” said Andrew Zhai, a senior machine learning engineer at the company, speaking on an online panel.
Meanwhile, other variants and hybrids have emerged, including graph recurrent networks and graph attention networks (GATs). GATs borrow the attention mechanism defined in transformer models to help GNNs focus on the portions of datasets that are of greatest interest.
Scaling Graph Neural Networks
Looking forward, GNNs need to scale in all dimensions.
Organizations that don’t already maintain graph databases need tools to ease the job of creating these complex data structures.
Those who use graph databases know they’re growing, in some cases to thousands of features embedded on a single node or edge. That presents the challenge of efficiently loading these massive datasets from storage subsystems, across networks, to processors.
“We’re delivering products that maximize the memory and computational bandwidth and throughput of accelerated systems to address these data loading and scaling issues,” said Eaton.
As part of that work, NVIDIA announced at GTC it is now supporting PyTorch Geometric (PyG) in addition to the Deep Graph Library (DGL). These are two of the most popular GNN software frameworks.
NVIDIA-optimized DGL and PyG containers are performance-tuned and tested for NVIDIA GPUs. They provide an easy place to start developing applications using GNNs.
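As a starting point, a minimal node-classification model in PyTorch Geometric looks roughly like this, assuming PyG is installed; the graph and dimensions are toy values.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A tiny graph: 3 nodes with 16-dim features; edges as [sources, targets] rows.
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
data = Data(x=torch.randn(3, 16), edge_index=edge_index)

class GCN(torch.nn.Module):
    """Two-layer graph convolutional network for node classification."""
    def __init__(self, in_dim=16, hidden_dim=32, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

# Runs on a GPU when one is available, matching the shallow two- or
# three-layer designs typical of GNNs.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = GCN().to(device)
data = data.to(device)
logits = model(data.x, data.edge_index)  # one class-score vector per node
```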
To learn more, watch a talk on accelerating and scaling GNNs with DGL and GPUs by Da Zheng, a senior applied scientist at AWS. In addition, NVIDIA engineers hosted separate talks on accelerating GNNs with DGL and PyG.
To get started today, sign up for our early access program for DGL and PyG.
GeForce NOW expands touch control support to 13 more games this GFN Thursday. That means it’s easier than ever to take PC gaming on the go using mobile devices and tablets. The new “Mobile Touch Controls” row in the GeForce NOW app is the easiest way for members to find which games put the action right at their fingertips.
For a new way to play, members can soon experience these enhanced mobile games and more streaming on the newly announced Razer Edge 5G handheld gaming device.
And since GFN Thursday means more games every week, get ready for eight new titles in the GeForce NOW library, including A Plague Tale: Requiem.
Plus, the latest GeForce NOW Android app update is rolling out now, adding Adaptive VSync support in select games to improve frame stuttering and screen tearing.
Victory at Your Fingertips
Gamers on the go, rejoice! Enhanced mobile touch controls are now available for more than a dozen additional GeForce NOW games when playing on mobile devices and tablets.
These games join Fortnite and Genshin Impact as touch-enabled titles in the GeForce NOW library, removing the need to bring a controller when away from your battlestation.
Here’s the full list of games with touch-control support streaming on GeForce NOW on mobile devices and tablets:
To get right into gaming, use the new “Mobile Touch Controls” row in the GeForce NOW app to find your next adventure.
The Razer Edge of Glory
Announced last week at RazerCon, the new Razer Edge 5G handheld device launches in January 2023 with the GeForce NOW app installed right out of the box.
The Razer Edge 5G is a dedicated 5G console, featuring a 6.8-inch AMOLED touchscreen display that pushes up to a 144Hz refresh rate at 1080p — perfect for GeForce NOW RTX 3080 members who can stream at ultra-low latency and 120 frames per second.
The Razer Edge 5G is powered by the latest Snapdragon G3x Gen 1 Gaming Platform and runs on Verizon 5G Ultra Wideband. With a beautiful screen and full connectivity, gamers will have another great way to stream their PC gaming libraries from Steam, Epic, Ubisoft, Origin and more using GeForce NOW. Members can reserve the upcoming Razer Edge 5G ahead of its January 2023 release.
Razer’s new handheld joins a giant list of devices that support GeForce NOW, including PCs, Macs, Chromebooks, iOS Safari, Android mobile and TV devices, and NVIDIA SHIELD TV.
Members can also stream their PC libraries on the Logitech G Cloud handheld and Cloud Gaming Chromebooks from Asus, Acer and Lenovo, all available beginning this week.
Oh, Look – More Games!
That’s not all — every GFN Thursday brings a new pack of games.
Start a new adventure with the newly released A Plague Tale: Requiem, part of eight new titles streaming this week.
A Plague Tale: Requiem (New release on Steam and Epic Games)
Batora – Lost Haven (New release on Steam, Oct. 20)
Warhammer 40,000: Shootas, Blood & Teef (New release on Steam and Epic Games, Oct. 20)
It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically yearslong and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.
Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.
Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.
Subscribe to the AI Podcast: Now Available on Amazon Music
Speeding adoption of enterprise AI and accelerated computing, Oracle CEO Safra Catz and NVIDIA founder and CEO Jensen Huang discussed their companies’ expanding collaboration in a fireside chat live streamed today from Oracle CloudWorld in Las Vegas.
Oracle and NVIDIA announced plans to bring NVIDIA’s full accelerated computing stack to Oracle Cloud Infrastructure (OCI). It includes NVIDIA AI Enterprise, NVIDIA RAPIDS for Apache Spark and NVIDIA Clara for healthcare.
In addition, OCI will deploy tens of thousands more NVIDIA GPUs to its cloud service, including A100 and upcoming H100 accelerators.
“I’m unbelievably excited to announce our renewed partnership and the expanded capabilities our cloud has,” said Catz to a live and online audience of several thousand customers and developers.
“We’re thrilled you’re bringing your AI solutions to OCI,” she told Huang.
The Power of Two
The combination of Oracle’s heritage in data and its powerful infrastructure with NVIDIA’s expertise in AI will give users traction in facing the tough challenges ahead, Huang said.
“Industries around the world need big benefits from our industry to find ways to do more without needing to spend more or consume more energy,” he said.
AI and GPU-accelerated computing are delivering these benefits at a time when traditional methods of increasing performance are slowing, he added.
“Data that you harness to find patterns and relationships can automate the way you work and the products and services you deliver — the next ten years will be some of the most exciting times in our industry,” Huang said.
“I’m confident all workloads will be accelerated for better performance, to drive costs out and for energy efficiency,” he added.
The capability of today’s software and hardware, coming to the cloud, “is something we’ve dreamed about since our early days,” said Catz, who joined Oracle in 1999 and has been its CEO since 2014.
Benefits for Healthcare and Every Industry
“One of the most critical areas is saving lives,” she added, pointing to the two companies’ work in healthcare.
A revolution in digital biology is transforming healthcare from a science-driven industry to one powered by both science and engineering. And NVIDIA Clara provides a platform for that work, used by healthcare experts around the world, Huang said.
“We can now use AI to understand the language of proteins and chemicals, all the way to gene screening and quantum chemistry — amazing breakthroughs are happening now,” he said.
AI promises similar advances for every business. The automotive industry, for example, is becoming a tech industry as it discovers its smartphone moment, he said.
“We see this all over with big breakthroughs in natural language processing and large language models that can encode human knowledge to apply to all kinds of skills they were never trained to do,” he said.
Meta today announced its next-generation AI platform, Grand Teton, including NVIDIA’s collaboration on design.
Compared to the company’s previous generation Zion EX platform, the Grand Teton system packs in more memory, network bandwidth and compute capacity, said Alexis Bjorlin, vice president of Meta Infrastructure Hardware, at the 2022 OCP Global Summit, an Open Compute Project conference.
AI models are used extensively across Facebook for services such as news feed, content recommendations and hate-speech identification, among many other applications.
“We’re excited to showcase this newest family member here at the summit,” Bjorlin said, adding her thanks to NVIDIA for its deep collaboration on Grand Teton’s design and continued support of OCP.
Designed for Data Center Scale
Named after the 13,000-foot mountain that crowns one of Wyoming’s two national parks, Grand Teton uses NVIDIA H100 Tensor Core GPUs to train and run AI models that are rapidly growing in their size and capabilities, requiring greater compute.
The NVIDIA Hopper architecture, on which the H100 is based, includes a Transformer Engine to accelerate work on these neural networks, which are often called foundation models because they can address an expanding set of applications from natural language processing to healthcare, robotics and more.
The NVIDIA H100 is designed for performance as well as energy efficiency. H100-accelerated servers, when connected with NVIDIA networking across thousands of servers in hyperscale data centers, can be 300x more energy efficient than CPU-only servers.
“NVIDIA Hopper GPUs are built for solving the world’s tough challenges, delivering accelerated computing with greater energy efficiency and improved performance, while adding scale and lowering costs,” said Ian Buck, vice president of hyperscale and high performance computing at NVIDIA. “With Meta sharing the H100-powered Grand Teton platform, system builders around the world will soon have access to an open design for hyperscale data center compute infrastructure to supercharge AI across industries.”
Mountain of a Machine
Grand Teton sports 2x the network bandwidth and 4x the bandwidth between host processors and GPU accelerators compared to Meta’s prior Zion system, Meta said.
The added network bandwidth enables Meta to create larger clusters of systems for training AI models, Bjorlin said. It also packs more memory than Zion to store and run larger AI models.
Simplified Deployment, Increased Reliability
Packing all these capabilities into one integrated server “dramatically simplifies deployment of systems, allowing us to install and provision our fleet much more rapidly, and increase reliability,” said Bjorlin.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll be deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
Adobe MAX is inspiring artists around the world to bring their ideas to life. The leading creative conference runs through Thursday, Oct. 20, in person and virtually.
Plus, artist Anna Natter transforms 2D photos into full-fidelity 3D assets using the power of AI and state-of-the-art photogrammetry technology this week In the NVIDIA Studio.
The new Adobe features, the latest NVIDIA Studio laptops and more are backed by the October NVIDIA Studio Driver available for download today.
Unleash MAXimum Performance
Press and content creators have been putting the new GeForce RTX 4090 GPU through a wide variety of creative workflows — here’s a sampling of their reviews:
“NVIDIA’s new flagship graphics card brings massive gains in rendering and GPU compute-accelerated content creation.” — Forbes
“GeForce RTX 4090 just puts on a clinic, by absolutely demolishing every other card here. In a lot of cases it’s almost cutting rendering times in half.” — Hardware Canucks
“If you care about rendering performance to the point that you always lock your eyes on a top-end target, then the RTX 4090 is going to prove to be an absolute screamer.” — Tech Gage
“The NVIDIA GeForce RTX 4090 is more powerful than we even thought possible.” — TechRadar
“As for the 3D performance of Blender and V-Ray, it delivers a nearly 2x performance increase, which makes it undoubtedly the most powerful weapon for content creators.” — XFastest
“NVIDIA has been providing Studio Drivers for GeForce series graphics cards, and they added dual hardware encoders and other powerful tools to help creators maximize their creativity. We can say it’s a new-gen GPU king suitable for top-notch gamers and creators.” — Techbang
Pick up the GeForce RTX 4090 GPU or a pre-built system today by heading to our Product Finder.
Enjoy MAXimum Creativity
Adobe is all in on the AI revolution, adopting AI-powered features across its lineup of Adobe Creative Cloud and Substance 3D apps. The updates simplify repetitive tasks and make advanced effects accessible.
Creators equipped with GeForce RTX GPUs, especially those part of the new RTX 40 Series, are primed to benefit from remarkable GPU acceleration of AI features in Adobe Creative Cloud.
Adobe Premiere Pro
Adobe Premiere Pro is getting RTX acceleration for AI features, resulting in significant performance boosts for AI effects. For example, the Unsharp Mask filter will see a 4.5x speedup, and the Posterize Time effect over 2x, compared with running them on a CPU (performance measured on a GeForce RTX 3090 Ti and Intel i9 12900K).
Adobe Photoshop
The new beta Photo Restoration feature uses AI-powered neural filters to process imagery, add tone and minimize the effects of film grain. Photo Restoration can be applied to a single image or batches of imagery to quickly and conveniently improve the picture quality of an artist’s portfolio.
Photoshop’s AI-powered Object Selection Tool allows artists to apply a selection to a particular object within an image. The user can manipulate the selected object, add filters and fine-tune details.
This saves artists the huge amount of time it takes to mask imagery — and in beta it runs 3x faster on the GeForce RTX 3060 Ti than on Intel UHD Graphics 700 and 4x faster than on the Apple M1 Ultra.
Adobe Photoshop Lightroom Classic
The latest version of Adobe Photoshop Lightroom Classic makes it easy for users to create stunning final images with powerful new AI-powered masking tools.
With just a few clicks, these AI masks can identify and mask key elements within an image, including the main subject, sky and background, and can even select individuals within an image and apply masks to adjust specific areas, such as hair, face, eyes or lips.
Adobe Substance 3D
Substance 3D Modeler is now available in general release. Modeler can help create concept art — it’s perfect for sketching and prototyping, blocking out game levels, crafting detailed characters and props, or sculpting an entire scene in a single app. Its ability to switch between desktop and virtual reality is especially useful, depending on project needs and the artist’s preferred style of working.
Substance 3D Sampler added its photogrammetry feature, currently in private beta, which automatically converts photos of real-world objects into textured 3D models without the need to fiddle with sliders or tweak values. With a few clicks, the artist can now create 3D assets. This feature serves as a bridge for 2D artists looking to make the leap to 3D.
Adobe Creative Cloud and Substance 3D
These advancements join the existing lineup of GPU-accelerated and AI-enhanced Adobe apps, with features that continue to evolve and improve:
Adobe Camera RAW — AI-powered Select Objects and Select People masking tools
After Effects — Improved AI-powered Scene Edit Detection and H.264 rendering for faster exports with hardware-accelerated output
Illustrator — Substance 3D materials plugin for faster access to assets and direct export of Universal Scene Description (USD) files
Photoshop Elements — AI-powered Moving Elements add motion to a still image
Premiere Elements — AI-powered Artistic Effects transform clips with effects inspired by famous works of art or popular art styles
Premiere Pro — Adds Auto Color to apply intelligent color corrections to video clips, adjusting exposure, white balance and contrast to enhance footage, plus GPU-accelerated Lumetri scopes and faster Motion Graphics Templates
Substance 3D Painter — SBSAR Exports for faster exports and custom textures that are easy to plug and play, plus new options to apply blending modes and opacity
Try these features on an NVIDIA Studio system equipped with a GeForce RTX GPU, and experience the ease and speed of RTX-accelerated creation.
October NVIDIA Studio Driver
This NVIDIA Studio Driver provides optimal support for the latest creative applications, including Topaz Sharpen AI and DXO Photo. In addition, it supports the new application updates announced at Adobe MAX, including Premiere Pro, Photoshop, Photoshop Lightroom Classic and more.
Anna Natter, this week’s featured In the NVIDIA Studio artist, is a 3D artist at heart who likes to experiment with different mediums. She has a fascination with AI — both the technology it’s built on and its ever-expanding role in content creation.
“It’s an interesting debate where the ‘art’ starts when it comes to AI,” said Natter. “After almost a year of playing with AI, I’ve been working on developing my own style and figuring out how I can make it mine.”
In the image above, Natter applied Photoshop Neural Filters, which were accelerated by her GeForce RTX 3090 GPU. “It’s always a good idea to use your own art for filters, so you can give everything a unique touch. So if you ask me if this is my art or not, it 100% is!” said the artist.
Natter has a strong passion for photogrammetry, she said, as virtually anything can be preserved in 3D. Photogrammetry features have the potential to save 3D artists countless hours. “I create hyperrealistic 3D models of real-life objects which I could not have done by hand,” she said. “Well, maybe I could’ve, but it would’ve taken forever.”
The artist even scanned her sweet pup Szikra to create a virtual 3D copy of her that will last forever.
To test the private beta photogrammetry feature in Substance 3D Sampler, Natter created this realistic tree model from a single series of images.
Natter captured a video of a tree in a nearby park in her home country of Germany. The artist then imported the footage into Adobe After Effects and exported the frames as an image sequence. After Effects contains over 30 features accelerated by RTX GPUs, which improved Natter’s workflow.
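Outside of After Effects, the same video-to-image-sequence step can be approximated in a few lines of Python with OpenCV; the file name and sampling rate here are hypothetical.

```python
import os
import cv2  # pip install opencv-python

os.makedirs("frames", exist_ok=True)
capture = cv2.VideoCapture("tree_walkaround.mp4")  # hypothetical file name

frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Keep every 10th frame; consecutive frames overlap too much to add
    # useful parallax for photogrammetry.
    if frame_index % 10 == 0:
        cv2.imwrite(f"frames/frame_{frame_index:05d}.png", frame)
    frame_index += 1
capture.release()
```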
Once she was happy with the 3D image quality, Natter dropped the model from Substance 3D Sampler into Substance 3D Stager. The artist then applied true-to-life materials and textures to the scene and color matched the details to the scanned model with the Stager color picker.
Natter then lit the scene with a natural outdoor High Dynamic Range Image (HDRI), one of the pre-built environment-lighting options in 3D Stager. “What I really like about the Substance 3D suite is that it cuts the frustration out of my workflow, and I can just do my thing in a flow state, without interruption, because everything is compatible and works together so well,” she said.
The GeForce RTX 3090 GPU accelerated her workflow within 3D Stager, with RTX-accelerated and AI-powered denoising in the viewport unlocking interactivity and smooth movement. When it came time to render, RTX-accelerated ray tracing quickly delivered photorealistic 3D renders, up to 7x faster than with CPU alone.
“I’ve always had an NVIDIA GPU since I’ve been working in video editing for the past decade and wanted hardware that works best with my apps. The GeForce RTX 3090 has made my life so much easier, and everything gets done so much faster.” — 3D artist Anna Natter
Natter can’t contain her excitement for the eventual general release of the Sampler photogrammetry feature. “As someone who has invested so much in 3D design, I literally can’t wait to see what people are going to create with this,” she said.
NVIDIA Studio wants to see your 2D to 3D progress!
Join the #From2Dto3D challenge this month for a chance to be featured on the NVIDIA Studio social media channels, like @JennaRambles, whose goldfish sketch was transformed into a beautiful 3D image.