Shout at the Devil: Capcom’s ‘Devil May Cry 5’ Joins GeForce NOW

GFN Thursday is downright demonic, as Devil May Cry 5 comes to GeForce NOW.

Capcom’s action-packed third-person brawler leads 15 titles joining the GeForce NOW library this week, including Gears Tactics and The Crew Motorfest.

It’s also the last week to take on the Ultimate KovaaK’s Challenge. Get on the leaderboard today for a chance to win a 240Hz gaming monitor, a gaming Chromebook, GeForce NOW memberships or other prizes. The challenge ends on Thursday, Sept. 21.

The Devil Returns

Devil May Cry 5 on GeForce NOW
Jackpot!

Devil May Cry 5 is the next title from Capcom’s catalog to come to GeForce NOW. Members can stream all of its high-octane, stylish action at GeForce RTX quality to nearly any device, thanks to the power of GeForce NOW cloud gaming servers.

The threat of demonic power has returned to menace the world once again. Take on hordes of enemies as Nero, V or the legendary Dante with the ramped-up sword-and-gun gameplay that the series is known for. Battle epic bosses in adrenaline-fueled fights across the overrun Red Grave City — all to the beat of a truly killer soundtrack.

Take the action on the go: GeForce NOW Priority members can stream the fight on nearly any device at up to 1080p and 60 frames per second.

Kickin’ It Into High Gear

Gears Tactics on GeForce NOW
A squad of survivors is all it takes to stop the Locust threat.

Rise up and fight, members. Gears Tactics is the next PC Game Pass title to arrive in the cloud.

Gears Tactics is a fast-paced, turn-based strategy game from one of the most acclaimed video game franchises — Gears of War. Set a dozen years before the first Gears of War game, the Gears Tactics story opens as cities on the planet Sera begin falling to the monstrous threat rising from underground: the Locust Horde. With the government in disarray, a squad of survivors emerges as humanity’s last hope. Play as the defiant soldier Gabe Diaz to recruit, develop and command squads on a desperate mission to hunt down the relentless and powerful leader of the Locust army, Ukkon, the group’s monster-making mastermind.

Fight for survival and outsmart the enemy in the sharpness of 4K resolution, streamed from the cloud with a GeForce NOW Ultimate membership.

Hit the Road, Jack

The Crew Motorfest on GeForce NOW
The best way to see Hawaii is by car, at 100 mph.

The Crew Motorfest also comes to GeForce NOW this week. The latest entry in Ubisoft’s racing franchise drops drivers into the open roads of Oahu, Hawaii. Get behind the wheel of 600+ iconic vehicles from the past, present and future, including sleek sports cars, rugged off-road vehicles and high-performance racing machines. Race alone or with friends through the bustling city of Honolulu, test off-roading skills on the ashy slopes of a volcano or kick back on the sunny beaches behind the wheel of a buggy.

Members can take a test drive from Sept. 14-17 with a five-hour free trial. Explore the vibrant Hawaiian open world, participate in thrilling driving activities and collect prestigious cars, with all progress carrying over to the full game purchase.

Take the pole position with a GeForce NOW Ultimate membership to stream The Crew Motorfest and more than 1,600 other titles at the highest frame rates. Upgrade today.

A New Challenge

Gunbrella on GeForce NOW
Rain, rain, go away. The umbrella is also a gun today.

With GeForce NOW, there’s always something new to play. Here’s what’s hitting the playlist this week:

  • Tavernacle! (New release on Steam, Sept. 11)
  • Gunbrella (New release on Steam, Sept. 13)
  • The Crew Motorfest (New release on Ubisoft Connect, Sept. 14)
  • Amnesia: The Bunker (Xbox, available on PC Game Pass)
  • Descenders (Xbox, available on PC Game Pass)
  • Devil May Cry 5 (Steam)
  • Gears Tactics (Steam and Xbox, available on PC Game Pass)
  • Last Call BBS (Xbox)
  • The Matchless Kungfu (Steam)
  • Mega City Police (Steam)
  • Opus Magnum (Xbox)
  • Remnant II (Epic Games Store)
  • Space Hulk: Deathwing – Enhanced Edition (Xbox)
  • Superhot (Xbox)
  • Vampyr (Xbox)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More

Unlocking the Language of Genomes and Climates: Anima Anandkumar on Using Generative AI to Tackle Global Challenges

Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research.

Anima Anandkumar, Bren Professor at Caltech and senior director of AI research at NVIDIA, was recently invited to speak at the President’s Council of Advisors on Science and Technology.

At the talk, Anandkumar said, generative AI was described as “an inflection point in our lives,” with discussions swirling around how to “harness it to benefit society and humanity through scientific applications.”

On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anandkumar on generative AI’s potential to make splashes in the scientific community.

It can, for example, be fed DNA, RNA, viral and bacterial data to craft a model that understands the language of genomes. That model can help predict dangerous coronavirus variants to accelerate drug and vaccine research.

Generative AI can also predict extreme weather events like hurricanes or heat waves. Even with an AI boost, trying to predict natural events is challenging because of the sheer number of variables and unknowns.

“Those are the aspects we’re working on at NVIDIA and Caltech, in collaboration with many other organizations, to say, ‘How do we capture the multitude of scales present in the natural world?’” she said. “With the limited data we have, can we hope to extrapolate to finer scales? Can we hope to embed the right constraints and come up with physically valid predictions that make a big impact?”

Anandkumar adds that to ensure AI models are responsibly and safely used, existing laws must be strengthened to prevent dangerous downstream applications.

She also talks about the AI boom, which is transforming the role of humans across industries, and problems yet to be solved.

“This is the research advice I give to everyone: the most important thing is the question, not the answer,” she said.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Dr. Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

Read More

NVIDIA Lends Support to Washington’s Efforts to Ensure AI Safety

In an event at the White House today, NVIDIA announced support for voluntary commitments that the Biden Administration developed to ensure advanced AI systems are safe, secure and trustworthy.

The news came the same day NVIDIA’s chief scientist, Bill Dally, testified before a U.S. Senate subcommittee seeking input on potential legislation covering generative AI. Separately, NVIDIA founder and CEO Jensen Huang will join other industry leaders in a closed-door meeting on AI Wednesday with the full Senate.

Seven companies including Adobe, IBM, Palantir and Salesforce joined NVIDIA in supporting the eight agreements the Biden-Harris administration released in July with support from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

The commitments are designed to advance common standards and best practices to ensure the safety of generative AI systems until regulations are in place, the White House said. They include:

  • Testing the safety and capabilities of AI products before they’re deployed,
  • Safeguarding AI models against cyber and insider threats, and
  • Using AI to help meet society’s greatest challenges, from cancer to climate change.

Dally Shares NVIDIA’s Experience

In his testimony, Dally told the Senate subcommittee that government and industry should balance encouraging innovation in AI with ensuring models are deployed responsibly.

The subcommittee’s hearing, “Oversight of AI: Rules for Artificial Intelligence,” is among many actions policymakers around the world are taking to identify and address the potential risks of generative AI.

Earlier this year, the subcommittee heard testimonies from leaders of Anthropic, IBM and OpenAI, as well as academics such as Yoshua Bengio, a University of Montreal professor considered one of the godfathers of AI.

Dally, who leads a global team of more than 300 at NVIDIA Research, shared the witness table on Tuesday with Brad Smith, Microsoft’s president and vice chair. Dally’s testimony briefly encapsulated NVIDIA’s unique role in the evolution of AI over the last two decades.

How Accelerated Computing Sparked AI

He described how NVIDIA invented the GPU in 1999 as a graphics processing unit, then fit it for a broader role in parallel processing in 2006 with the CUDA programming software. Over time, developers across diverse scientific and technical computing fields found this new form of accelerated computing could significantly advance their work.

Along the way, researchers discovered that GPUs were also a natural fit for AI’s neural networks, which require massive parallel processing.

In 2012, the AlexNet model, trained on two NVIDIA GPUs, demonstrated human-like capabilities in image recognition. That result helped spark a decade of rapid advances using GPUs, leading to ChatGPT and other generative AI models used by hundreds of millions worldwide.

Today, accelerated computing and generative AI are showing the potential to transform industries, address global challenges and profoundly benefit society, said Dally, who chaired Stanford University’s computer science department before joining NVIDIA.

AI’s Potential and Limits

In written testimony, Dally provided examples of how AI is empowering professionals to do their jobs better than they might have imagined in fields as diverse as business, healthcare and climate science.

Like any technology, AI products and services have risks and are subject to existing laws and regulations that aim to mitigate those risks.

Industry also has a role to play in deploying AI responsibly. Developers set limits for AI models when they train them and define their outputs.

Dally noted that NVIDIA released NeMo Guardrails in April, open-source software developers can use to guide generative AI applications toward producing accurate, appropriate and secure text responses. He said that NVIDIA also maintains internal risk-management guidelines for AI models.

Eyes on the Horizon

Making sure that new and exceptionally large AI models are accurate and safe is a natural role for regulators, Dally suggested.

Subcommittee chair Sen. Richard Blumenthal (D-CT) welcomed Dally to the hearing.

He said that these “frontier” models are being developed at a gigantic scale. They exceed the capabilities of ChatGPT and other existing models that have already been well-explored by developers and users.

Dally urged the subcommittee to balance thoughtful regulation with the need to encourage innovation in an AI developer community that includes thousands of startups, researchers and enterprises worldwide. AI tools should be widely available to ensure a level playing field, he said.

During questioning, Senator Amy Klobuchar (D-MN) asked Dally why NVIDIA announced in March it’s working with Getty Images.

“At NVIDIA, we believe in respecting people’s intellectual property rights,” Dally replied. “We partnered with Getty to train large language models with a service called Picasso, so people who provided the original content got remunerated.”

In closing, Dally reaffirmed NVIDIA’s dedication to innovating generative AI and accelerated computing in ways that serve the best interests of all.

Read More

Mobility Gets Amped: IAA Show Floor Energized by Surge in EV Reveals, Generative AI

Generative AI’s transformative effect on the auto industry took center stage last week at the International Motor Show Germany, known as IAA, in Munich.

NVIDIA’s Danny Shapiro, VP of automotive marketing, explained in his IAA keynote how this driving force is accelerating innovation and streamlining processes — from advancing design, engineering and digital-twin deployment for optimizing manufacturing…to accelerating AV development with simulation…to enhancing retail experiences.

The generative AI message was also shared just ahead of the show in a fireside chat at NVIDIA headquarters with NVIDIA VP of Automotive Ali Kani and Aakash Arora, managing director and partner at Boston Consulting Group, who discussed the rapid pace of innovation and how generative AI will improve in-car experiences and transform the way vehicles are designed, manufactured and sold.

Electric Vehicles Dominate the Show Floor 

The auto industry’s move toward electrification was on full display at IAA, with a number of global automakers showcasing their current and upcoming electric mobility lineup.

Mercedes-Benz took the wraps off its Concept CLA Class, giving visitors insight into the brand’s future vision for the entry-level segment.

Designed on the upcoming Mercedes-Benz Modular Architecture (MMA) platform, the exterior of the Concept CLA Class teases an iconic design and evokes dynamic performance. Its interior provides the ultimate customer experience with exceptional comfort and convenience.

The combination of high performance, sustainability, safety and comfort paired with an outstanding digital experience will help Mercedes-Benz realize its Ambition 2039 vision to be net carbon neutral across its entire fleet of new vehicles by the end of the next decade.

As the first car to be developed on the MMA platform, the Concept CLA Class paves the way for next-gen electric-drive technology, and features Mercedes-Benz’s new operating system, MB.OS, with automated driving capabilities powered by NVIDIA DRIVE. With an anticipated range of more than 466 miles, the CLA Class has an 800V electric architecture that maximizes efficiency and performance and enables rapid charging. Configured for sporty rear-wheel drive, its modular design will also scale to other vehicle segments.

Lotus conducted test drives at IAA of its Lotus Eletre Hyper-SUV, which features an immersive digital cockpit, a battery range of up to 370 miles and autonomous-driving capabilities powered by the NVIDIA DRIVE Orin system-on-a-chip. With DRIVE at the wheel, the all-electric car offers server-level computing power that can be continuously enhanced during the car’s lifetime through over-the-air updates.

Lotus Eletre Hyper-SUV. Image courtesy of Lotus.

At IAA, U.S.-based Lucid Motors premiered its limited-production Lucid Air Midnight Dream Edition, an electric sedan that provides up to 496 miles of range and was created with the European market in mind.

The automaker also showcased other models, including its Lucid Air Pure, Air Touring and Air Grand Touring, which come with the DreamDrive Pro advanced driver-assistance system (ADAS) powered by the high-performance compute of NVIDIA DRIVE for a seamless automated driving experience.

Lucid Air Midnight Dream. Image courtesy of Lucid Motors.

China’s emerging EV makers — which have been quick to embrace the shift to electric powertrains and software-defined strategies — were also in force at IAA as they set their sights on the European market.

Auto giant BYD presented a diverse lineup of five EVs targeting the European market, along with the seven-seater DENZA D9 MPV, or multi-purpose vehicle, which features significant safety, performance and convenience options for drivers and passengers. DENZA is a joint venture brand between BYD and Mercedes-Benz.

The eco-friendly EVs demonstrate the latest in next-gen electric technology and underscore BYD’s position as a leading global car brand.

BYD booth at IAA. Image courtesy of BYD.

LeapMotor unveiled its new model, the C10 SUV, built on its LEAP 3.0 architecture. The vehicle is equipped with 30 high-resolution sensors, including lidar and 8-megapixel high-definition cameras, for accurate surround-perception capabilities. It’s powered by NVIDIA DRIVE Orin, which delivers 254 TOPS of compute to enable safe, high-speed and urban intelligent-driving capabilities.

LeapMotor C10 SUV. Image courtesy of LeapMotor.

XPENG’s inaugural presence at IAA served as the ideal opportunity to introduce its latest models to Europe, including its G9 and P7 EVs, with NVIDIA DRIVE Orin under the hood. Deliveries of the P7 recently commenced, with the vehicles now available in Norway, Sweden, Denmark and the Netherlands. The automaker’s intelligent G6 Coupe SUV, also powered by NVIDIA DRIVE Orin, will be made available to the European market next year.

XPENG G9 and P7. Image courtesy of XPENG.

Ecosystem Partners Paint IAA Show Floor Green

In addition to automakers, NVIDIA ecosystem partners at IAA showcased their latest innovations and developments in the mobility space:

  • DeepRoute.ai showed its Driver 3.0 HD Map-Free solution, built on NVIDIA DRIVE Orin and designed to offer a non-geofenced solution for mass-produced ADAS vehicles. The company plans to bring this NVIDIA-powered solution to the European market later next year and expand from there.
  • DeepScenario showed how it’s using NVIDIA hardware for training and inference on its AI models.
  • dRISK, an NVIDIA DRIVE Sim ecosystem member, demonstrated its full-stack solution for training, testing and validating Level 2-Level 5 ADAS/AV/ADS software, preparing autonomous systems to meet regulatory requirements and handle the full complexity of the real world.
  • NODAR introduced GridDetect, its latest 3D vision product for level 3 driving. Using off-the-shelf cameras and NVIDIA DRIVE Orin, NODAR’s latest system provides high-resolution, real-time 3D sensing at up to 1,000m and can detect objects as small as 10cm at 150m. GridDetect also provides a comprehensive bird’s-eye view of objects in all conditions — including in challenging scenarios like nighttime, adverse weather and severe fog.
  • SafeAD demonstrated its perception technology for mapless driving, fleet map updates and validation processes.
NODAR GridDetect system for high-resolution, real-time 3D sensing. Image courtesy of NODAR.
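The long-range sensing claims above rest on stereo triangulation: depth follows Z = f·B/d, so a longer baseline and finer disparity resolution extend the usable range. The sketch below illustrates that relation with hypothetical numbers, not NODAR specifications.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: ~5000 px focal length (8 MP-class sensor) and a wide
# 1.0 m baseline, the kind of geometry favored for long-range automotive stereo.
f_px, b_m = 5000.0, 1.0

# Disparity observed for a target at 150 m, and the depth shift caused by a
# quarter-pixel matching error -- why sub-pixel accuracy matters at range.
d = f_px * b_m / 150.0
err = stereo_depth(f_px, b_m, d - 0.25) - 150.0
print(f"disparity at 150 m: {d:.1f} px; depth error from 0.25 px: {err:.2f} m")
```

The same arithmetic shows why range scales with baseline: doubling B doubles the disparity available at any given distance.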

Read More

A Quantum Boost: cuQuantum With PennyLane Lets Simulations Ride Supercomputers

Ten miles in from Long Island’s Atlantic coast, Shinjae Yoo is revving his engine.

The computational scientist and machine learning group lead at the U.S. Department of Energy’s Brookhaven National Laboratory is one of many researchers gearing up to run quantum computing simulations on a supercomputer for the first time, thanks to new software.

Yoo’s engine, the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), is using the latest version of PennyLane, a quantum programming framework from Toronto-based Xanadu. The open-source software, which builds on the NVIDIA cuQuantum software development kit, lets simulations run on high-performance clusters of NVIDIA GPUs.

The performance is key because researchers like Yoo need to process ocean-size datasets. He’ll run his programs across as many as 256 NVIDIA A100 Tensor Core GPUs on Perlmutter to simulate about three dozen qubits, the basic units of computation in a quantum computer.

That’s about twice the number of qubits most researchers can model these days.

Powerful, Yet Easy to Use

The so-called multi-node version of PennyLane, used in tandem with the NVIDIA cuQuantum SDK, simplifies the complex job of accelerating massive simulations of quantum systems.

“This opens the door to letting even my interns run some of the largest simulations — that’s why I’m so excited,” said Yoo, whose team has six projects using PennyLane in the pipeline.

Brookhaven’s Shinjae Yoo prepares to scale up his quantum work on the Perlmutter supercomputer.

His work aims to advance high-energy physics and machine learning. Other researchers use quantum simulations to take chemistry and materials science to new levels.

Quantum computing is alive in corporate R&D centers, too.

For example, Xanadu is helping companies like Rolls-Royce develop quantum algorithms to design state-of-the-art jet engines for sustainable aviation and Volkswagen Group invent more powerful batteries for electric cars.

Four More Projects on Perlmutter

Meanwhile, at NERSC, at least four other projects are in the works this year using multi-node PennyLane, according to Katherine Klymko, who leads the quantum computing program there. They include efforts from NASA Ames and the University of Alabama.

“Researchers in my field of chemistry want to study molecular complexes too large for classical computers to handle,” she said. “Tools like PennyLane let them extend what they can currently do classically to prepare for eventually running algorithms on large-scale quantum computers.”

Blending AI, Quantum Concepts

PennyLane is the product of a novel idea. It adapts popular deep learning techniques like backpropagation and tools like PyTorch to programming quantum computers.
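That backpropagation analogy can be made concrete with the parameter-shift rule, the hardware-compatible gradient recipe PennyLane is known for. Below is a minimal NumPy-only sketch (a toy one-qubit simulation, not PennyLane’s implementation) showing that two shifted circuit evaluations recover the exact gradient of a circuit’s output.

```python
import numpy as np

def circuit(theta):
    """Expectation value <Z> after RY(theta) on |0>; analytically cos(theta)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # RY(theta)|0>
    pauli_z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return state @ pauli_z @ state

def parameter_shift(f, theta, s=np.pi / 2):
    """Exact gradient from two circuit runs: (f(theta+s) - f(theta-s)) / 2."""
    return (f(theta + s) - f(theta - s)) / 2

theta = 0.7
# Matches the analytic derivative of cos(theta), i.e. -sin(theta)
print(np.isclose(parameter_shift(circuit, theta), -np.sin(theta)))  # True
```

Because the gradient comes from ordinary circuit evaluations, the same recipe works whether the circuit runs on a simulator or on real quantum hardware.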

Xanadu designed the code to run across as many types of quantum computers as possible, so the software got traction in the quantum community soon after its introduction in a 2018 paper.

“There was engagement with our content, making cutting-edge research accessible, and people got excited,” recalled Josh Izaac, director of product at Xanadu and a quantum physicist who was an author of the paper and a developer of PennyLane.

Calls for More Qubits

A common comment on the PennyLane forum these days is, “I want more qubits,” said Lee J. O’Riordan, a senior quantum software developer at Xanadu, responsible for PennyLane’s performance.

“When we started work in 2022 with cuQuantum on a single GPU, we got 10x speedups pretty much across the board … we hope to scale by the end of the year to 1,000 nodes — that’s 4,000 GPUs — and that could mean simulating more than 40 qubits,” O’Riordan said.
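The qubit counts in that quote map directly onto memory: a full statevector simulation stores 2^n complex amplitudes, so every added qubit doubles the footprint. A quick back-of-envelope check (standard quantum-simulation arithmetic, not Xanadu’s figures):

```python
def statevector_tib(n_qubits, bytes_per_amplitude=16):
    """Memory in TiB for a full statevector of 2**n complex128 amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 40

# ~36 qubits (Yoo's runs): 1.0 TiB -- already far beyond one GPU's memory,
# which is why the job spreads across up to 256 A100s.
print(statevector_tib(36))  # 1.0

# 40+ qubits: 16 TiB and up, the scale that thousands of GPUs unlock.
print(statevector_tib(40))  # 16.0
```

The doubling-per-qubit growth is also why "I want more qubits" is such a demanding request: going from 40 to 50 qubits multiplies the memory requirement by 1,024.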

Scientists are still formulating the questions they’ll address with that performance — the kind of problem they like to have.

Companies designing quantum computers will use the boost to test ideas for building better systems. Their work feeds a virtuous circle, enabling new software features in PennyLane that, in turn, enable more system performance.

Scaling Well With GPUs

O’Riordan saw early on that GPUs were the best vehicle for scaling PennyLane’s performance. Last year, he co-authored a paper on a method for splitting a quantum program across more than 100 GPUs to simulate more than 60 qubits, split into many 30-qubit sub-circuits.

Lee J. O’Riordan, PennyLane developer at Xanadu.

“We wanted to extend our work to even larger workloads, so when we heard NVIDIA was adding multi-node capability to cuQuantum, we wanted to support it as soon as possible,” he said.

Within four months, multi-node PennyLane was born.

“For a big, distributed GPU project, that was a great turnaround time. Everyone working on cuQuantum helped make the integration as easy as possible,” O’Riordan said.

The team is still collecting data, but so far on “sample-based workloads, we see almost linear scaling,” he said.

Or, as NVIDIA founder and CEO Jensen Huang might say, “The more you buy, the more you save.”

Read More

One Small Step for Artists, One Giant Leap for Creative-Kind

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks and demonstrates how NVIDIA Studio technology improves creative workflows. 

When it comes to converting 2D concepts into 3D masterpieces, self-taught visual development artist Alex Treviño has confidence in the potential of all aspiring creators.

“You may think it’s a complicated process, but trust me, it’s easier than you think,” he said.

The featured content creator of this week’s In the NVIDIA Studio installment, Treviño is the founder of AENDOM, a project with the mission of creating artwork rooted in storytelling elements and sharing creative processes to educate and inspire the next generation of artists.

From this initiative, the Lunar Rover collection was born.

Shooting for the Moon

The story behind the Lunar Rover collection comes from an exploration of grief and is inspired by the work of artist Mattias Adolfsson.

However, Treviño wanted to translate Adolfsson’s detailed and playful caricature style into his own 3D design.

Treviño’s inspiration, credit Mattias Adolfsson.

Treviño started gathering reference imagery and creating mood boards with the standalone program PureRef, which allowed him to play with different perspectives and styles while in the conceptual phase.

“I wanted the character to explore a desolate landscape where it is clear that, despite loneliness and abandonment, he continues to explore in allusion to the emotions of grief,” Treviño said.

Advanced sculpting in Blender.

He then shaped and sculpted models in his preferred 3D app, Blender. Using Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport, powered by his GeForce RTX 3080 Ti GPU, Treviño unlocked interactive, photorealistic modeling with smooth viewport movement.

“NVIDIA GPUs have a wide range of support and powerful performance, which ensures that I can rely on my GPU to work correctly and render images faster and with higher quality,” said Treviño.

Next, Treviño applied UV mapping to his models, which allowed him to texture them in Adobe Substance 3D Painter to create realistic, detailed textures.

UV mapping in Blender.

RTX-accelerated light and ambient occlusion baking optimized assets in mere moments.

Textures created in Adobe Substance 3D Painter.

“My GeForce RTX GPU’s capabilities were essential while texturing,” Treviño said. “Movement without lag and the ability to make speedy material changes effortlessly were especially helpful while swapping looks.”

Treviño moved to Adobe Illustrator to create alphas — color components that represent degrees of transparency or opacity of colors — as well as masks and patterns.

“GPU acceleration and AI-enabled features are essential tools, as they allow me to work more efficiently and produce higher-quality results,” said Treviño.

He returned to Blender, taking advantage of RTX-accelerated OptiX ray tracing in Blender Cycles for the fastest final-frame render.

Finally, Treviño imported the project into Adobe Photoshop for postproduction work, including adjusting color grading, sharpness, noise and chromatic aberration, and using look-up tables for retouching — just a few of the 30+ GPU-accelerated features at his disposal.

Stunning details in post-production and color correction thanks to Adobe Photoshop.

The end result achieved Treviño’s goal of creating a desolate landscape and alluding to the emotions of grief.

Beautiful yet desolate.

For a more detailed look at Treviño’s creative process, check out his five-part tutorial series, Creating 3D Lunar Rover w/ Alex Treviño, live on the NVIDIA Studio YouTube channel.

https://www.youtube.com/playlist?list=PL4w6jm6S2lzvy-mfeIHJiAmqN6ARz-DJt

Discover exclusive step-by-step tutorials from industry-leading artists, inspiring community showcases and more, powered by NVIDIA Studio hardware and software.

Lunar Lessons Learned

Treviño has three monumental pieces of advice for aspiring artists:

  1. Learn the basics of the entire pipeline process. Learn about modeling, texturing, rendering, post-production, marketing and promotion. Expertise across the board isn’t required but general understanding of each step is.
  2. Don’t be afraid to experiment. The best way to learn is by doing. Try new things and experiment with different techniques. Mistakes will lead to growth and evolved artistry.
  3. Find a community of like-minded artists. Connect in multiple communities to learn from others, share work and get valuable feedback.
3D visual development artist Alex Treviño.

Check out Treviño’s portfolio on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Read More

NVIDIA Grace Hopper Superchip Sweeps MLPerf Inference Benchmarks

In its debut on the MLPerf industry benchmarks, the NVIDIA GH200 Grace Hopper Superchip ran all data center inference tests, extending the leading performance of NVIDIA H100 Tensor Core GPUs.

The overall results showed the exceptional performance and versatility of the NVIDIA AI platform from the cloud to the network’s edge.

Separately, NVIDIA announced inference software that will give users leaps in performance, energy efficiency and total cost of ownership.

GH200 Superchips Shine in MLPerf

The GH200 links a Hopper GPU with a Grace CPU in one superchip. The combination provides more memory and bandwidth, plus the ability to automatically shift power between the CPU and GPU to optimize performance.

Separately, NVIDIA HGX H100 systems that pack eight H100 GPUs delivered the highest throughput on every MLPerf Inference test in this round.

Grace Hopper Superchips and H100 GPUs led across all MLPerf’s data center tests, including inference for computer vision, speech recognition and medical imaging, in addition to the more demanding use cases of recommendation systems and the large language models (LLMs) used in generative AI.

Overall, the results continue NVIDIA’s record of demonstrating performance leadership in AI training and inference in every round since the launch of the MLPerf benchmarks in 2018.

The latest MLPerf round included an updated test of recommendation systems, as well as the first inference benchmark on GPT-J, an LLM with six billion parameters (parameter count is a rough measure of an AI model’s size).
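For a sense of what six billion parameters means in practice, here’s a back-of-envelope weight-memory estimate (assuming 16-bit weights; an illustration, not an MLPerf figure):

```python
params = 6_000_000_000      # GPT-J parameter count
bytes_per_param = 2         # fp16/bf16 weights
weights_gib = params * bytes_per_param / 2 ** 30

# Weights alone, before activations or KV-cache -- roughly 11.2 GiB
print(f"~{weights_gib:.1f} GiB of weights")
```

That footprint fits comfortably in a single data center GPU, which is part of what makes GPT-J a tractable first LLM benchmark.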

TensorRT-LLM Supercharges Inference

To cut through complex workloads of every size, NVIDIA developed TensorRT-LLM, generative AI software that optimizes inference. The open-source library — which was not ready in time for the August MLPerf submission — enables customers to more than double the inference performance of their already purchased H100 GPUs at no added cost.

Performance increase using TRT-LLM on H100 GPUs for AI inference

NVIDIA’s internal tests show that using TensorRT-LLM on H100 GPUs provides up to an 8x performance speedup compared to prior generation GPUs running GPT-J 6B without the software.

The software got its start in NVIDIA’s work accelerating and optimizing LLM inference with leading companies including Meta, AnyScale, Cohere, Deci, Grammarly, Mistral AI, MosaicML (now part of Databricks), OctoML, Tabnine and Together AI.

MosaicML added features that it needs on top of TensorRT-LLM and integrated them into its existing serving stack. “It’s been an absolute breeze,” said Naveen Rao, vice president of engineering at Databricks.

“TensorRT-LLM is easy-to-use, feature-packed and efficient,” Rao said. “It delivers state-of-the-art performance for LLM serving using NVIDIA GPUs and allows us to pass on the cost savings to our customers.”

TensorRT-LLM is the latest example of continuous innovation on NVIDIA’s full-stack AI platform. These ongoing software advances give users performance that grows over time at no extra cost and is versatile across diverse AI workloads.

L4 Boosts Inference on Mainstream Servers 

In the latest MLPerf benchmarks, NVIDIA L4 GPUs ran the full range of workloads and delivered great performance across the board.

For example, L4 GPUs running in compact, 72W PCIe accelerators delivered up to 6x more performance than CPUs rated for nearly 5x higher power consumption.
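Taken at face value, those two figures imply roughly a 30x perf-per-watt advantage. A minimal arithmetic sketch (hypothetical illustration only; the exact CPU wattage and the "up to" figures are assumptions normalized to CPU performance of 1.0):

```python
# Hypothetical perf-per-watt math implied by the comparison above.
gpu_power_w = 72          # L4 PCIe accelerator power, from the text
cpu_power_w = 72 * 5      # assumption: "nearly 5x higher power" taken as exactly 5x
gpu_perf = 6.0            # "up to 6x more performance" (CPU normalized to 1.0)
cpu_perf = 1.0

gpu_perf_per_watt = gpu_perf / gpu_power_w
cpu_perf_per_watt = cpu_perf / cpu_power_w
advantage = gpu_perf_per_watt / cpu_perf_per_watt  # ≈ 30x
```

Under these assumptions, the L4 delivers about 30 times the performance per watt of the comparison CPU.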

In addition, L4 GPUs feature dedicated media engines that, in combination with CUDA software, provide up to 120x speedups for computer vision in NVIDIA’s tests.

L4 GPUs are available from Google Cloud and many system builders, serving customers in industries from consumer internet services to drug discovery.

Performance Boosts at the Edge

Separately, NVIDIA applied a new model compression technology to demonstrate up to a 4.7x performance boost running the BERT LLM on an L4 GPU. The result was in MLPerf’s so-called “open division,” a category for showcasing new capabilities.

The technique is expected to find use across all AI workloads. It can be especially valuable when running models on edge devices constrained by size and power consumption.

In another example of leadership in edge computing, the NVIDIA Jetson Orin system-on-module showed performance increases of up to 84% compared to the prior round in object detection, a computer vision use case common in edge AI and robotics scenarios.

NVIDIA Jetson Orin performance increase on MLPerf inference

The Jetson Orin advance came from software taking advantage of the latest version of the chip’s cores, such as a programmable vision accelerator, an NVIDIA Ampere architecture GPU and a dedicated deep learning accelerator.

Versatile Performance, Broad Ecosystem

The MLPerf benchmarks are transparent and objective, so users can rely on their results to make informed buying decisions. They also cover a wide range of use cases and scenarios, so users know they can get performance that’s both dependable and flexible to deploy.

Partners submitting in this round included cloud service providers Microsoft Azure and Oracle Cloud Infrastructure and system manufacturers ASUS, Connect Tech, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, QCT and Supermicro.

Overall, MLPerf is backed by more than 70 organizations, including Alibaba, Arm, Cisco, Google, Harvard University, Intel, Meta, Microsoft and the University of Toronto.

Read a technical blog for more details on how NVIDIA achieved the latest results.

All the software used in NVIDIA’s benchmarks is available from the MLPerf repository, so everyone can get the same world-class results. The optimizations are continuously folded into containers available on the NVIDIA NGC software hub for GPU applications.

Read More

NVIDIA Partners with India Giants to Advance AI in World’s Most Populous Nation

NVIDIA Partners with India Giants to Advance AI in World’s Most Populous Nation

The world’s largest democracy is poised to transform itself and the world, embracing AI on an enormous scale.

Speaking with the press Friday in Bengaluru, in the context of announcements from two of India’s largest conglomerates, Reliance Industries Limited and Tata Group, NVIDIA founder and CEO Jensen Huang detailed plans to bring AI technology and skills to address the world’s most populous nation’s greatest challenges.

“I think this is going to be one of the largest AI markets in the world,” said Huang, who was wrapping up a week of high-level meetings across the nation, including with Prime Minister Narendra Modi, leading AI researchers, top business leaders, the press and the country’s 4,000-some NVIDIA employees.

The companies will work together to create an AI computing infrastructure and platforms for developing AI solutions. It will be based on NVIDIA technology like the NVIDIA GH200 Grace Hopper Superchip and NVIDIA DGX Cloud.

GH200 marks a fundamental shift in computing architecture that provides exceptional performance and massive memory bandwidth, while DGX Cloud, an AI supercomputing service in the cloud, makes it easier for enterprises to train their employees in AI technology, access the technology internally and provide generative AI services to customers.

In his exchange with more than a dozen of India’s top tech journalists following the announcement, Huang said computer science expertise is a core competency for India, and that with access to technology and capital, the country is poised to build AI that can solve challenges at home and abroad.

“You have the data, you have the talent,” Huang said. “We are open for business and bring great expertise on building supercomputers.”

During the freewheeling back and forth with the media, Huang emphasized India’s strength in information technology and the potential for AI to accelerate the development of India’s IT industry.

“IT is one of your natural resources. You produce it at an incredible scale. You’re incredibly good at it. You export it all over the world,” Huang said.

India’s “AI Moment”

Earlier, after meeting with many of the region’s top technology leaders — including startup pioneers, AI proponents and key players in India’s digital public infrastructure — Huang hailed “India’s moment,” saying the nation is on the cusp of becoming a global AI powerhouse.

NVIDIA CEO Jensen Huang with Nandan Nilekani, founder of Infosys and founding chairman of UIDAI, during a meeting with key Indian tech leaders.

While India has well-known technical capabilities — distinguished technical universities, 2,500 engineering colleges and an estimated 1.5 million engineers — many of its 1.4 billion people, located across sprawling metropolitan areas and some 650,000 villages, collectively speaking dozens of languages, have yet to fully benefit from this progress.

Applied in the Indian context, AI can help rural farmers interact via cell phones in their local language to get weather information and crop prices. It can help provide, at a massive scale, expert diagnosis of medical symptoms and imaging scans where doctors may not be immediately available. It can better predict cyclonic storms using decades of atmospheric data, enabling those at risk to more quickly evacuate and find shelter.

Reliance Industries and Tata Communications will build and operate state-of-the-art AI supercomputing data centers based on such technology, utilizing it for internal AI development and infrastructure-as-a-service for India’s AI researchers, companies and burgeoning AI startup ecosystem.

That effort, Huang said, during his conversation with the Indian technology press, promises to be part of a process that will turn India into a beacon for AI technology.

“AI could be built in India, used in India, and exported from India,” Huang said.

Read More

How Industries Are Meeting Consumer Expectations With Speech AI

How Industries Are Meeting Consumer Expectations With Speech AI

Thanks to rapid technological advances, consumers have become accustomed to an unprecedented level of convenience and efficiency.

Smartphones make it easier than ever to search for a product and have it delivered right to the front door. Video chat technology lets friends and family on different continents connect with ease. With voice command tools, AI assistants can play songs, initiate phone calls or recommend the best Italian food in a 10-mile radius. AI algorithms can even predict which show users may want to watch next or suggest an article they may want to read before making a purchase.

It’s no surprise, then, that customers expect fast and personalized interactions with companies. According to a Salesforce research report, 83% of consumers expect immediate engagement when they contact a company, while 73% expect companies to understand their unique needs and expectations. Nearly 60% of all customers want to avoid customer service altogether, preferring to resolve issues with self-service features.

Meeting such high consumer expectations places a massive burden on companies in every industry, including on their staff and technological needs — but speech AI can help.

Speech AI can understand and converse in natural language, creating opportunities for seamless, multilingual customer interactions while supplementing employee capabilities. It can power self-serve banking in the financial services industry, enable food kiosk avatars in restaurants, transcribe clinical notes in healthcare facilities or streamline bill payments for utility companies — helping businesses across industries deliver personalized customer experiences.

Speech AI for Banking and Payments

Most people now use both digital and traditional channels to access banking services, creating a demand for omnichannel, personalized customer support. However, higher demand for support coupled with a high agent churn rate has left many financial institutions struggling to keep up with the service and support needs of their customers.

Common consumer frustrations include difficulty with complex digital processes, a lack of helpful and readily available information, insufficient self-service options, long call wait times and communication difficulties with support agents.

According to a recent NVIDIA survey, the top AI use cases for financial service institutions are natural language processing (NLP) and large language models (LLMs). These models automate customer service interactions and process large bodies of unstructured financial data to provide AI-driven insights that support all lines of business across financial institutions — from risk management and fraud detection to algorithmic trading and customer service.

By providing speech-equipped self-service options and supporting customer service agents with AI-powered virtual assistants, banks can improve customer experiences while controlling costs. AI voice assistants can be trained on finance-specific vocabulary and rephrasing techniques to confirm understanding of a user’s request before offering answers.

Kore.ai, a conversational AI software company, trained its BankAssist solution on 400-plus retail banking use cases for interactive voice response, web, mobile, SMS and social media channels. Customers can use a voice assistant to transfer funds, pay bills, report lost cards, dispute charges, reset passwords and more.

Kore.ai’s agent voice assistant also helps live agents provide personalized suggestions so they can resolve issues faster. The solution has been shown to improve live agent efficiency by cutting customer handling time by 40% with a return on investment of $2.30 per voice session.

With such trends, expect financial institutions to accelerate the deployment of speech AI to streamline customer support and reduce wait times, offer more self-service options, transcribe calls to speed loan processing and automate compliance, extract insights from spoken content and boost the overall productivity and speed of operations.

Speech AI for Telecommunications    

Heavy investments in 5G infrastructure and cut-throat competition to monetize and achieve profitable returns on new networks mean that maintaining customer satisfaction and brand loyalty is paramount in the telco industry.

According to an NVIDIA survey of 400-plus industry professionals, the top AI use cases in the telecom industry involve optimizing network operations and improving customer experiences. Seventy-three percent of respondents reported increased revenue from AI.

By using speech AI technologies to power chatbots, call-routing, self-service features and recommender systems, telcos can enhance and personalize customer engagements.

KT, a South Korean mobile operator with over 22 million users, has built GiGA Genie, an intelligent voice assistant that’s been trained to understand and use the Korean language using LLMs. It has already conversed with over 8 million users.

By understanding voice commands, the GiGA Genie AI speaker can support people with tasks like turning on smart TVs or lights, sending text messages or providing real-time traffic updates.

KT has also strengthened its AI-powered Customer Contact Center with transformer-based speech AI models that can independently handle over 100,000 calls per day. A generative AI component of the system autonomously responds to customers with suggested resolutions or transfers them to human agents for more nuanced questions and solutions.

Telecommunications companies are expected to lean into speech AI to build more customer self-service capabilities, optimize network performance and enhance overall customer satisfaction.

Speech AI for Quick-Service Restaurants

The food service industry is expected to reach $997 billion in sales in 2023, and its workforce is projected to grow by 500,000 openings. Meanwhile, elevated demand for drive-thru, curbside pickup and home delivery suggests a permanent shift in consumer dining preferences. This shift creates the challenge of hiring, training and retaining staff in an industry with notoriously high turnover rates — all while meeting consumer expectations for fast and fresh service.

Drive-thru order assistants and in-store food kiosks equipped with speech AI can help ease the burden. For example, speech-equipped avatars can help automate the ordering process by offering menu recommendations, suggesting promotions, customizing options or passing food orders directly to the kitchen for preparation.

HuEx, a Toronto-based startup and member of NVIDIA Inception, has designed a multilingual automated order assistant to enhance drive-thru operations. Known as AIDA, the AI assistant receives and responds to orders at the drive-thru speaker box while simultaneously transcribing voice orders into text for food-prep staff.

AIDA understands 300,000-plus product combinations with 90% accuracy, from common requests such as “coffee with milk” to less common requests such as “coffee with butter.” It can even understand different accents and dialects to ensure a seamless ordering experience for a diverse population of consumers.

Speech AI streamlines the order process by speeding fulfillment, reducing miscommunication and minimizing customer wait times. Early movers will also begin to use speech AI to extract customer insights from voice interactions to inform menu options, make upsell recommendations and improve overall operational efficiency while reducing costs.

Speech AI for Healthcare

In the post-pandemic era, the digitization of healthcare continues to accelerate. Telemedicine and computer vision support remote patient monitoring, voice-activated clinical systems help patients check in and receive zero-touch care, and speech recognition technology supports clinical documentation. Per IDC, 36% of survey respondents indicated that they had deployed digital assistants for patient healthcare.

Automated speech recognition and NLP models can now capture, recognize, understand and summarize key details in medical settings. At the Conference for Machine Intelligence in Medical Imaging, NVIDIA researchers showcased a state-of-the-art pretrained architecture with speech-to-text functionality to extract clinical entities from doctor-patient conversations. The model identifies clinical words — including symptoms, medication names, diagnoses and recommended treatments — and automatically updates medical records.

This technology can ease the burden of manual note-taking and has the potential to accelerate insurance and billing processes while also creating consultation recaps for caregivers. Relieved of administrative tasks, physicians can focus on patient care to deliver superior experiences.

Artisight, an AI platform for healthcare, uses speech recognition to power zero-touch check-ins and speech synthesis to notify patients in the waiting room when the doctor is available. Over 1,200 patients per day use Artisight kiosks, which help streamline registration processes, improve patient experiences, eliminate data entry errors with automation and boost staff productivity.

As healthcare moves toward a smart hospital model, expect to see speech AI play a bigger role in supporting medical professionals and powering low-touch experiences for patients. This may include risk factor prediction and diagnosis through clinical note analysis, translation services for multilingual care centers, medical dictation and transcription and automation of other administrative tasks.

Speech AI for Energy

Faced with increasing demand for clean energy, high operating costs and a workforce retiring in greater numbers, energy and utility companies are looking for ways to do more with less.

To drive new efficiencies, prepare for the future of energy and meet ever-rising customer expectations, utilities can use speech AI. Voice-based customer service can enable customers to report outages, inquire about billing and receive support on other issues without agent intervention. Speech AI can streamline meter reading, support field technicians with voice notes and voice commands to access work orders and enable utilities to analyze customer preferences with NLP.

Minerva CQ, an AI assistant designed specifically for retail energy use cases, supports customer service agents by transcribing conversations into text in real time. Text is fed into Minerva CQ’s AI models, which analyze customer sentiment, intent, propensity and more.

By dynamically listening, the AI assistant populates an agent’s screen with dialogue suggestions, behavioral cues, personalized offers and sentiment analysis. A knowledge-surfacing feature pulls up a customer’s energy usage history and suggests decarbonization options — arming agents with the information needed to help customers make informed decisions about their energy consumption.

With the AI assistant providing consistent, simple explanations on energy sources, tariff plans, billing changes and optimal spending, customer service agents can effortlessly guide customers to the most ideal energy plan. After deploying Minerva CQ, one utility provider reported a 44% reduction in call handling time, a 12.5% increase in first-contact resolution and average savings of $2.67 per call.

Speech AI is expected to continue to help utility providers reduce training costs, remove friction from customer service interactions and equip field technicians with voice-activated tools to boost productivity and improve safety — all while enhancing customer satisfaction.

Speech and Translation AI for the Public Sector

Because public service programs are often underfunded and understaffed, citizens seeking vital services and information are at times left waiting and frustrated. To address this challenge, some federal- and state-level agencies are turning to speech AI to achieve more timely service delivery.

The Federal Emergency Management Agency uses automated speech recognition systems to manage emergency hotlines, analyze distress signals and direct resources efficiently. The U.S. Social Security Administration uses an interactive voice response system and virtual assistants to respond to inquiries about social security benefits and application processes and to provide general information.

The Department of Veterans Affairs has appointed a director of AI to oversee the integration of the technology into its healthcare systems. The VA uses speech recognition technology to power note-taking during telehealth appointments. It has also developed an advanced automated speech transcription engine to help score neuropsychological tests for analysis of cognitive decline in older patients.

Additional opportunities for speech AI in the public sector include real-time language translation services for citizen interactions, public events or visiting diplomats. Public agencies that handle a large volume of calls can benefit from multilingual voice-based interfaces to allow citizens to access information, make inquiries or request services in different languages.

Speech and translation AI can also automate document processing by converting multilingual audio recordings or spoken content into translated text to streamline compliance processes, improve data accuracy and enhance administrative task efficiency. Speech AI additionally has the potential to expand access to services for people with visual or mobility impairments.

Speech AI for Automotive 

From vehicle sales to service scheduling, speech AI can bring numerous benefits to automakers, dealerships, drivers and passengers alike.

Before visiting a dealership in person, more than half of vehicle shoppers begin their search online, then make the first contact with a phone call to collect information. Speech AI chatbots trained on vehicle manuals can answer questions on technological capabilities, navigation, safety, warranty, maintenance costs and more. AI chatbots can also schedule test drives, answer pricing questions and inform shoppers of which models are in stock. This enables automotive manufacturers to differentiate their dealership networks through intelligent and automated engagements with customers.

Manufacturers are building advanced speech AI into vehicles and apps to improve driving experiences, safety and service. Onboard AI assistants can execute natural language voice commands for navigation, infotainment, general vehicle diagnostics and querying user manuals. Without the need to operate physical controls or touch screens, drivers can keep their hands on the wheel and eyes on the road.

Speech AI can help maximize vehicle up-time for commercial fleets. AI trained on technical service bulletins and software update cadences lets technicians provide more accurate quotes for repairs, identify key information before putting the car on a lift and swiftly supply vehicle repair updates to commercial and small business customers.

With insights from driver voice commands and bug reports, manufacturers can also improve vehicle design and operating software. As self-driving cars become more advanced, expect speech AI to play a critical role in how drivers operate vehicles, troubleshoot issues, call for assistance and schedule maintenance.

Speech AI — From Smart Spaces to Entertainment

Speech AI has the potential to impact nearly every industry.

In Smart Cities, speech AI can be used to handle distress calls and provide emergency responders with crucial information. In Mexico City, the United Nations Office on Drugs and Crime is developing a speech AI program to analyze 911 calls to prevent gender violence. By analyzing distress calls, AI can identify keywords, signals and patterns to help prevent domestic violence against women. Speech AI can also be used to deliver multilingual services in public spaces and improve access to transit for people who are visually impaired.

In higher education and research, speech AI can automatically transcribe lectures and research interviews, providing students with detailed notes and saving researchers the time spent compiling qualitative data. Speech AI also facilitates the translation of educational content to various languages, increasing its accessibility.

AI translation powered by LLMs is making it easier to consume entertainment and streaming content online in any language. Netflix, for example, is using AI to automatically translate subtitles into multiple languages. Meanwhile, startup Papercup is using AI to automate video content dubbing to reach global audiences in their local languages.

Transforming Product and Service Offerings With Speech AI

In the modern consumer landscape, it’s imperative that companies provide convenient, personalized customer experiences. Businesses can use NLP and the translation capabilities of speech AI to transform the way they operate and interact with customers in real time on a global scale.

Companies across industries are using speech AI to deliver rapid, multilingual customer service responses, self-service features and information and automation tools to empower employees to provide higher-value experiences.

To help enterprises in every industry realize the benefits of speech, translation and conversational AI, NVIDIA offers a suite of technologies.

NVIDIA Riva, a GPU-accelerated multilingual speech and translation AI software development kit, powers fully customizable real-time conversational AI pipelines for automatic speech recognition, text-to-speech and neural machine translation applications.

NVIDIA Tokkio, built on the NVIDIA Omniverse Avatar Cloud Engine, offers cloud-native services to create virtual assistants and digital humans that can serve as AI customer service agents.

These tools enable developers to quickly deploy high-accuracy applications with the real-time response speed needed for superior employee and customer experiences.

Join the free Speech AI Day on Sept. 20 to hear from renowned speech and translation AI leaders about groundbreaking research, real-world applications and open-source contributions.

Read More

Attention, Please: Focus Entertainment Brings Game Pass Titles to GeForce NOW

Attention, Please: Focus Entertainment Brings Game Pass Titles to GeForce NOW

GeForce NOW brings expanded support for PC Game Pass to members this week. Members can stream eight more games from Microsoft’s subscription service, including four titles from hit publisher Focus Entertainment.

Play A Plague Tale: Requiem, Atomic Heart and more from the GeForce NOW library at up to 4K resolution and 120 frames per second with a GeForce NOW Ultimate membership.

Plus, time’s almost up to take on the Ultimate KovaaK’s Challenge. Get on the leaderboard today — the challenge ends on Thursday, Sept. 21.

Laser-Focused 

Four games from Focus Entertainment’s PC Game Pass catalog join GeForce NOW this week. Members signed up with Microsoft’s subscription service can now stream titles like A Plague Tale: Requiem and Atomic Heart at stunning quality across their devices — without additional purchases.

A Plague Tale Requiem on GeForce NOW
The ultimate test of love and survival on the ultimate cloud gaming service.

Embark on a heartrending journey into a brutal, breathtaking world in the critically acclaimed A Plague Tale: Requiem or explore an alternate history of the 1950s Soviet Union in Atomic Heart. Go off road in SnowRunner or salvage among the stars in Hardspace: Shipbreaker. Members can even bring the squad together for military battles in Insurgency: Sandstorm. There’s something for everyone.

Experience it all with a PC Game Pass subscription, best paired with a GeForce NOW Ultimate membership, which provides up to 4K streaming or up to 240 fps for the ultimate cloud gaming experience.

Endless Adventures

SYNCED on GeForce NOW
Venture into the collapsed world for intense PvE and PvP combat in SYNCED on GeForce NOW.

A new week, a new batch of games. Catch the 17 new games supported in the cloud this week:

  • Chants of Sennaar (New release on Steam, Sept. 5)
  • SYNCED (New release on Steam, Sept. 7)
  • Void Crew (New release on Steam, Sept. 7)
  • Deceive Inc. (Steam)
  • A Plague Tale: Requiem (Xbox)
  • Airborne Kingdom (Epic Games Store)
  • Atomic Heart (Xbox)
  • Call of the Wild: The Angler (Xbox)
  • Danganronpa V3: Killing Harmony (Xbox)
  • Death in the Water (Steam)
  • Hardspace: Shipbreaker (Xbox)
  • Insurgency: Sandstorm (Xbox)
  • Monster Sanctuary (Xbox)
  • Saints Row (Steam)
  • Shadowrun: Hong Kong – Extended Edition (Xbox)
  • SnowRunner (Xbox)
  • War for the Overworld (Steam)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More