Keeping an AI on Quakes: Researchers Unveil Deep Learning Model to Improve Forecasts

A research team is aiming to shake up the status quo for earthquake models.

Researchers from the Universities of California at Berkeley and Santa Cruz, and the Technical University of Munich recently released a paper describing a new model that delivers deep learning to earthquake forecasting.

Dubbed RECAST, the model can use larger datasets and offers greater flexibility than the current standard, ETAS, which has improved only incrementally since its development in 1988, the authors argue.

The paper’s authors — Kelian Dascher-Cousineau, Oleksandr Shchur, Emily Brodsky and Stephan Günnemann — trained the model on NVIDIA GPU workstations.

“There’s a whole field of research that explores how to improve ETAS,” said Dascher-Cousineau, a postdoctoral researcher at UC Berkeley. “It’s an immensely useful model that has been used a lot, but it’s been frustratingly hard to improve on it.”

AI Drives Seismology Ahead 

The promise of RECAST is that its model flexibility, self-learning capability and ability to scale will enable it to interpret larger datasets and make better predictions during earthquake sequences, he said.

Model advances with improved forecasts could help agencies such as the U.S. Geological Survey and its counterparts elsewhere offer better information to those who need to know. Firefighters and other first responders entering damaged buildings, for example, could benefit from more reliable forecasts on aftershocks.

“There’s a ton of room for improvement within the forecasting side of things. And for a variety of reasons, our community hasn’t really dove into the machine learning side of things, partly because of being conservative and partly because these are really impactful decisions,” said Dascher-Cousineau.

RECAST Model Moves the Needle

While past work on aftershock prediction has relied on statistical models, those models don’t scale to the larger datasets becoming available from an explosion of newly enhanced data capabilities, according to the researchers.

The RECAST model architecture builds on developments in neural temporal point processes, which are probabilistic generative models for continuous-time event sequences. In a nutshell, the model uses an encoder-decoder neural network architecture to predict the timing of the next event based on the history of past events.
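For context, ETAS itself belongs to the family of self-exciting (Hawkes-style) point processes, in which each past event temporarily raises the rate of future ones; neural temporal point processes generalize that idea by learning the dependence on history. As a hedged illustration only (not the authors’ code, and with made-up parameter values), here is a minimal pure-Python sketch of a classical Hawkes conditional intensity and next-event sampling via Ogata’s thinning:

```python
import math
import random

def intensity(t, history, mu=0.2, alpha=0.8, beta=1.5):
    # Conditional intensity of a simple Hawkes process: a constant
    # background rate mu plus exponentially decaying excitation from
    # each past event. Parameter values are purely illustrative.
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)

def sample_next_event(t, history, rng, mu=0.2, alpha=0.8, beta=1.5):
    # Ogata's thinning: propose candidate times from an upper-bounding
    # Poisson process, then accept with probability lambda(t) / bound.
    # Between events the intensity only decays, so the intensity at the
    # current time is a valid upper bound.
    while True:
        bound = intensity(t, history, mu, alpha, beta)
        t += rng.expovariate(bound)
        if rng.random() <= intensity(t, history, mu, alpha, beta) / bound:
            return t
```

A neural temporal point process replaces the fixed exponential kernel above with a learned encoder-decoder network, which is where RECAST’s added flexibility comes from.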

Dascher-Cousineau said that releasing and benchmarking the model in the paper demonstrates that it can quickly learn to do what ETAS can do, while holding vast potential to do more.

“Our model is a generative model that, just like a natural language processing model, you can generate paragraphs and paragraphs of words, and you can sample it and make synthetic catalogs,” said Dascher-Cousineau. “Part of the paper is there to convince old-school seismologists that this is a model that’s doing the right thing — we’re not overfitting.”

Boosting Earthquake Data With Enhanced Catalogs 

Earthquake catalogs, or records of earthquake data, for particular geographies can be small. That’s because, to this day, many come from seismic analysts who interpret the scribbles of raw data that come from seismometers. But this, too, is an area where AI researchers are building models that autonomously interpret P waves and other signals in the data in real time.

Enhanced data is meanwhile helping to fill the void. Using the labeled data in earthquake catalogs, machine learning engineers are revisiting these sources of raw data and building enhanced catalogs with 10x to 100x the number of earthquakes available for training.

“So it’s not necessarily that we put out more instruments to gather data but rather that we enhance the datasets,” said Dascher-Cousineau.

Applying Larger Datasets to Other Settings

With the larger datasets, the researchers are starting to see improvements from RECAST over the standard ETAS model.

To advance the state of the art in earthquake forecasting, Dascher-Cousineau is working with a team of undergraduates at UC Berkeley to train models on earthquake catalogs from multiple regions for better predictions.

“I have the natural language processing analogies in mind, where it seems very plausible that earthquake sequences in Japan are useful to inform earthquakes in California,” he said. “And you can see that going in the right direction.”

Learn about synthetic data generation with NVIDIA Omniverse Replicator


Brains of the Operation: Atlas Meditech Maps Future of Surgery With AI, Digital Twins

Just as athletes train for a game or actors rehearse for a performance, surgeons prepare ahead of an operation.

Now, Atlas Meditech is letting brain surgeons experience a new level of realism in their pre-surgery preparation with AI and physically accurate simulations.

Atlas Meditech, a brain-surgery intelligence platform, is adopting tools — including the MONAI medical imaging framework and NVIDIA Omniverse 3D development platform — to build AI-powered decision support and high-fidelity surgery rehearsal platforms. Its mission: improving surgical outcomes and patient safety.

“The Atlas provides a collection of multimedia tools for brain surgeons, allowing them to mentally rehearse an operation the night before a real surgery,” said Dr. Aaron Cohen-Gadol, founder of Atlas Meditech and its nonprofit counterpart, Neurosurgical Atlas. “With accelerated computing and digital twins, we want to transform this mental rehearsal into a highly realistic rehearsal in simulation.”

Neurosurgical Atlas offers case studies, surgical videos and 3D models of the brain to more than a million online users. Dr. Cohen-Gadol, also a professor of neurological surgery at Indiana University School of Medicine, estimates that more than 90% of brain surgery training programs in the U.S. — as well as tens of thousands of neurosurgeons in other countries — use the Atlas as a key resource during residency and early in their surgery careers.

Atlas Meditech’s Pathfinder software is integrating AI algorithms that can suggest safe surgical pathways for experts to navigate through the brain to reach a lesion.

And with NVIDIA Omniverse, a platform for connecting and building custom 3D pipelines and metaverse applications, the team aims to create custom virtual representations of individual patients’ brains for surgery rehearsal.

Custom 3D Models of Human Brains

A key benefit of Atlas Meditech’s advanced simulations — either onscreen or in immersive virtual reality — is the ability to customize the simulations, so that surgeons can practice on a virtual brain that matches the patient’s brain in size, shape and lesion position.

“Every patient’s anatomy is a little different,” said Dr. Cohen-Gadol. “What we can do now with physics and advanced graphics is create a patient-specific model of the brain and work with it to see and virtually operate on a tumor. The accuracy of the physical properties helps to recreate the experience we have in the real world during an operation.”

To create digital twins of patients’ brains, the Atlas Pathfinder tool has adopted MONAI Label, which can support radiologists by automatically annotating MRI and CT scans to segment normal structures and tumors.

“MONAI Label is the gateway to any healthcare project because it provides us with the opportunity to segment critical structures and protect them,” said Dr. Cohen-Gadol. “For the Atlas, we’re training MONAI Label to act as the eyes of the surgeon, highlighting what is a normal vessel and what’s a tumor in an individual patient’s scan.”

With a segmented view of a patient’s brain, Atlas Pathfinder can adjust its 3D brain model to morph to the patient’s specific anatomy, capturing how the tumor deforms the normal structure of their brain tissue.

Based on the visualization — which radiologists and surgeons can modify to improve the precision — Atlas Pathfinder suggests the safest surgical approaches to access and remove a tumor without harming other parts of the brain. Each approach links out to the Atlas website, which includes a written tutorial of the operative plan.

“AI-powered decision support can make a big difference in navigating a highly complex 3D structure where every millimeter is critical,” Dr. Cohen-Gadol said.

Realistic Rehearsal Environments for Practicing Surgeons 

Atlas Meditech is using NVIDIA Omniverse to develop a virtual operating room that can immerse surgeons into a realistic environment to rehearse upcoming procedures. In the simulation, surgeons can modify how the patient and equipment are positioned.

Using a VR headset, surgeons will be able to work within this virtual environment, going step by step through the procedure and receiving feedback on how closely they are adhering to the target pathway to reach the tumor. AI algorithms can be used to predict how brain tissue would shift as a surgeon uses medical instruments during the operation, and apply that estimated shift to the simulated brain.

“The power to enable surgeons to enter a virtual, 3D space, cut a piece of the skull and rehearse the operation with a simulated brain that has very similar physical properties to the patient would be tremendous,” said Dr. Cohen-Gadol.

To better simulate the brain’s physical properties, the team adopted NVIDIA PhysX, an advanced real-time physics simulation engine that’s part of NVIDIA Omniverse. Using haptic devices, they were able to experiment with adding haptic feedback to the virtual environment, mimicking the feeling of working with brain tissue.

Envisioning AI, Robotics in the Future of Surgery Training

Dr. Cohen-Gadol believes that in the coming years AI models will be able to further enhance surgery by providing additional insights during a procedure. Examples include warning surgeons about critical brain structures that are adjacent to the area they’re working in, tracking medical instruments during surgery, and providing a guide to next steps in the surgery.

Atlas Meditech plans to explore the NVIDIA Holoscan platform for streaming AI applications to power these real-time, intraoperative insights. Applying AI analysis to a surgeon’s actions during a procedure can provide the surgeon with useful feedback to improve their technique.

In addition to being used for surgeons to rehearse operations, Dr. Cohen-Gadol says that digital twins of the brain and of the operating room could help train intelligent medical instruments such as microscope robots using Isaac Sim, a robotics simulation application developed on Omniverse.

View Dr. Cohen-Gadol’s presentation at NVIDIA GTC.

Subscribe to NVIDIA healthcare news.


Fall in Line for October With Nearly 60 New Games, Including Latest Game Pass Titles to Join the Cloud

October brings more than falling leaves and pumpkin spice lattes for GeForce NOW members. Get ready for nearly 60 new games to stream, including Forza Motorsport and 16 more PC Game Pass titles.

Assassin’s Creed Mirage leads 29 new games hitting the GeForce NOW library this week. In addition, World of Warships players can take on a new challenge to earn in-game rewards.

Leap Into the Cloud

Assassin's Creed Mirage on GeForce NOW
Nothing is true. Everything is permitted … in the cloud.

It’s not an illusion — Ubisoft’s Assassin’s Creed Mirage launches in the cloud this week. Mirage was created as an homage to the first Assassin’s Creed games and pays tribute to the series’ well-loved roots.

Join the powerful proto-Assassin order — the Hidden Ones — as a 17-year-old street thief named Basim Ibn Is’haq as he learns to become a master assassin. Stalk the streets of a bustling and historically accurate ninth-century Baghdad — the perfect urban setting to seamlessly parkour across rooftops, scale tall towers and flee guards while uncovering a conspiracy that threatens the city and Basim’s future destiny.

Take a Leap of Faith into a GeForce NOW Ultimate membership and explore this new open world at up to 4K resolution and 120 frames per second. Ultimate members get exclusive access to GeForce RTX 4080 servers in the cloud, making it the easiest upgrade around.

No Tricks, Only Treats

Don’t be spooked — GeForce NOW has plenty of treats for members this month. More PC Game Pass games are coming soon to the cloud, including Forza Motorsport from Turn 10 Studios and Xbox Game Studios and the Dishonored series from Arkane and Bethesda.

Catch some action (with a little stealth, magic and combat mixed in) with the Dishonored franchise. Dive into a struggle of power and revenge that revolves around the assassination of the Empress of the Isles. Members can follow the whole story starting with the original Dishonored game, up through the latest entry, Dishonored: Death of an Outsider, when the series launches in the cloud this month.

Jump into all the action with an Ultimate or Priority account today, for higher performance and faster access to stream over 1,700 games.

Check out the spooktacular list for October:

  • Star Trek: Infinite (New release on Steam, Oct. 12)
  • Lords of the Fallen (New release on Steam and Epic Games Store, Oct. 13)
  • Wizard with a Gun (New release on Steam, Oct. 17)
  • Alaskan Road Truckers (New release on Steam and Epic Games Store, Oct. 18)
  • Hellboy: Web of Wyrd (New release on Steam, Oct. 18)
  • HOT WHEELS UNLEASHED 2 – Turbocharged (New release on Steam, Oct. 19)
  • Laika Aged Through Blood (New release on Steam, Oct. 19)
  • Cities: Skylines II (New release on Steam, Xbox and available on PC Game Pass, Oct. 24)
  • Ripout (New release on Steam, Oct. 24)
  • War Hospital (New release on Steam, Oct. 26)
  • Alan Wake 2 (New release on Epic Games Store, Oct. 26)
  • Headbangers: Rhythm Royale (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • Jusant (New release on Steam, Xbox and available on PC Game Pass, Oct. 31)
  • Bad North (Xbox, available on Microsoft Store)
  • Daymare 1994: Sandcastle (Steam)
  • For The King (Xbox, available on Microsoft Store)
  • Forza Motorsport (Steam, Xbox and available on PC Game Pass)
  • Heretic’s Fork (Steam)
  • Moonbreaker (Steam)
  • Metro Simulator 2 (Steam)
  • Narita Boy (Xbox, available on Microsoft Store)
  • Sifu (Xbox, available on Microsoft Store)
  • StalCraft (Steam)
  • Star Renegades (Xbox, available on Microsoft Store)
  • Streets of Rogue (Xbox, available on Microsoft Store)
  • Supraland (Xbox, available on Microsoft Store)
  • The Surge (Xbox, available on Microsoft Store)
  • Tiny Football (Steam)
  • Vampire Survivors (Steam and Xbox, available on PC Game Pass)
  • VEILED EXPERTS (Steam)
  • Yes, Your Grace (Xbox, available on Microsoft Store)

Come Sail Away

A new challenge awaits on the open sea.

World of Warships is launching a new in-game event this week exclusive to GeForce NOW members. From Oct. 5-9, those streaming the game on GeForce NOW will be prompted to complete a special in-game challenge chain, only available from the cloud, to earn economic reward containers and one-day GeForce NOW Priority trials. Aspiring admirals can learn more about these challenges on the World of Warships blog and social channels.

Those new to World of Warships can activate the invite code “GEFORCENOW” in the game starting today to claim exclusive rewards, including a seven-day Premium World of Warships account, 300 doubloons, credits and economic boosters. Once 15 battles are completed, players can choose one of the following tech tree ships to speed up game progress: Japanese destroyer Isokaze, American cruiser Phoenix, German battleship Moltke or British aircraft carrier Hermes.

Age Of Empires II on GeForce NOW
Conquer the cloud.

The leaves may be falling, but new games are always coming to the cloud. Dive into the action now with 29 new games this week:

  • Battle Shapers (New release on Steam, Oct. 3)
  • Disgaea 7: Vows of the Virtueless (New release on Steam, Oct. 3)
  • Station to Station (New release on Steam, Oct. 3)
  • The Lamplighter’s League (New release on Steam, Xbox and available on PC Game Pass, Oct. 3)
  • Thief Simulator 2 (New release on Steam, Oct. 4)
  • Heads Will Roll: Reforged (New release on Steam, Oct. 4)
  • Assassin’s Creed Mirage (New release on Ubisoft, Oct. 5)
  • Age of Empires II: Definitive Edition (Xbox, available on PC Game Pass)
  • Arcade Paradise (Xbox, available on PC Game Pass)
  • The Ascent (Xbox, available on Microsoft Store)
  • Citizen Sleeper (Xbox, available on PC Game Pass)
  • Dicey Dungeons (Xbox, available on PC Game Pass)
  • Godlike Burger (Epic Games Store)
  • Greedfall (Xbox, available on Microsoft Store)
  • Hypnospace Outlaw (Xbox, available on PC Game Pass)
  • Killer Frequency (Xbox, available on Microsoft Store)
  • Lonely Mountains: Downhill (Xbox, available on PC Game Pass)
  • Metro 2033 Redux (Xbox, available on Microsoft Store)
  • Metro: Last Light Redux (Xbox, available on Microsoft Store)
  • MudRunner (Xbox, available on Microsoft Store)
  • Potion Craft: Alchemist Simulator (Xbox, available on PC Game Pass)
  • Shadow Gambit: The Cursed Crew (Epic Games Store)
  • Slayers X: Terminal Aftermath: Vengance of the Slayer (Xbox, available on PC Game Pass)
  • Soccer Story (Xbox, available on PC Game Pass)
  • SOMA (Xbox, available on PC Game Pass)
  • Space Hulk: Tactics (Xbox, available on Microsoft Store)
  • SpiderHeck (Xbox, available on PC Game Pass)
  • SUPERHOT: MIND CONTROL DELETE (Xbox, available on Microsoft Store)
  • Surviving Mars (Xbox, available on Microsoft Store)

Surprises in September

On top of the 24 games announced in September, an additional 45 joined the cloud last month:

  • Void Crew (New release on Steam, Sept. 7)
  • Tavernacle! (New release on Steam, Sept. 11)
  • Gunbrella (New release on Steam, Sept. 13)
  • HumanitZ (New release on Steam, Sept. 18)
  • These Doomed Isles (New release on Steam, Sept. 25)
  • Overpass 2 (New release on Steam, Sept. 28)
  • 911 Operator (Epic Games Store)
  • A Plague Tale: Requiem (Xbox)
  • Amnesia: The Bunker (Xbox, available on PC Game Pass)
  • Airborne Kingdom (Epic Games Store)
  • Atomic Heart (Xbox)
  • BlazBlue: Cross Tag Battle (Xbox, available on PC Game Pass)
  • Bramble: The Mountain King (Xbox, available on PC Game Pass)
  • Call of the Wild: The Angler (Xbox)
  • Chained Echoes (Xbox, available on PC Game Pass)
  • Danganronpa V3: Killing Harmony (Xbox)
  • Descenders (Xbox, available on PC Game Pass)
  • Doom Eternal (Xbox, available on PC Game Pass)
  • Dordogne (Xbox, available on PC Game Pass)
  • Eastern Exorcist (Xbox, available on PC Game Pass)
  • Figment 2: Creed Valley (Xbox, available on PC Game Pass)
  • Hardspace: Shipbreaker (Xbox)
  • Insurgency: Sandstorm (Xbox)
  • I Am Fish (Xbox)
  • Last Call BBS (Xbox)
  • The Legend of Tianding (Xbox, available on PC Game Pass)
  • The Matchless Kungfu (Steam)
  • Mechwarrior 5: Mercenaries (Xbox, available on PC Game Pass)
  • Monster Sanctuary (Xbox)
  • Opus Magnum (Xbox)
  • Pizza Possum (New release on Steam, Sept. 28)
  • A Plague Tale: Innocence (Xbox)
  • Quake II (Steam, Epic Games Store and Xbox, available on PC Game Pass)
  • Remnant II (Epic Games Store)
  • Road 96 (Xbox)
  • Shadowrun: Hong Kong – Extended Edition (Xbox)
  • SnowRunner (Xbox)
  • Soulstice (New release on Epic Games Store, free on Sept. 28)
  • Space Hulk: Deathwing – Enhanced Edition (Xbox)
  • Spacelines from the Far Out (Xbox)
  • Superhot (Xbox)
  • Totally Reliable Delivery Service (Xbox, available on PC Game Pass)
  • Vampyr (Xbox)
  • Warhammer 40,000: Battlesector (Xbox, available on PC Game Pass)
  • Yooka-Laylee and the Impossible Lair (Xbox)

Halo Infinite and Kingdoms Reborn didn’t make it in September. Stay tuned to GFN Thursday for more updates.

What are you planning to play this weekend? Let us know on Twitter or in the comments below.


A Mine-Blowing Breakthrough: Open-Ended AI Agent Voyager Autonomously Plays ‘Minecraft’

For NVIDIA Senior AI Scientist Jim Fan, the video game Minecraft served as the “perfect primordial soup” for his research on open-ended AI agents.

In the latest AI Podcast episode, host Noah Kravitz spoke with Fan about using large language models to create AI agents — specifically Voyager, an AI bot built with GPT-4 that can autonomously play Minecraft.

AI agents are models that “can proactively take actions and then perceive the world, see the consequences of its actions, and then improve itself,” Fan said. Many current AI agents are programmed to achieve specific objectives, such as beating a game as quickly as possible or answering a question. They can work autonomously toward a particular output but lack broader decision-making agency.

Fan wondered if it was possible to have a “truly open-ended agent that can be prompted by arbitrary natural language to do open-ended, even creative things.”

But he needed a flexible playground in which to test that possibility.

“And that’s why we found Minecraft to be almost a perfect primordial soup for open-ended agents to emerge, because it sets up the environment so well,” he said. Minecraft at its core, after all, doesn’t set a specific key objective for players other than to survive and freely explore the open world.

That became the springboard for Fan’s project, MineDojo, which eventually led to the creation of the AI bot Voyager.

“Voyager leverages the power of GPT-4 to write code in JavaScript to execute in the game,” Fan explained. “GPT-4 then looks at the output, and if there’s an error from JavaScript or some feedback from the environment, GPT-4 does a self-reflection and tries to debug the code.”

The bot learns from its mistakes and stores the correctly implemented programs in a skill library for future use, allowing for “lifelong learning.”
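The loop described above (generate code, execute it, reflect on the feedback, and store working programs in a skill library) can be sketched generically. This is an illustrative stand-in, not Voyager’s actual implementation; `generate` and `execute` are hypothetical placeholders for the GPT-4 call and the Minecraft environment:

```python
def refine(generate, execute, max_rounds=3):
    # Iterative self-refinement loop (illustrative sketch only).
    # generate(feedback) returns a candidate program; execute(program)
    # returns (success, feedback). Programs that run correctly are
    # stored in a skill library for reuse, enabling "lifelong learning."
    skill_library = []
    feedback = None
    for _ in range(max_rounds):
        program = generate(feedback)          # model proposes code
        success, feedback = execute(program)  # run it in the environment
        if success:
            skill_library.append(program)     # keep the working skill
            return program, skill_library
    return None, skill_library                # gave up after max_rounds
```

In this sketch the error message from a failed run is fed straight back into the next generation call, mirroring the self-reflection step Fan describes.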

In-game, Voyager can autonomously explore for hours, adapting its decisions based on its environment and developing skills to combat monsters and find food when needed.

“We see all these behaviors come from the Voyager setup, the skill library and also the coding mechanism,” Fan explained. “We did not preprogram any of these behaviors.”

He then spoke more generally about the rise and trajectory of LLMs. He foresees strong applications in software, gaming and robotics and increasingly pressing conversations surrounding AI safety.

Fan encourages those looking to get involved and work with LLMs to “just do something,” whether that means using online resources or experimenting with beginner-friendly, CPU-based AI models.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.



How AI Helps Fight Wildfires in California

California has a new weapon against the wildfires that have devastated the state: AI.

A freshly launched system powered by AI trained on NVIDIA GPUs promises to provide timely alerts to first responders across the Golden State every time a blaze ignites.

The ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego, uses advanced AI developed by DigitalPath.

Harnessing the raw power of NVIDIA GPUs and aided by a network of thousands of cameras dotting the Californian landscape, DigitalPath has refined a convolutional neural network to spot signs of fire in real time.

A Mission That’s Close to Home

DigitalPath CEO Jim Higgins said it’s a mission that means a lot to the 100-person technology company, which is nestled in the Sierra Nevada foothills in Chico, Calif., a short drive from the town of Paradise, where the state’s deadliest wildfire killed 85 people in 2018.

“It’s one of the main reasons we’re doing this,” Higgins said of the wildfire, the deadliest and most destructive in the history of the most populous U.S. state. “We don’t want people to lose their lives.”

The ALERTCalifornia initiative is based at UC San Diego’s Jacobs School of Engineering, the Qualcomm Institute and the Scripps Institution of Oceanography.

The program manages a network of thousands of monitoring cameras and sensor arrays and collects data that provides actionable, real-time information to inform public safety.

The AI program started in June and was initially deployed in six of CAL FIRE’s command centers. This month it expanded to all of CAL FIRE’s 21 command centers.

ALERTCalifornia, powered by DigitalPath, can detect fires from cameras positioned across the Golden State.

DigitalPath began by building out a management platform for a network of cameras used to confirm California wildfires after a 911 call.

The company quickly realized there would be no way for people to examine images from the thousands of cameras relaying data to the system every 10 to 15 seconds.

So Ethan Higgins, the company’s system architect, turned to AI.

The team began by training a convolutional neural network on a cloud-based system running an NVIDIA A100 Tensor Core GPU and later transitioned to a system running on eight A100 GPUs.

The AI model is crucial: the system sees almost 8 million images a day streaming in from over 1,000 first-party cameras, primarily in California, and thousands more from third-party sources nationwide, he said.

Impact of Wildfires

All anomalies being tracked throughout California as of Sept. 20, 2023. Image Credit: DigitalPath

It’s arriving just in time.

Wildfires have ravaged California over the past decade, burning millions of acres of land, destroying thousands of homes and businesses and claiming hundreds of lives.

According to CAL FIRE, in 2020 alone, the state experienced five of its six largest and seven of its 20 most destructive wildfires.

And the total dollar damage of wildfires in California from 2019 to 2021 was estimated at over $25 billion.

The new system promises to give first responders a crucial tool to prevent such conflagrations.

In fact, during a recent interview with DigitalPath, the system detected two separate fires in Northern California as they ignited.

Every day, the system detects between 50 and 300 events, offering invaluable real-time information to local first responders.


Beyond Detection: Enhancing Capabilities

Example of multiple cameras detecting a single anomaly. Image Credit: DigitalPath.

But AI is just part of the story.

The system is also a case study in how innovative companies can use AI to amplify their unique capabilities.

One of DigitalPath’s breakthroughs is its system’s ability to recognize the same fire captured from diverse camera angles, filtering imagery down to a human-digestible level. The system distills 8 million daily images into roughly 100 alerts, about 0.00125% of total images captured.

“The system was designed from the start with human processing in mind,” Higgins said, ensuring that authorities receive a single, consolidated notification for every incident.
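As a rough illustration of that consolidation step (a toy heuristic, not DigitalPath’s algorithm), detections of the same fire from multiple cameras can be grouped into a single incident by snapping their coordinates onto a coarse grid:

```python
from collections import defaultdict

def consolidate(detections, cell_deg=0.05):
    # Group per-camera detections into one alert per incident by
    # snapping (lat, lon) to a coarse grid cell. Illustrative only;
    # the cell size of 0.05 degrees is an arbitrary assumption.
    incidents = defaultdict(list)
    for cam_id, lat, lon in detections:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        incidents[key].append(cam_id)
    # One consolidated alert per grid cell, listing contributing cameras.
    return [{"cameras": cams} for cams in incidents.values()]
```

With this kind of grouping, two cameras spotting the same smoke plume produce one notification rather than two, which is the behavior Higgins describes.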

“We’ve got to catch every fire we can,” he added.

Expanding Horizons

DigitalPath eventually hopes to expand its detection technology to help California detect more kinds of natural disasters.

And having proven its worth in California, DigitalPath is now in talks with state and county officials and university research teams across the fire-prone Western United States under its ALERTWest subsidiary.

Their goal: to help partners replicate the success of UC San Diego and ALERTCalifornia, potentially shielding countless lives and homes from the wrath of wildfires.

Featured image credit: SLworking2, via Flickr, Creative Commons license, some rights reserved.


Meet the Maker: Robotics Student Rolls Out Autonomous Wheelchair With NVIDIA Jetson

With the help of AI, robots, tractors and baby strollers — even skate parks — are becoming autonomous. One developer, Kabilan KB, is bringing autonomous-navigation capabilities to wheelchairs, which could help improve mobility for people with disabilities.

The undergraduate from the Karunya Institute of Technology and Sciences in Coimbatore, India, is powering his autonomous wheelchair project using the NVIDIA Jetson platform for edge AI and robotics.

The autonomous motorized wheelchair is connected to depth and lidar sensors — along with USB cameras — which allow it to perceive the environment and plan an obstacle-free path toward a user’s desired destination.

“A person using the motorized wheelchair could provide the location they need to move to, which would already be programmed in the autonomous navigation system or path-planned with assigned numerical values,” KB said. “For example, they could press ‘one’ for the kitchen or ‘two’ for the bedroom, and the autonomous wheelchair will take them there.”

An NVIDIA Jetson Nano Developer Kit processes data from the cameras and sensors in real time. It then uses deep learning-based computer vision models to detect obstacles in the environment.

The developer kit acts as the brain of the autonomous system — generating a 2D map of its surroundings to plan a collision-free path to the destination — and sends updated signals to the motorized wheelchair to help ensure safe navigation along the way.
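As a hedged sketch of that “map the surroundings, plan a collision-free path” step (an illustrative stand-in, not KB’s code), a breadth-first search over a 2D occupancy grid finds a shortest obstacle-free route to the destination:

```python
from collections import deque

def plan_path(grid, start, goal):
    # Breadth-first search over a 2D occupancy grid, where 0 = free
    # cell and 1 = obstacle. Returns a shortest path as a list of
    # (row, col) cells, or None if the goal is unreachable.
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []               # reconstruct by walking backwards
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # no collision-free path exists
```

In a real system the grid would be rebuilt continuously from the depth and lidar data, and the path replanned as obstacles appear.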

About the Maker

KB, who has a background in mechanical engineering, became fascinated with AI and robotics during the pandemic, when he spent his free time seeking out educational YouTube videos on the topics.

He’s now working toward a bachelor’s degree in robotics and automation at the Karunya Institute of Technology and Sciences and aspires to one day launch a robotics startup.

KB, a self-described supporter of self-education, has also received several certifications from the NVIDIA Deep Learning Institute, including “Building Video AI Applications at the Edge on Jetson Nano” and “Develop, Customize and Publish in Omniverse With Extensions.”

Once he learned the basics of robotics, he began experimenting with simulation in NVIDIA Omniverse, a platform for building and operating 3D tools and applications based on the OpenUSD framework.

“Using Omniverse for simulation, I don’t need to invest heavily in prototyping models for my robots, because I can use synthetic data generation instead,” he said. “It’s the software of the future.”

His Inspiration

With this latest NVIDIA Jetson project, KB aimed to create a device that could be helpful for his cousin, who has a mobility disorder, and other people with disabilities who might not be able to control a manual or motorized wheelchair.

“Sometimes, people don’t have the money to buy an electric wheelchair,” KB said. “In India, only upper- and middle-class people can afford them, so I decided to use the most basic type of motorized wheelchair available and connect it to the Jetson to make it autonomous.”

The personal project was funded by the Program in Global Surgery and Social Change, a joint initiative of Boston Children’s Hospital and Harvard Medical School.

His Jetson Project

After purchasing the basic motorized wheelchair, KB connected its motor hub to an NVIDIA Jetson Nano, along with lidar and depth cameras.

He trained the AI algorithms for the autonomous wheelchair using YOLO object detection on the Jetson Nano, as well as the Robot Operating System, or ROS, a popular framework for building robotics applications.

The wheelchair can tap these algorithms to perceive and map its environment and plan a collision-free path.
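To feed the planner, a pipeline like this ultimately has to turn YOLO bounding boxes and depth readings into obstacle positions. A minimal sketch, assuming a pinhole-camera model; the image width and the 62.2-degree horizontal field of view are hypothetical values, not the project's actual camera specs:

```python
def bbox_to_obstacle(bbox, depth_m, image_width=640, hfov_deg=62.2):
    """Convert a YOLO-style bounding box plus a depth reading into an
    obstacle bearing (degrees off the optical axis) and range (meters).

    bbox = (x_min, y_min, x_max, y_max) in pixels.
    image_width and hfov_deg are illustrative defaults, not real specs.
    """
    x_center = (bbox[0] + bbox[2]) / 2
    # Offset of the box center from the optical axis, in pixels.
    offset = x_center - image_width / 2
    # Approximate angle per pixel from the horizontal field of view.
    bearing_deg = offset * hfov_deg / image_width
    return bearing_deg, depth_m

# A detection centered in the image, 1.5 m ahead: a dead-ahead obstacle.
bearing, rng = bbox_to_obstacle((300, 100, 340, 400), 1.5)
```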

“The NVIDIA Jetson Nano’s real-time processing speed prevents delays or lags for the user,” said KB, who’s been working on the project’s prototype since June. The developer dives into the technical components of the autonomous wheelchair on his blog. A demo of the autonomous wheelchair has also been featured on the Karunya Innovation and Design Studio YouTube channel.

Looking forward, he envisions his project could be expanded to allow users to control a wheelchair using brain signals from electroencephalograms, or EEGs, that are connected to machine learning algorithms.

“I want to make a product that would let a person with a full mobility disorder control their wheelchair by simply thinking, ‘I want to go there,’” KB said.

Learn more about the NVIDIA Jetson platform.

Read More

CG Geek Makes VFX Look Easy This Week ‘In the NVIDIA Studio’

CG Geek Makes VFX Look Easy This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep-diving on new GeForce RTX 40 Series GPU features, technologies and resources and how they dramatically accelerate content creation.

Releasing a 3D tutorial dubbed The Easiest VFX Tutorial Ever takes supreme confidence and the skills to back it up.

Steve Lund a.k.a. CG Geek — the featured artist of this week’s In the NVIDIA Studio installment — has both in spades. It’s no surprise that over 1 million people have subscribed to his YouTube channel, which features tutorials on animation and visual effects (VFX) as well as select tech reviews.

CG Geek has been a content creator for over 13 years, starting with videos on stop-motion animation before moving on to 3D software. Films and movies are his primary sources of inspiration. He grew up creating short films with his family — experimenting with and implementing video effects and 3D characters — which became a critical foundation for his current work.

Artists can strengthen their creative arsenal with the new Microsoft Surface Laptop Studio 2, available for pickup today. It’s powered by GeForce RTX 4060, GeForce RTX 4050 or NVIDIA RTX 2000 Ada Generation Laptop GPUs with 13th Gen Intel Core processors, up to 64GB of RAM and a 2TB SSD. It features a bright, vibrant 14.4-inch PixelSense Flow touchscreen, a 120Hz refresh rate, and Dolby Vision IQ and HDR to deliver sharper colors.

The versatile Microsoft Surface Laptop Studio 2.

The Easiest VFX Tutorial Ever

CG Geek also happens to be a geek for Blender, the free 3D creation suite whose enthusiasts regularly create impressive, individualistic art.

“I love the amazing Blender 3D community,” he said. “Whenever you need inspiration or creative feedback, they’re the most helpful, kind and talented collective of ever-growing artists.”

CG Geek wanted to make a tutorial that could prove that virtually anyone could get started in VFX with relative ease, from anywhere, at any time.

Work on VFX from anywhere — even the outdoors.

The first step, he instructs, is to capture video footage. To keep things simple, CG Geek recommends mounting a camera or mobile device to a tripod. Note that the camera lens determines the focal length and sensor size — critical details to input in Blender later in the process.

Keep track of the camera’s focal length and sensor size.
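Focal length and sensor size matter because together they fix the camera's field of view, which Blender's motion-tracking solver must reproduce to match the virtual camera to the real one. For a simple pinhole model:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Horizontal field of view implied by a lens focal length and a
    camera sensor width -- the two values entered in Blender when
    matching the virtual camera to the footage."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 50 mm lens on a full-frame (36 mm wide) sensor covers about 39.6 degrees.
fov = horizontal_fov(50, 36)
```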

Keep a close eye on the lighting in the video footage, noting shadows and light intensity. It also helps to snap a straight-down photo of the surface the 3D element will occupy; this captures light bounces and helps create more realistic shadows.

Seasoned visual effects artists can capture and scan the entire 3D area.

Next, secure a 3D model. Create one with guidance from an NVIDIA Studio blog or watch detailed tutorials on the Studio YouTube channel. Alternatively, look online for a 3D model equipped with basic physically based rendering materials, as well as a roughness and normal map.

Sketchfab is an excellent resource for acquiring 3D models.

Next, combine the video footage and 3D materials. Open Blender, import the video footage and line up the 3D grid floor to the surface where the model will be presented. The 3D grid doubles as a shadow catcher that will grab the shadows being cast from the 3D elements. With an added texture, lighting will bounce back against the object, resulting in heightened realism.

The 3D grid floor will determine where the 3D model will be placed.

Then, light the 3D model to match the video footage. Most commonly, this is achieved by acquiring a high-dynamic range image (HDRI), a panorama with lighting data. CG Geek recommends Poly Haven for free, high-quality HDRIs. The key is picking one that resembles the lighting, color, shadow and intensity of the video footage.

Poly Haven has HDRIs for use in VFX work.

Use the HDRI lighting to align the sun’s rotation with the shadows of the footage, adding further realism.

Lighting adjustments in Blender.
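Matching the sun's rotation to the footage can start from basic trigonometry: an object of known height and the length of its shadow imply the sun's elevation angle. A quick sketch of that estimate:

```python
import math

def sun_elevation_deg(object_height_m, shadow_length_m):
    """Estimate the sun's elevation angle from an object of known height
    and the length of the shadow it casts in the footage."""
    return math.degrees(math.atan(object_height_m / shadow_length_m))

# A 1.8 m pole casting a 1.8 m shadow implies a 45-degree sun.
elev = sun_elevation_deg(1.8, 1.8)
```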

From there, import camera information into Blender and render out passes for the 3D model over a transparent background in Cycles. Create as many render layers as possible for added post-render editing flexibility, especially in compositing. Shadowcatcher, glossy passes, Z depth and ambient occlusion layers are recommended for advanced users.

Speedy renders in Blender on NVIDIA Studio hardware.

These layers can then be combined in popular creative apps like Adobe Premiere Pro, After Effects, Blackmagic Design’s DaVinci Resolve or any of the over 100 NVIDIA RTX GPU-accelerated apps. This workflow, in particular, will be completed in Blender’s custom compositor.

Speedy renders in Blender.

Add shadows to the live footage with a multiply overlay. Then, carry over the 3D elements render layer to adjust the intensity of the shadows, helping them mesh better with the video capture. Individual layers can be edited to match the desired tone.

CG Geek made use of Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport. “Rendering in Cycles with multiple render layers and passes, along with the NVIDIA OptiX Denoiser, made animations and early tests a breeze,” he said.

“All my rendering changes can be visualized in real time thanks to the power of NVIDIA Studio before ever even hitting that button.” – CG Geek 

Finally, perform simple masking on areas where the 3D model passes in front of or behind objects. CG Geek’s one-minute YouTube tutorial can help guide this process. DaVinci Resolve or Premiere Pro’s AI-powered magic mask features can further speed the process by automatically masking background elements, saving the effort of painstakingly editing videos frame by frame.

These AI features are all accelerated by the GeForce RTX 4070 GPU equipped in CG Geek’s ASUS Zenbook 14 NVIDIA Studio laptop.

An entire workflow in a single shot.

“NVIDIA Studio laptops powered by RTX GPUs are great for portability and speed in a compact form factor.” – CG Geek

For CG Geek, getting reps in, making mistakes and strengthening weaknesses are the keys to evolving as an artist. “Don’t get hung up on the details!” he stressed. “Give yourself a deadline and then get started on another project.”

For more on the basics of 3D VFX and CGI with Blender, accelerated by the NVIDIA Studio platform and RTX GPUs, watch his featured five-minute tutorial.

Content creator CG Geek.

Check out CG Geek on YouTube.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Heeding Huang’s Law: Video Shows How Engineers Keep the Speedups Coming

Heeding Huang’s Law: Video Shows How Engineers Keep the Speedups Coming

In a talk, now available online, NVIDIA Chief Scientist Bill Dally describes a tectonic shift in how computer performance gets delivered in a post-Moore’s law era.

Each new processor requires ingenuity and effort inventing and validating fresh ingredients, he said in a recent keynote address at Hot Chips, an annual gathering of chip and systems engineers. That’s radically different from a generation ago, when engineers essentially relied on the physics of ever smaller, faster chips.

The team of more than 300 that Dally leads at NVIDIA Research helped deliver a whopping 1,000x improvement in single GPU performance on AI inference over the past decade (see chart below).

It’s an astounding increase that IEEE Spectrum was the first to dub “Huang’s Law” after NVIDIA founder and CEO Jensen Huang. The label was later popularized by a column in the Wall Street Journal.

1000x leap in GPU performance in a decade

The advance was a response to the equally phenomenal rise of large language models used for generative AI that are growing by an order of magnitude every year.

“That’s been setting the pace for us in the hardware industry because we feel we have to provide for this demand,” Dally said.

In his talk, Dally detailed the elements that drove the 1,000x gain.

The largest single contribution, a 16x gain, came from finding simpler ways to represent the numbers computers use in their calculations.
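To see why fewer bits help, consider symmetric 8-bit quantization, one common way to represent numbers more simply. This is an illustrative sketch of the general idea, not the Transformer Engine's actual scheme:

```python
def quantize_int8(values):
    """Symmetric 8-bit quantization: map floats onto integers in
    [-127, 127] with a single per-tensor scale, then dequantize.
    Fewer bits per number means more math per joule -- the idea
    behind low-precision gains (sketch only)."""
    scale = max(abs(v) for v in values) / 127
    quantized = [round(v / scale) for v in values]
    return [q * scale for q in quantized], scale

weights = [0.02, -0.5, 0.31, 1.27, -1.0]
dequant, scale = quantize_int8(weights)
# Worst-case rounding error is half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, dequant))
```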

The New Math

The latest NVIDIA Hopper architecture with its Transformer Engine uses a dynamic mix of eight- and 16-bit floating point and integer math. It’s tailored to the needs of today’s generative AI models. Dally detailed both the performance gains and the energy savings the new math delivers.

Separately, his team helped achieve a 12.5x leap by crafting advanced instructions that tell the GPU how to organize its work. These complex commands help execute more work with less energy.

As a result, computers can be “as efficient as dedicated accelerators, but retain all the programmability of GPUs,” he said.

In addition, the NVIDIA Ampere architecture added structural sparsity, an innovative way to simplify the weights in AI models without compromising the model’s accuracy. The technique brought another 2x performance increase and promises future advances, too, he said.
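Ampere's structured sparsity uses a 2:4 pattern: in each group of four weights, two are zeroed so the hardware can skip them. An illustrative sketch of the pruning step, keeping the two largest by magnitude (a common heuristic, not necessarily the one a given training recipe uses):

```python
def prune_2_of_4(weights):
    """Structured 2:4 sparsity: in every group of four weights, keep
    the two with the largest magnitude and zero the rest
    (illustrative sketch)."""
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude entries in this group.
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]))[-2:]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned

w = [0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.25, 0.01]
sparse = prune_2_of_4(w)
```

Half the multiplications can then be skipped, which is where the roughly 2x speedup comes from.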

Dally described how NVLink interconnects between GPUs in a system and NVIDIA networking among systems compound the 1,000x gains in single GPU performance.

No Free Lunch  

Though NVIDIA migrated GPUs from 28nm to 5nm semiconductor nodes over the decade, that technology only accounted for 2.5x of the total gains, Dally noted.
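Assuming the individual contributions compound independently, the factors Dally cites multiply out to the decade's total:

```python
# Gains cited in the talk, by source (approximate factors):
factors = {
    "number representation": 16,
    "complex instructions": 12.5,
    "structural sparsity": 2,
    "28nm-to-5nm process": 2.5,
}

total = 1
for gain in factors.values():
    total *= gain  # compounds to the 1,000x decade figure
```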

That’s a huge change from computer design a generation ago under Moore’s law, an observation that performance should double every two years as chips become ever smaller and faster.

Those gains were described in part by Dennard scaling, essentially a physics formula defined in a 1974 paper co-authored by IBM scientist Robert Dennard. Unfortunately, the physics of shrinking hit natural limits such as the amount of heat the ever smaller and faster devices could tolerate.

An Upbeat Outlook

Dally expressed confidence that Huang’s law will continue despite diminishing gains from Moore’s law.

For example, he outlined several opportunities for future advances in further simplifying how numbers are represented, creating more sparsity in AI models and designing better memory and communications circuits.

Because each new chip and system generation demands new innovations, “it’s a fun time to be a computer engineer,” he said.

Dally believes the new dynamic in computer design is giving NVIDIA’s engineers the three opportunities they desire most: to be part of a winning team, to work with smart people and to work on designs that have impact.

Read More

Kicking Games Up a Notch: Startup Sports Vision AI to Broadcast Athletics Across the Globe

Kicking Games Up a Notch: Startup Sports Vision AI to Broadcast Athletics Across the Globe

Pixellot is scoring with vision AI — making it easier for organizations to deliver real-time sports broadcasting and analytics to viewers across the globe.

A member of the NVIDIA Metropolis vision AI partner ecosystem, the company based near Tel Aviv offers an AI-powered platform that automates the capturing, streaming and analysis of sporting events.

It’s changing the game for fans, coaches and players of nearly 20 different sports — not just basketball and soccer but also rugby and handball — as it broadcasts events and provides analytics from more than 30,000 venues across 70+ countries. In the U.S., Pixellot powers the broadcasting of over a million games every year through its partnership with the NFHS Network, a leader in streaming live and on-demand high school sports.

Through its broadcasting partners like the NFHS Network, MLB and others, Pixellot provides professional analytics, post-match breakdowns and highlights based on jersey numbers with shot charts and heat maps — which can be especially useful for coaches and players of school and pro sports alike as they study their moves to up their game. It also enables interactive experiences for users, who can manipulate viewframes and cut their own highlights for a game.

Recently, SuperSport Schools, a company based in Cape Town, South Africa, deployed the Pixellot platform to power an app that broadcasts student athletics across the nation, where more than 1,500 high schools are active in sports.

“Our goal is to democratize the coverage of sports with the help of AI and automation,” said Yossi Tarablus, who leads marketing at Pixellot, a member of the NVIDIA Inception program for cutting-edge startups. “Using the NVIDIA Jetson platform for edge AI, Pixellot brings powerful technology for sports broadcasting and analytics to some of the world’s most remote areas.”

How Pixellot Works

During peak sports seasons, about 200,000 games a month are broadcast across the globe using the Pixellot platform, according to Tarablus.

Lightweight Pixellot cameras powered by NVIDIA Jetson capture high-quality video of games, matches and even practices — and livestream them in high definition to users through an app in real time with an overlaid scoreboard, live stats, commentary and more.

The platform creates an automatic viewframe that simulates a camera operator, optimizes videos and corrects scene lighting using NVIDIA RTX ray-tracing technology.
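A virtual camera operator of this kind can be approximated by easing the viewframe toward the center of action rather than snapping to it. A hypothetical sketch using exponential smoothing, not Pixellot's actual algorithm:

```python
def smooth_viewframe(action_centers, alpha=0.2):
    """Exponentially smooth the action's x-coordinate so a virtual
    viewframe pans like a human camera operator instead of jittering.
    (Illustrative only -- not Pixellot's actual method.)"""
    frame_x = action_centers[0]
    track = [frame_x]
    for x in action_centers[1:]:
        frame_x += alpha * (x - frame_x)  # ease toward the action
        track.append(frame_x)
    return track

# The action jumps from x=100 to x=200; the frame eases over instead.
path = smooth_viewframe([100, 200, 200, 200])
```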

In addition, the platform helps organizations and companies monetize sports while making them more accessible to viewers, as it enables over-the-top, or OTT, streaming — direct streaming over the internet without the need for a traditional cable or satellite TV provider.

In all of its camera setups, the Metropolis member runs the NVIDIA DeepStream software development kit for AI-powered video streaming analytics. And the company relies on the NVIDIA TensorRT SDK for high-performance deep learning inference.

“NVIDIA Jetson made it possible for Pixellot to create the most accurate and affordable AI-powered camera solution for broadcasting live sporting events,” said Gal Oz, chief technology officer and cofounder of Pixellot. “The versatility of Jetson modules in terms of camera pipeline, encoders and AI capabilities enabled Pixellot to develop multiple products based on the same hardware and software platform.”

Broadcasting South African School Sports

High-quality, real-time broadcasts of athletics are difficult to produce without access to a slew of graphics and data.

As the NVIDIA Jetson Orin NX module enables AI-powered video processing and GPU-accelerated computing right at the edge — on the field or at courtside — Pixellot lets organizations broadcast sports from anywhere.

“It’s amazing how many people have told us stories about a moment they were empowered to share with their children thanks to SuperSport Schools and Pixellot, because they couldn’t be there physically but were present through live or on-demand video,” said Kelvin Watt, managing director of Capitalize Media and SuperSport Schools, on the Pixellot deployment in South Africa.

The SuperSport Schools app, which is free and recently reached 600,000 subscribers, was the first to broadcast a junior nationals track race in the country.

At the event last year, a student named Viwe Jingqi broke 50-plus-year national records for both the 100- and 200-meter races for South African girls under 18 years old. People all over the world could easily witness these historic victories through the SuperSport Schools app, powered by Pixellot.

Building a Smart Sports City in China

In China, tech giant Baidu and the Chengdu Sports Authority are using Pixellot technology in an initiative to develop a smart sports city, with an initial focus on broadcasting community soccer.

Chengdu, the capital of southwestern China’s Sichuan province, is a sports-oriented city and was the host of this year’s World University Games, an event sanctioned by the International University Sports Federation.

“Pixellot’s AI-driven sports production solutions are a perfect fit for our strategic vision of delivering innovative technology solutions to communities,” said Liu Chuan, solution director of the intelligent cloud sports industry at Baidu.

“Broadcasting community soccer with vision AI is part of the Chengdu initiative’s efforts to emphasize the health benefits of engaging in sports recreationally,” said Tarablus. “It moves the spotlight from pro or Olympic sports to the importance of athletics for all.”

Learn more about the NVIDIA Metropolis application framework, developer tools and partner ecosystem.

Read More

V for Victory: ‘Cyberpunk 2077: Phantom Liberty’ Comes to GeForce NOW

V for Victory: ‘Cyberpunk 2077: Phantom Liberty’ Comes to GeForce NOW

The wait is over. GeForce NOW Ultimate members can experience Cyberpunk 2077: Phantom Liberty on GOG.com at full GeForce RTX 4080 quality, with support for NVIDIA DLSS 3.5 technology.

It’s part of an action-packed GFN Thursday, with 26 more games joining the cloud gaming platform’s library, including Quake II from id Software.

A New Look for Night City

Cyberpunk 2077: Phantom Liberty on GeForce NOW
Experience NVIDIA DLSS 3.5 in Cyberpunk 2077’s spy-thriller expansion.

Take on a thrilling challenge with Phantom Liberty, an all-new adventure for Cyberpunk 2077. When the orbital shuttle of the President of the New United States of America is shot down over the deadliest district of Night City, there’s only one person who can save her. Become V, a cyberpunk for hire, and dive into a tangled web of espionage and political intrigue, unraveling a story that connects the highest echelons of power with the brutal world of black-market mercenaries.

Ultimate members can return to the neon lights of Night City and experience the benefits of NVIDIA DLSS 3.5 and its new Ray Reconstruction technology. These updates enhance the quality of full ray tracing in Cyberpunk 2077’s Ray Tracing: Overdrive Mode, as part of the game’s 2.0 update available for the base game for free and included with the Phantom Liberty expansion. Upgrade to a GeForce NOW Ultimate membership today to see Night City at its best.

Prepare for War

Quake II on GeForce NOW
id Software’s classic first-person shooter is better than ever, streaming from the cloud.

Experience the authentic, enhanced and complete version of id Software’s critically acclaimed first-person shooter, Quake II, now streaming from the cloud.

Humankind is at war with the Strogg, a hostile alien race that’s attacked Earth. In response, humanity launched a strike on the Strogg homeworld — which failed. Outnumbered and outgunned, battle through fortified military installations to shut down the enemy’s war machine. Only then will the fate of humanity be decided.

Quake II includes a new, enhanced version of id Software’s classic, along with both original mission packs: “The Reckoning” and “Ground Zero.” Plus, battle through 28 campaign levels in MachineGames’ all-new “Call of the Machine” expansion and play through the exclusive levels from Quake II 64 for the first time on PC. Blast the Strogg, or your friends, in classic multiplayer modes at up to 4K and 120 frames per second or with ultrawide resolutions for GeForce NOW Ultimate members.

Nice Shootin’, Tex

GeForce NOW Kovaaks Ultimate Challenge
Ultimate leads the way.

The GeForce NOW Ultimate KovaaK’s challenge is complete, and the results are in: Ultimate power means more ultimate wins. Members worked to sharpen their skills and win amazing prizes during the challenge while experiencing the power of Ultimate for themselves. With up to 240 fps streaming at ultra-low latency, gamers are playing up to their ultimate potential, just by upgrading to an Ultimate membership.

The proof is in the data. Check out some powerful stats showing what Ultimate members accomplished:

  • Nearly 15,000 people took on the challenge, playing over 120,000 sessions.
  • Members saw a 2x boost in scores when playing on Ultimate over a Free membership.
  • All of the leaderboard’s top 25 slots were filled with those playing on Ultimate.

But don’t just take our word for it. Here’s what it felt like for TinooQ, who placed third overall in the challenge:

“As a long-time KovaaK’s user, transitioning to this platform was seamless, as the precision and responsiveness was nothing short of extraordinary. 

“The minimal latency and the consistent 240 fps made me think that many people could rely solely on the GeForce NOW Ultimate plan and a monitor, that’s all. I found it perfect for top gaming, saving a lot of money and the PC hardware headaches I suffered when building mine.” — TinooQ

And check out what the press had to say about the Ultimate membership tier:

“GeForce NOW KovaaK’s Challenge Proves Gaming At Glorious 240 FPS Matters” – Hot Hardware

“Hot dang does GeForce Now Ultimate ever deliver.” – Tom’s Guide

“NVIDIA GeForce NOW has held the crown for cloud gaming performance for a while now, and it’s just getting better.” – 9to5Google

“Unsurprisingly, NVIDIA says 98% of the users who tried Kovaak’s Challenge on the GeForce NOW Ultimate tier have seen improvements in their test results over the free tier.” – Wccftech

“Nvidia has taken a different approach to cloud gaming: Instead of boosting their library and settling for 1080p 60fps, Nvidia’s GeForce Now service prioritizes performance, implementing faster graphics cards for players to use.” – SlashGear

“One of the distinguishing factors of GeForce Now is its superior image quality and lack of noticeable input delay.” – Game Is Hard

“NVIDIA’s GeForce NOW is widely considered one of the best cloud gaming platforms in terms of latency, visual fidelity, and overall experience.” — TweakTown

Everyone’s a winner when they play on Ultimate. Upgrade today for the best performance in the cloud, even when streaming popular shooters like Counter-Strike, Destiny 2, Tom Clancy’s Rainbow Six Siege and more, where every frame counts.

Even better, Ultimate members get a free copy of KovaaK’s, the world’s best aim trainer. Don’t miss the chance to claim this reward, available only for a limited time: be on the lookout for an email starting today.

Challenge Accepted

Infinity Strash DRAGON QUEST The Adventure of Dai on GeForce NOW
Be a hero. Be Dai.

Square Enix’s Infinity Strash: DRAGON QUEST The Adventure of Dai leads 26 new titles in the GeForce NOW library this week. In this action role-playing game based on the popular anime and manga series of the same name, Dai and his friends must fight the Dark Lord Hadlar and his evil army of monsters. Fulfill Dai’s dream of becoming a hero in this game, which features fast-paced, dynamic combat, stunning anime-style graphics and a rich storyline.

Here’s the full list of what’s joining this week:

  • These Doomed Isles (New release on Steam, Sept. 25)
  • Paleo Pines (New release on Steam, Sept. 26)
  • Infinity Strash: DRAGON QUEST The Adventure of Dai (New release on Steam, Sept. 28)
  • Pizza Possum (New release on Steam, Sept. 28)
  • Wildmender (New release on Steam, Sept. 28)
  • Overpass 2 (New release on Steam, Sept. 28)
  • Soulstice (New release on Epic Games Store, Free on Sept. 28)
  • Amnesia: Rebirth (Xbox, available on PC Game Pass)
  • BlazBlue: Cross Tag Battle (Xbox, available on PC Game Pass)
  • Bramble: The Mountain King (Xbox, available on PC Game Pass)
  • Broforce (Steam)
  • Don Duality (Steam)
  • Doom Eternal (Xbox, available on PC Game Pass)
  • Dordogne (Xbox, available on PC Game Pass)
  • Dust Fleet (Steam)
  • Eastern Exorcist (Xbox, available on PC Game Pass)
  • Figment 2: Creed Valley (Xbox, available on PC Game Pass)
  • I Am Fish (Xbox)
  • Necesse (Steam)
  • A Plague Tale: Innocence (Xbox)
  • Quake II (Steam, Epic Games Store and Xbox, available on PC Game Pass)
  • Road 96 (Xbox)
  • Spacelines from the Far Out (Xbox)
  • Totally Reliable Delivery Service (Xbox, available on PC Game Pass)
  • Warhammer 40,000: Battlesector (Xbox, available on PC Game Pass)
  • Yooka-Laylee and the Impossible Lair (Xbox)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More