Damage assessment using Amazon SageMaker geospatial capabilities and custom SageMaker models

In this post, we show how to train, deploy, and predict natural disaster damage with Amazon SageMaker geospatial capabilities, and we use those new capabilities to generate fresh inference data for testing the model. Many government and humanitarian organizations need quick and accurate situational awareness when a disaster strikes. Knowing the severity, cause, and location of damage can inform first responders' strategy and decision-making, whereas a lack of accurate and timely information can contribute to an incomplete or misdirected relief effort.

As the frequency and severity of natural disasters increase, it's important that we equip decision-makers and first responders with fast and accurate damage assessments. In this example, we use geospatial imagery to predict natural disaster damage. Geospatial data can be used in the immediate aftermath of a natural disaster to rapidly identify damage to buildings, roads, or other critical infrastructure. In this post, we show you how to train and deploy a geospatial segmentation model for disaster damage classification. We break the application down into three topics: model training, model deployment, and inference.

Model training

In this use case, we built a custom PyTorch model using Amazon SageMaker for image segmentation of building damage. The geospatial capabilities in SageMaker include trained models for you to use, such as built-in models for cloud segmentation, cloud removal, and land cover segmentation. For this post, we train a custom model for damage segmentation. We first trained the SegFormer model on data from the xView2 competition. SegFormer is a transformer-based architecture introduced in the 2021 paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. It's based on the transformer architectures that are popular in natural language processing workloads; however, the SegFormer architecture is built for semantic segmentation. It combines a transformer-based encoder with a lightweight decoder, which allows for better performance than previous methods while keeping model sizes significantly smaller. Both pre-trained and untrained SegFormer models are available from the popular Hugging Face Transformers library. For this use case, we download a pre-trained SegFormer architecture and train it on a new dataset.
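As a rough illustration (not our exact training code), a pre-trained SegFormer can be loaded from the Hugging Face Transformers library with a new segmentation head sized for the damage classes; the checkpoint name and class mapping below are assumptions for this sketch:

```python
# Hedged sketch: load a pre-trained SegFormer encoder and attach a
# randomly initialized segmentation head for our damage classes.
from transformers import SegformerForSemanticSegmentation

# Assumed class mapping: background plus the four damage levels.
id2label = {
    0: "background",
    1: "no-damage",
    2: "minor-damage",
    3: "major-damage",
    4: "destroyed",
}

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",                # pre-trained MiT-B0 encoder weights
    num_labels=len(id2label),       # segmentation head sized for our classes
    id2label=id2label,
    label2id={name: idx for idx, name in id2label.items()},
)
```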

The dataset used in this example comes from the xView2 data science competition. This competition released the xBD dataset, one of the largest and highest-quality publicly available datasets of high-resolution satellite imagery annotated with building locations and damage scores (classes) before and after natural disasters. The dataset covers 15 countries and 6 types of disasters (earthquake/tsunami, flood, volcanic eruption, wildfire, and wind), with 850,736 building annotations across 45,362 km² of imagery. Each example in the dataset includes a pre-disaster satellite image, a pre-disaster building segmentation mask, a post-disaster satellite image, and a post-disaster building segmentation mask with damage classes. The following image shows an example from the dataset: a post-disaster image with the building damage segmentation mask overlaid.

Figure: Post-disaster image with the building damage segmentation mask overlaid
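For readers who want to work with the raw annotations, the following is a hedged sketch of turning an xBD post-disaster label file into a per-pixel damage mask. The field names (polygons stored as WKT under features.xy, with the damage class in properties["subtype"]) follow the published xBD layout, but verify them against your copy of the dataset:

```python
# Hedged sketch: rasterize xBD building polygons into a damage-class mask.
import json

import numpy as np
from rasterio.features import rasterize
from shapely import wkt

# Assumed mapping from xBD damage subtypes to integer class IDs (0 = background).
DAMAGE_CLASSES = {"no-damage": 1, "minor-damage": 2, "major-damage": 3, "destroyed": 4}

def xbd_label_to_mask(label_path, image_size=(1024, 1024)):
    """Convert one xBD post-disaster label JSON into an (H, W) uint8 damage mask."""
    with open(label_path) as f:
        label = json.load(f)
    shapes = []
    for feature in label["features"]["xy"]:        # polygons in pixel coordinates
        subtype = feature["properties"].get("subtype", "no-damage")
        if subtype in DAMAGE_CLASSES:
            shapes.append((wkt.loads(feature["wkt"]), DAMAGE_CLASSES[subtype]))
    if not shapes:
        return np.zeros(image_size, dtype=np.uint8)
    return rasterize(shapes, out_shape=image_size, fill=0, dtype="uint8")
```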

In this example, we only use the pre- and post-disaster imagery to predict the post-disaster damage classification (segmentation mask); we don't use the pre-disaster building segmentation masks. This approach was selected for simplicity, but there are other ways to approach this dataset. For example, a number of the winning approaches for the xView2 competition used a two-step solution: first predict the pre-disaster building outline segmentation mask, then use the building outlines and the post-disaster images as input for predicting the damage classification. We leave it to the reader to explore other modeling approaches that could improve classification and detection performance.

The pre-trained SegFormer architecture is built to accept a single three-channel image as input and outputs a segmentation mask. There are a number of ways we could have modified the model to accept both the pre- and post-disaster satellite images as input; however, we used a simple stacking technique to combine both images into a single six-channel image. We trained the model using standard augmentation techniques on the xView2 training dataset to predict the post-disaster segmentation mask. Note that we resized all the input images from 1024 to 512 pixels to further reduce the spatial resolution of the training data. The model was trained with SageMaker using a single p3.2xlarge GPU-based instance. An example of the trained model output is shown in the following figures. The first set of images shows the pre- and post-damage images from the validation set.
Figure: Pre- and post-damage images from the validation set

The following figures show the predicted damage mask and ground truth damage mask.
Figure: Predicted damage mask and ground truth damage mask

At first glance, it seems like the model doesn't perform well compared to the ground truth data. Many of the buildings are incorrectly classified, confusing minor damage for no damage and showing multiple classifications for a single building outline. However, one interesting finding when reviewing model performance is that it appears to have learned to localize damage within a building. Each building can be classified into No Damage, Minor Damage, Major Damage, or Destroyed. The predicted damage mask shows that the model has classified the large building in the middle as mostly No Damage, but its top-right corner is classified as Destroyed. This sub-building damage localization can further assist responders by showing the localized damage within each building.
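As a simplified sketch of the training setup described above, the following shows the six-channel stacking of pre- and post-disaster images and a SageMaker training job launched on a single GPU instance. The entry point, S3 paths, framework version, and hyperparameters are placeholders rather than the exact values we used:

```python
# Hedged sketch: stack pre/post images into a six-channel input and launch training.
import numpy as np
from PIL import Image

import sagemaker
from sagemaker.pytorch import PyTorch

def stack_pre_post(pre_path, post_path, size=(512, 512)):
    """Resize both images to 512x512 and stack them into a (6, H, W) float array."""
    pre = np.asarray(Image.open(pre_path).resize(size), dtype=np.float32)
    post = np.asarray(Image.open(post_path).resize(size), dtype=np.float32)
    stacked = np.concatenate([pre, post], axis=-1)   # (H, W, 6)
    return stacked.transpose(2, 0, 1) / 255.0        # (6, H, W), scaled to [0, 1]

# Launch the training job on a single GPU instance.
estimator = PyTorch(
    entry_point="train.py",                          # hypothetical training script
    role=sagemaker.get_execution_role(),
    framework_version="1.12",
    py_version="py38",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    hyperparameters={"epochs": 20, "learning-rate": 5e-5},
)
estimator.fit({"train": "s3://<bucket>/xview2/train"})   # placeholder S3 path
```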

Model deployment

The trained model was then deployed to an asynchronous SageMaker inference endpoint. Note that we chose an asynchronous endpoint to allow for longer inference times, larger payload input sizes, and the ability to scale the endpoint down to zero instances (no charges) when not in use. The following figure shows the high-level code for asynchronous endpoint deployment. We first compress the saved PyTorch state dictionary and upload the compressed model artifacts to Amazon Simple Storage Service (Amazon S3). We create a SageMaker PyTorch model pointing to our inference code and model artifacts. The inference code is required to load and serve our model. For more details on the required custom inference code for a SageMaker PyTorch model, refer to Use PyTorch with the SageMaker Python SDK.
Figure: High-level code for asynchronous endpoint deployment
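A minimal sketch of what that deployment flow might look like with the SageMaker Python SDK; the model artifact path, entry point script, instance type, and bucket names are placeholders:

```python
# Hedged sketch: create a SageMaker PyTorch model and deploy it to an
# asynchronous inference endpoint.
import sagemaker
from sagemaker.async_inference import AsyncInferenceConfig
from sagemaker.pytorch import PyTorchModel

role = sagemaker.get_execution_role()

pytorch_model = PyTorchModel(
    model_data="s3://<bucket>/model/model.tar.gz",   # compressed model artifacts
    role=role,
    entry_point="inference.py",                      # custom model_fn/predict_fn
    framework_version="1.12",
    py_version="py38",
)

predictor = pytorch_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",                  # assumption; size for your workload
    async_inference_config=AsyncInferenceConfig(
        output_path="s3://<bucket>/async-output/",   # where inference results land
    ),
)
```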

The following figure shows the code for the auto scaling policy for the asynchronous inference endpoint.
Figure: Auto scaling policy code for the asynchronous inference endpoint
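A hedged sketch of such a policy using Application Auto Scaling, targeting the queue backlog per instance and allowing the endpoint to scale down to zero; the endpoint name, capacity limits, and thresholds are placeholders:

```python
# Hedged sketch: auto scaling for the asynchronous endpoint based on the
# ApproximateBacklogSizePerInstance metric.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "endpoint/<endpoint-name>/variant/AllTraffic"

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=0,                     # scale to zero when idle
    MaxCapacity=2,
)

autoscaling.put_scaling_policy(
    PolicyName="async-backlog-scaling",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 5.0,            # desired queued requests per instance
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateBacklogSizePerInstance",
            "Namespace": "AWS/SageMaker",
            "Dimensions": [{"Name": "EndpointName", "Value": "<endpoint-name>"}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)
```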

Note that there are other endpoint options, such as real-time, batch transform, and serverless inference, that could be used for your application. Pick the option best suited for your use case, and recall that Amazon SageMaker Inference Recommender is available to help recommend machine learning (ML) endpoint configurations.

Model inference

With the trained model deployed, we can now use SageMaker geospatial capabilities to gather data for inference. SageMaker geospatial capabilities provide several built-in models and operations out of the box. In this example, we use the band stacking operation to stack the red, green, and blue color channels in our earth observation job, which gathers data from the Sentinel-2 dataset. To configure an earth observation job, we need the coordinates of the location of interest and the time range of the observation. With these, we can submit an earth observation job using the stacking feature to produce a color image from the red, green, and blue bands. The following figure shows the job configuration used to generate data from the floods in Rochester, Australia, in mid-October 2022. We use images from before and after the disaster as input to our trained ML model.
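A hedged sketch of a comparable job configuration using the boto3 sagemaker-geospatial client follows. The execution role ARN, the Sentinel-2 raster data collection ARN (discoverable via list_raster_data_collections), the approximate coordinates around Rochester, and the time range are placeholders, and field names should be checked against the current API reference:

```python
# Hedged sketch: start a band stacking earth observation job over the
# area and time range of interest.
import boto3

geospatial = boto3.client("sagemaker-geospatial")

eoj = geospatial.start_earth_observation_job(
    Name="rochester-flood-rgb-stack",
    ExecutionRoleArn="arn:aws:iam::<account-id>:role/<geospatial-role>",
    InputConfig={
        "RasterDataCollectionQuery": {
            "RasterDataCollectionArn": "<sentinel-2-collection-arn>",
            "AreaOfInterest": {
                "AreaOfInterestGeometry": {
                    "PolygonGeometry": {
                        # Rough bounding box around Rochester, Australia ([lon, lat]).
                        "Coordinates": [[
                            [144.65, -36.45], [144.80, -36.45],
                            [144.80, -36.35], [144.65, -36.35],
                            [144.65, -36.45],
                        ]]
                    }
                }
            },
            "TimeRangeFilter": {
                "StartTime": "2022-10-01T00:00:00Z",
                "EndTime": "2022-10-31T23:59:59Z",
            },
        }
    },
    JobConfig={
        "StackConfig": {
            "OutputResolution": {"Predefined": "HIGHEST"},
            "TargetBands": ["red", "green", "blue"],
        }
    },
)
print(eoj["Arn"], eoj["Status"])
```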

After the job configuration is defined, we can submit the job. When the job is complete, we export the results to an Amazon S3 location specified in the export job configuration; note that the results can only be exported after the job has completed. With our new data in Amazon S3, we can get damage predictions using the deployed model. We first read the data into memory and stack the pre- and post-disaster imagery together.
Figure: Code to read the data into memory and stack the pre- and post-disaster imagery
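Continuing from the job started above, the following is a hedged sketch of exporting the results and preparing the six-channel inference payload; the paths, bucket names, and endpoint name are placeholders, and the payload format must match whatever your inference script expects:

```python
# Hedged sketch: export the completed earth observation job to Amazon S3,
# stack the pre- and post-disaster imagery, and invoke the async endpoint.
import boto3
import numpy as np
import rasterio

geospatial = boto3.client("sagemaker-geospatial")

# Export is only possible once the job status is COMPLETED.
geospatial.export_earth_observation_job(
    Arn=eoj["Arn"],
    ExecutionRoleArn="arn:aws:iam::<account-id>:role/<geospatial-role>",
    OutputConfig={"S3Data": {"S3Uri": "s3://<bucket>/eoj-output/"}},
)

def read_rgb(path):
    """Read a stacked RGB GeoTIFF into a (3, H, W) float array."""
    with rasterio.open(path) as src:
        return src.read().astype(np.float32)

# Stack the pre- and post-disaster images into the six-channel model input.
pre = read_rgb("pre_disaster.tif")    # placeholder local paths
post = read_rgb("post_disaster.tif")
model_input = np.concatenate([pre, post], axis=0)

# Asynchronous endpoints read their input from S3, so upload the payload
# and pass its location to the endpoint.
np.save("payload.npy", model_input)
boto3.client("s3").upload_file("payload.npy", "<bucket>", "inference/payload.npy")
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint_async(
    EndpointName="<endpoint-name>",
    InputLocation="s3://<bucket>/inference/payload.npy",
)
```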

The results of the segmentation mask for the Rochester floods are shown in the following images. Here we can see that the model has identified locations within the flooded region as likely damaged. Note also that the spatial resolution of the inference image is different from that of the training data. Increasing the spatial resolution could help model performance; however, this is less of an issue for SegFormer than for other models because of its multiscale architecture.

Figure: Pre- and post-flood imagery

Figure: Results of the segmentation mask for the Rochester floods

Figure: Damage assessment

Conclusion

In this post, we showed how to train, deploy, and predict natural disaster damage with SageMaker geospatial capabilities. We used the new SageMaker geospatial capabilities to generate new inference data to test the model. The code for this post is in the process of being released, and this post will be updated with links to the full training, deployment, and inference code. This application allows first responders, governments, and humanitarian organizations to optimize their response, providing critical situational awareness immediately following a natural disaster. This application is only one example of what is possible with modern ML tools such as SageMaker.

Try SageMaker geospatial capabilities today using your own models; we look forward to seeing what you build next.


About the author

Aaron Sengstacken is a machine learning specialist solutions architect at Amazon Web Services. Aaron works closely with public sector customers of all sizes to develop and deploy production machine learning applications. He is interested in all things machine learning, technology, and space exploration.
