Consider the days after a hurricane strikes. Trees and debris are blocking roads, bridges are destroyed, and sections of roadway are washed out. Emergency managers soon face a bevy of questions: How can supplies get delivered to certain areas? What’s the best route for evacuating survivors? Which roads are too damaged to remain open?
Without concrete data on the state of the road network, emergency managers often have to base their answers on incomplete information. The Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory hopes to use its airborne lidar platform, paired with artificial intelligence (AI) algorithms, to fill this information gap.
“For a truly large-scale catastrophe, understanding the state of the transportation system as early as possible is critical,” says Chad Council, a researcher in the group. “With our particular approach, you can determine road viability, do optimal routing, and also get quantified road damage. You fly it, you run it, you’ve got everything.”
Since the 2017 hurricane season, the team has been flying its advanced lidar platform over stricken cities and towns. Lidar works by pulsing photons down over an area and measuring the time it takes for each photon to bounce back to the sensor. These time-of-arrival data points paint a 3D “point cloud” map of the landscape — every road, tree, and building — to within about a foot of accuracy.
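To make the time-of-flight principle concrete, here is a minimal Python sketch of the core conversion from a photon's round-trip time to range. It is illustrative only; real lidar processing also accounts for scan geometry, platform motion, and sensor calibration.

```python
# Minimal sketch: converting a photon's round-trip travel time to range.
# Real systems also correct for scan geometry, platform motion, and
# sensor calibration, all omitted here.

C = 299_792_458.0  # speed of light, meters per second

def round_trip_time_to_range(t_seconds: float) -> float:
    """One-way distance implied by a photon's round-trip travel time."""
    return C * t_seconds / 2.0

# A return arriving ~6.7 microseconds after the pulse implies a surface
# roughly 1 kilometer below the sensor.
print(round_trip_time_to_range(6.7e-6))  # ~1004.3 meters
```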
To date, they’ve mapped huge swaths of the Carolinas, Florida, Texas, and all of Puerto Rico. In the immediate aftermath of hurricanes in those areas, the team manually sifted through the data to help the Federal Emergency Management Agency (FEMA) find and quantify damage to roads, among other tasks. The team’s focus now is on developing AI algorithms that can automate these processes and find ways to route around damage.
What’s the road status?
Information about the road network after a disaster comes to emergency managers in a “mosaic of different information streams,” Council says, namely satellite images, aerial photographs taken by the Civil Air Patrol, and crowdsourcing from vetted sources.
“These various efforts for acquiring data are important because every situation is different. There might be cases when crowdsourcing is fastest, and it’s good to have redundancy. But when you consider the scale of a disaster like Hurricane Maria in Puerto Rico, these various streams can be overwhelming, incomplete, and difficult to coalesce,” he says.
During these times, lidar can act as an all-seeing eye, providing a big-picture map of an area and also granular details on road features. The laboratory’s platform is especially advanced because it uses Geiger-mode lidar, which is sensitive to a single photon. As such, its sensor can collect each of the millions of photons that trickle through openings in foliage as the system is flown overhead. This foliage can then be filtered out of the lidar map, revealing roads that would otherwise be hidden from aerial view.
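The article does not detail the laboratory's filtering method, but a common baseline for separating ground returns from canopy is to grid the point cloud and keep the lowest return in each cell. The sketch below assumes a NumPy array of x, y, z coordinates and is a simplification, not the laboratory's algorithm.

```python
import numpy as np

def lowest_return_per_cell(points: np.ndarray, cell_size: float = 1.0) -> np.ndarray:
    """Keep the lowest-elevation point in each XY grid cell.

    points: (N, 3) array of x, y, z coordinates in meters. Returns a
    subset approximating the ground surface, with most canopy returns
    removed. A crude stand-in for a production ground filter.
    """
    cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
    # Sort so that, within each cell, the lowest z comes first.
    order = np.lexsort((points[:, 2], cells[:, 1], cells[:, 0]))
    sorted_cells = cells[order]
    # Keep only the first (lowest) point of each cell.
    keep = np.ones(len(points), dtype=bool)
    keep[1:] = np.any(sorted_cells[1:] != sorted_cells[:-1], axis=1)
    return points[order][keep]
```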
To provide the status of the road network, the lidar map is first run through a neural network trained to find and extract the roads and to determine their widths. AI algorithms then search these roads and flag anomalies that indicate a road is impassable. For example, a cluster of lidar points extending up and across a road is likely a downed tree, while a sudden drop in elevation is likely a hole or washed-out section of road.
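As a rough illustration of that anomaly flagging, the hypothetical sketch below scans a road's elevation profile for samples that rise well above the expected surface (a possible downed tree) or drop well below it (a possible hole or washout). The running-median surface estimate and the half-meter thresholds are assumptions, not the laboratory's published method.

```python
import numpy as np

def flag_road_anomalies(road_z: np.ndarray,
                        obstruction_height: float = 0.5,
                        drop_depth: float = 0.5) -> np.ndarray:
    """Flag suspect samples along a road's elevation profile.

    road_z: elevations (meters) sampled at regular intervals along a
    road centerline extracted by the road-finding network. Thresholds
    are illustrative.
    """
    k = 25  # samples in the running-median window
    pad = np.pad(road_z, k // 2, mode="edge")
    # Estimate the expected road surface with a running median.
    surface = np.array([np.median(pad[i:i + k]) for i in range(len(road_z))])
    residual = road_z - surface
    blocked = residual > obstruction_height   # e.g., a downed tree
    washed_out = residual < -drop_depth       # e.g., a hole or washout
    return blocked | washed_out
```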
The extracted road network, with its flagged anomalies, is then merged with OpenStreetMap data for the area (OpenStreetMap is an open-access map similar to Google Maps). Emergency managers can use this system to plan routes, or to identify isolated communities that are cut off from the road network. The system will show them the most efficient route between two specified locations, finding detours around impassable roads. Users can also specify how important it is to stay on pavement; given a low priority, the system will provide routes that cut through parking lots or fields.
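One way to picture the routing step is as a weighted shortest-path search in which damaged edges are unusable and off-road edges carry a user-tuned penalty. The sketch below uses the open-source networkx library and invented edge attributes ("length", "damaged", "off_road"); the laboratory's actual router is not described at this level of detail.

```python
import networkx as nx

def plan_route(graph: nx.Graph, start, goal, off_road_penalty: float = 5.0):
    """Shortest path that detours around flagged damage.

    Each edge is assumed to carry "length" (meters), "damaged" (bool,
    from the anomaly detector), and "off_road" (bool, e.g., a parking
    lot or field crossing). Attribute names are illustrative.
    """
    def cost(u, v, data):
        if data.get("damaged"):
            return None  # returning None tells networkx to skip the edge
        weight = data["length"]
        if data.get("off_road"):
            # A larger penalty expresses a stronger preference for pavement.
            weight *= off_road_penalty
        return weight

    return nx.dijkstra_path(graph, start, goal, weight=cost)
```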
This process, from extracting roads to finding damage to planning routes, can be applied to the data at the scale of a single neighborhood or across an entire city.
How fast and how accurate?
To gain a sense of how fast this system works, consider a recent test in which the team flew the lidar platform, processed the data, and produced AI-based analytics within 36 hours. That sortie covered 250 square miles, roughly the area of Chicago, Illinois.
But accuracy is just as important as speed. “As we incorporate AI techniques into decision support, we’re developing metrics to characterize an algorithm’s performance,” Council says.
For finding roads, the algorithm labels each point in the lidar point cloud as “road” or “not road.” The team evaluated the algorithm’s performance against 50,000 square meters of suburban data; the resulting ROC curve showed an 87 percent true positive rate (points correctly labeled as “road”) at a 20 percent false positive rate (non-road points mislabeled as “road”). The false positives are typically areas that geometrically resemble a road but aren’t one.
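For reference, the true and false positive rates behind such an ROC point reduce to simple counts over labeled points, as in this sketch (array names are illustrative):

```python
import numpy as np

def tpr_fpr(pred_road: np.ndarray, is_road: np.ndarray):
    """True/false positive rates for per-point road labels.

    pred_road: boolean predictions over lidar points.
    is_road: boolean ground-truth annotations for the same points.
    """
    tp = np.sum(pred_road & is_road)    # road points correctly labeled
    fp = np.sum(pred_road & ~is_road)   # non-road points labeled "road"
    tpr = tp / np.sum(is_road)          # 0.87 in the suburban evaluation
    fpr = fp / np.sum(~is_road)         # 0.20 before OpenStreetMap masking
    return tpr, fpr
```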
“Because we have another data source for identifying the general location of roads, OpenStreetMap, these false positives can be excluded, resulting in a highly accurate 3D point cloud representation of the road network,” says Dieter Schuldt, who has been leading the algorithm-testing efforts.
For the algorithm that detects road damage, the team is still aggregating ground-truth data to evaluate its performance, but preliminary results have been promising. The damage-finding algorithm recently flagged for review a potentially blocked road in Bedford, Massachusetts; the obstruction appeared to be a hole 10 meters wide, 7 meters long, and 1 meter deep. A site visit and a check with the town’s public works department confirmed that construction was blocking the road.
“We actually didn’t go in expecting that this particular sortie would capture examples of blocked roads, and it was an interesting find,” says Bhavani Ananthabhotla, a contributor to this work. “With additional ground truth annotations, we hope to not only evaluate and improve performance, but also to better tailor future models to regional emergency management needs, including informing route planning and repair cost estimation.”
The team is continuing to test, train, and tweak their algorithms to improve accuracy. Their hope is that these techniques may soon be deployed to help answer important questions during disaster recovery.
“We picture lidar as a 3D scaffold that other data can be draped over and that can be trusted,” Council says. “The more trust, the more likely an emergency manager, and a community in general, will use it to make the best decisions they can.”