HPC helps scientists extract data from satellite and drone imagery

Oak Ridge National Laboratory (ORNL) researchers are using machine learning and high-performance computing (HPC) platforms to optimize data extraction from satellite and drone imagery.

“The massive amount of data of course comes [with] unprecedented challenges in machine learning developments and how to leverage existing computational resources to help us to do more effective and efficient remote sensing data analytics work,” Lexie Yang, research scientist at ORNL, said during a Nov. 11 presentation at NVIDIA’s GTC 2021 GPU Technology Conference.

She and Philipe Dias, research associate at ORNL, extracted building footprints and roads from satellite imagery datasets, using building segmentation, which involves labeling pixels of buildings from complex remote sensing images.
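
Building segmentation boils down to per-pixel binary classification: every pixel is scored as building or not. As a minimal sketch of that setup – not the architecture ORNL used – a small fully convolutional network in PyTorch can turn a multi-band tile into a per-pixel building probability map:

```python
import torch
import torch.nn as nn

# Minimal fully convolutional sketch: a 3-band tile in, a per-pixel building
# probability map out. Illustrative only; not the model used at ORNL.
class TinySegmenter(nn.Module):
    def __init__(self, in_bands: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),      # one logit per pixel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))         # building probability per pixel

tile = torch.rand(1, 3, 256, 256)                 # one 256x256 RGB tile
mask = TinySegmenter()(tile)                      # shape (1, 1, 256, 256)
```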

One challenge with this task is the volume of data that needs to be handled. To map Earth’s surface, the researchers use a low resolution of 5 meters per pixel, which means 100 trillion pixels need to be processed, Dias said. More granularity – say, 0.5-meter resolution – results in 90 terabytes of data just for a country the size of Nigeria. Converted to pixels, that’s 32 trillion. But out of those, less than half a percent – 62 billion – are of buildings.
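
The scaling behind those figures is simple but punishing: pixel count grows with the inverse square of the ground resolution, so going from 5-meter to 0.5-meter pixels multiplies the pixel count by 100 for the same area. A rough back-of-envelope sketch (the area used here is illustrative, not a figure from the talk):

```python
# Pixel count scales with the inverse square of resolution, so a 10x finer
# resolution means 100x more pixels for the same area.
def pixels_for(area_km2: float, resolution_m: float) -> float:
    return area_km2 * 1e6 / resolution_m ** 2

for res_m in (5.0, 0.5):
    print(f"1,000 km^2 at {res_m} m/pixel -> {pixels_for(1_000, res_m):.1e} pixels")
# 1,000 km^2 at 5.0 m/pixel -> 4.0e+07 pixels
# 1,000 km^2 at 0.5 m/pixel -> 4.0e+09 pixels
```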

“It becomes kind of a needle in a haystack problem,” Dias said.

A second challenge is generalization, he said. Currently, sensors on satellites and drones collect imagery at various resolutions. Other considerations include the “look angle,” or the angle from which the device captures the image, and domain distribution shifts, in which images collected from the same location at the same look angle but at different points in time can appear different.

“You need to somehow address the domain shift because otherwise the performance of your model is going to be really poor,” he said.

He cited three potential strategies. The first is to label more data for the target domain, but that is the costliest and least scalable approach, he said: “Every new domain you have, you need to annotate more data.”

The second is transfer learning. This means adding a discriminator component and corresponding loss terms to the initial segmentation pipeline to force adversarial learning, which produces a feature representation that maps data from the source domain and data from the target domain so similarly that the discriminator cannot tell them apart. The idea is that if building segmentation is then layered on top, the segmentation is good enough for both source and target domains, Dias said.
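
A common way to implement that kind of adversarial feature alignment – offered here as a generic sketch, not ORNL’s pipeline – is to train a domain discriminator on the shared features while training the encoder to fool it:

```python
import torch
import torch.nn as nn

# Generic adversarial domain-adaptation sketch (not ORNL's exact pipeline).
# A shared encoder produces features for both domains; a discriminator tries
# to tell source from target, and the encoder is trained to make that hard.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
discriminator = nn.Linear(16, 1)    # predicts: source (0) or target (1) domain
bce = nn.BCEWithLogitsLoss()

src, tgt = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
f_src, f_tgt = encoder(src), encoder(tgt)

# Discriminator loss: learn to separate source features from target features.
d_loss = bce(discriminator(f_src.detach()), torch.zeros(8, 1)) + \
         bce(discriminator(f_tgt.detach()), torch.ones(8, 1))

# Adversarial loss for the encoder: make target features indistinguishable
# from source ones. The segmentation loss on labeled source data (omitted
# here) is trained jointly; in practice the two updates alternate, or a
# gradient-reversal layer is used.
adv_loss = bce(discriminator(f_tgt), torch.zeros(8, 1))
```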

The third option is to create specialist models by partitioning manifold spaces. This means feeding a data collection through an ML model to extract its features and then mapping those features into a visualization. “By doing so, you are able to start to see some structure, some patterns in this data,” Dias said.
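
One way to realize that partitioning – a sketch that assumes a generic pretrained feature extractor rather than the project’s actual model – is to embed each image, cluster the embeddings into buckets and project them to two dimensions to inspect the structure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Sketch of partitioning an image collection in feature space. `features`
# stands in for embeddings from a pretrained extractor (an assumption; the
# project's actual extractor is not specified here).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 256))     # one 256-d embedding per image

buckets = KMeans(n_clusters=8, random_state=0).fit_predict(features)
coords = PCA(n_components=2).fit_transform(features)   # 2-D view to visualize

print(np.bincount(buckets))                 # images per bucket
```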

This option is the idea behind ReSFlow, “a workflow that breaks the problem of model generalization into a collection of specialized exploitations. ReSFlow partitions imagery collections into homogeneous buckets equipped with exploitation models trained to perform well under each bucket’s specific context. Essentially, ReSFlow aims for generalization through stratification,” according to a paper on it by other ORNL researchers.

A benefit of this method is parallel training and inference: the specialist models can be trained simultaneously, and inference – extracting features from images in the dataset and automatically assigning the images to buckets – can run in parallel as well. The result of this strategy has been a 10% increase in segmentation quality, Dias said.
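
In code, the routing step reduces to assigning each image to its nearest bucket and handing it to that bucket’s specialist model; because the buckets are independent, both training and inference parallelize naturally across them. The sketch below uses hypothetical names, not ReSFlow’s actual API:

```python
import numpy as np

# Simplified sketch of stratified inference: route each image's feature vector
# to the nearest bucket centroid and run that bucket's specialist model.
# `centroids` and `specialists` are hypothetical stand-ins, not ReSFlow's API.
rng = np.random.default_rng(0)
centroids = rng.random((8, 256))                       # one centroid per bucket
specialists = {b: (lambda feat, b=b: f"mask from specialist {b}")
               for b in range(8)}                      # one model per bucket

def route_and_infer(feature: np.ndarray) -> str:
    bucket = int(np.argmin(np.linalg.norm(centroids - feature, axis=1)))
    return specialists[bucket](feature)

print(route_and_infer(rng.random(256)))
```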

“Whenever we develop model-training strategies, even with the most advanced visual extractors, to complete this end-to-end workflow, the last step is to see how we can deploy this trained model at scale,” Yang said.

This is important because of the amount of data, she said, adding that with an efficient model deployment workflow in place, researchers can map out all the buildings within hours or days.

The work has moved from a single workstation to mini HPC clusters and now to full-scale HPC resources to address how to scale the workflow computationally.
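
At that scale the natural pattern is to shard the imagery across nodes and run the trained model on each shard independently. The sketch below shows a generic MPI-style division of work, assuming mpi4py and a hypothetical tile list; it is not ORNL’s deployment code:

```python
# Generic sketch of sharding an inference workload across HPC nodes with MPI.
# Assumes mpi4py and an MPI launcher (e.g., `mpirun -n 8 python infer.py`);
# the tile list and processing step are hypothetical placeholders.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

tiles = [f"tile_{i:06d}.tif" for i in range(100_000)]   # placeholder tile list
my_tiles = tiles[rank::size]                            # round-robin shard per rank

for path in my_tiles:
    ...  # load the tile, run the trained segmentation model, write the mask
```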

“We tried to balance the resources and maximize the utilization across all the computing nodes,” Yang said, while also monitoring GPU and CPU usage and using an NVIDIA profiling tool to identify sources of latency. “We are successfully chopping all the workflow so we can leverage both CPU and GPU simultaneously.”
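
A common way to keep CPUs and GPUs busy at the same time – a generic sketch, not the exact ORNL deployment – is to decode and preprocess tiles in CPU worker processes while the GPU runs inference on the previous batch:

```python
import torch
from torch.utils.data import DataLoader, Dataset

# Generic CPU/GPU overlap sketch (not the ORNL deployment): CPU workers decode
# and preprocess tiles while the GPU runs inference on the previous batch.
class TileDataset(Dataset):
    def __len__(self):
        return 1024
    def __getitem__(self, idx):
        return torch.rand(3, 256, 256)        # stands in for decoding a tile

if __name__ == "__main__":
    loader = DataLoader(TileDataset(), batch_size=32,
                        num_workers=8,        # CPU-side preprocessing workers
                        pin_memory=True)      # faster host-to-GPU copies

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Conv2d(3, 1, 3, padding=1).to(device).eval()

    with torch.no_grad():
        for batch in loader:
            batch = batch.to(device, non_blocking=True)  # overlaps with decoding
            _ = model(batch)
```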

Processing time fell from three weeks to three days.

Another way to address scalability and generalization is to use HPC. To illustrate this, Yang pointed to a project that used no human annotations for roads, instead relying on labels from OpenStreetMap. To address data quality issues and the misalignment between OpenStreetMap and satellite images, ORNL used Auto-Shifting ML. That let researchers process the data from OpenStreetMap and calculate features using vector data to generate higher-quality data for road mapping.
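
Turning OpenStreetMap vector geometries into pixel labels typically means buffering road centerlines to a nominal width and burning them into a raster aligned with the image. The sketch below uses common open-source geospatial libraries and made-up geometries; ORNL’s actual preprocessing is not described in that level of detail:

```python
from shapely.geometry import LineString
from rasterio import features
from rasterio.transform import from_origin

# Sketch: burn OpenStreetMap-style road centerlines into a label mask aligned
# with a 0.5 m/pixel tile. The transform and geometry are made up for
# illustration; they are not from the ORNL project.
transform = from_origin(500000, 4500000, 0.5, 0.5)       # tile georeferencing
roads = [LineString([(500010, 4499990), (500120, 4499900)])]

mask = features.rasterize(
    [(road.buffer(3.0), 1) for road in roads],            # ~6 m wide road ribbon
    out_shape=(256, 256), transform=transform, fill=0, dtype="uint8")

print(mask.sum(), "road pixels labeled")
```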

Without HPC, processing the almost 140,000 training samples would take about 115 days, but with ML and HPC, it took two hours, Yang said.