CoANet: Connectivity Attention Network for Road Extraction From Satellite Imagery
Jie Mei1, Rou-Jing Li2, Wang Gao3, Ming-Ming Cheng1
1TKLNDST, CS, Nankai University 2Beijing Normal University
3Science and Technology on Complex System Control and Intelligent Agent Cooperation Laboratory
Abstract
Extracting roads from satellite imagery is a promising approach to updating the dynamic changes of road networks efficiently and in a timely manner. However, the task is challenging due to occlusions caused by other objects and the complex traffic environment, so pixel-based methods often generate fragmented roads and fail to preserve topological correctness. In this paper, motivated by the shapes of roads and their connections in the graph network, we propose a connectivity attention network (CoANet) to jointly learn the segmentation and pair-wise dependencies. Since strip convolutions are better aligned with the shape of roads, which are long-span, narrow, and distributed continuously, we develop a strip convolution module (SCM) that leverages four strip convolutions to capture long-range context information from different directions and avoid interference from irrelevant regions. Besides, considering the occlusions in road regions caused by buildings and trees, a connectivity attention module (CoA) is proposed to explore the relationship between neighboring pixels. The CoA module incorporates graphical information and ensures that the connectivity of roads is better preserved. Extensive experiments on popular benchmarks (the SpaceNet and DeepGlobe datasets) demonstrate that our proposed CoANet establishes new state-of-the-art results.
Paper
- CoANet: Connectivity Attention Network for Road Extraction from Satellite Imagery, Jie Mei, Rou-Jing Li, Wang Gao, Ming-Ming Cheng, IEEE TIP, 2021. [Project Page] [PDF] [bib] [Code]
Method
We propose a connectivity attention network (CoANet) for road extraction from satellite imagery. We first introduce an encoder-decoder network to learn the features of roads, in which an Atrous Spatial Pyramid Pooling (ASPP) module is adopted to enlarge the receptive field of feature points and capture multi-scale features. Since roads are long-span, narrow, and distributed continuously, strip convolutions are better aligned with their shapes. We take advantage of this and develop a strip convolution module (SCM), which is placed in the decoder network. The SCM leverages four strip convolutions along the horizontal, vertical, left-diagonal, and right-diagonal directions to capture long-range context information, and it prevents irrelevant regions from interfering with feature learning (see the sketch below). To alleviate occlusions in road regions caused by buildings and trees, we propose a connectivity attention module (CoA) to explore the relationship between neighboring pixels. The connectivity of a given pixel with its eight neighboring pixels is predicted, which helps preserve the topological correctness of roads.
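To make the SCM concrete, here is a minimal PyTorch sketch, not the released implementation: the horizontal and vertical strips use 1×k and k×1 kernels, and the two diagonal strips are approximated by masking square kernels along the main and anti-diagonals. The kernel length k = 9 and the 1×1 fusion layer are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StripConvModule(nn.Module):
    """Four directional strip convolutions (horizontal, vertical, and the two
    diagonals) whose outputs are concatenated and fused by a 1x1 convolution."""

    def __init__(self, channels: int, k: int = 9):
        super().__init__()
        pad = k // 2
        # Horizontal strip: a 1 x k kernel.
        self.h = nn.Conv2d(channels, channels, (1, k), padding=(0, pad))
        # Vertical strip: a k x 1 kernel.
        self.v = nn.Conv2d(channels, channels, (k, 1), padding=(pad, 0))
        # Diagonal strips: k x k kernels whose weights are masked so that only
        # the main-diagonal / anti-diagonal positions contribute.
        self.d1 = nn.Conv2d(channels, channels, k, padding=pad)
        self.d2 = nn.Conv2d(channels, channels, k, padding=pad)
        eye = torch.eye(k)
        self.register_buffer("mask_d1", eye.view(1, 1, k, k))
        self.register_buffer("mask_d2", eye.flip(-1).view(1, 1, k, k))
        # Fuse the four directional responses back to the input channel width.
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d1 = F.conv2d(x, self.d1.weight * self.mask_d1, self.d1.bias,
                      padding=self.d1.padding)
        d2 = F.conv2d(x, self.d2.weight * self.mask_d2, self.d2.bias,
                      padding=self.d2.padding)
        out = torch.cat([self.h(x), self.v(x), d1, d2], dim=1)
        return self.fuse(out)


# A decoder feature map of shape (N, C, H, W) keeps its shape.
features = torch.randn(1, 64, 128, 128)
print(StripConvModule(64)(features).shape)  # torch.Size([1, 64, 128, 128])
```

Similarly, the eight-neighbor connectivity targets supervised by the CoA module can be derived from a binary road mask. The sketch below uses a neighbor offset d (default 1); the offset actually used in the paper is a hyperparameter and may differ.

```python
import numpy as np

# The eight neighbor directions (dy, dx), ordered row by row.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           (0, -1),           (0, 1),
           (1, -1),  (1, 0),  (1, 1)]


def connectivity_labels(mask: np.ndarray, d: int = 1) -> np.ndarray:
    """Build 8-channel connectivity targets from a binary road mask:
    channel i is 1 where both a pixel and its neighbor at d * OFFSETS[i]
    are road pixels."""
    h, w = mask.shape
    padded = np.pad(mask, d)  # zero padding keeps shifted neighbors in bounds
    labels = np.zeros((8, h, w), dtype=mask.dtype)
    for i, (dy, dx) in enumerate(OFFSETS):
        neighbor = padded[d + dy * d:d + dy * d + h, d + dx * d:d + dx * d + w]
        labels[i] = mask * neighbor
    return labels


road_mask = (np.random.rand(256, 256) > 0.9).astype(np.uint8)
print(connectivity_labels(road_mask).shape)  # (8, 256, 256)
```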
Citation
It would be highly appreciated if you could cite our paper when using our code:
@article{mei2021coanet,
  title={CoANet: Connectivity Attention Network for Road Extraction From Satellite Imagery},
  author={Mei, Jie and Li, Rou-Jing and Gao, Wang and Cheng, Ming-Ming},
  journal={IEEE Transactions on Image Processing},
  volume={30},
  pages={8540--8552},
  year={2021},
  publisher={IEEE}
}
FAQ
Hi, thanks for your excellent work!
In your paper, on page 6, it says: “SpaceNet… The dataset consists of 2,780 images, which are split into 2,213 images for training and 567 images for testing following”.
But I noticed that the data downloaded from AWS has 2,780 images, while only 2,549 of them have a corresponding geojson (used to generate the masks). Did I get a different version?
Thanks for your interest in our paper.
In the data I downloaded, the numbers of images and geojson files are both 2,780. Please check your data.
Thanks for the quick reply! I noticed that SpaceNet has changed their dataset on the AWS server, and some geojson files are missing 🤣. Would you please share the geojson data (a Google Drive link or something)?
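For readers running into the same mismatch, a small script like the one below can count how many images lack a matching geojson. The directory names and file-name pattern are assumptions based on a typical SpaceNet 3 roads download; adjust them to your local layout.

```python
from pathlib import Path

# Hypothetical paths -- point these at your local SpaceNet download.
image_dir = Path("AOI_2_Vegas_Roads_Train/RGB-PanSharpen")
geojson_dir = Path("AOI_2_Vegas_Roads_Train/geojson/spacenetroads")


def img_id(p: Path) -> str:
    # Assumes names like RGB-PanSharpen_AOI_2_Vegas_img1.tif and
    # spacenetroads_AOI_2_Vegas_img1.geojson, so match on the img<id> suffix.
    return p.stem.split("_")[-1]


images = sorted(image_dir.glob("*.tif"))
geojson_ids = {img_id(p) for p in geojson_dir.glob("*.geojson")}
missing = [p.name for p in images if img_id(p) not in geojson_ids]

print(f"{len(images)} images, {len(geojson_ids)} geojson labels, "
      f"{len(missing)} images without a matching geojson")
```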