Abstract:
Ground targets in remote sensing images are anisotropically distributed, with large variations in aspect ratio and a wide range of scales, so existing segmentation methods remain inadequate for targets with long, banded structures and for densely scattered discrete objects. To address this problem, a remote sensing image segmentation network that fuses anisotropic context is proposed. The network extracts target gradient information by imposing prior constraints on the parameters of gradient convolution kernels, thereby refining segmentation edges. Modules including a multiscale parallel dilated convolution module and an anisotropic-target composite strip pooling module are designed to capture the anisotropic context of targets at different scales in remote sensing images; the multiscale context is then fused and image details are restored. Experiments on the public Potsdam and Vaihingen datasets show that the proposed anisotropic context fusion network outperforms advanced segmentation networks such as DANet, DeepLabv3+, and EANet, and ablation experiments verify the effectiveness of each module of the network.
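The abstract names an anisotropic-target composite strip pooling module but does not give its exact design. The following is a minimal PyTorch sketch of how such a module is commonly built, assuming conventional strip pooling along horizontal and vertical strips followed by a 1x1 fusion and sigmoid gating; all class and parameter names are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of a composite strip pooling block (illustrative, not the
# authors' code): horizontal and vertical strip averages are expanded back to
# the feature-map size, fused, and used to gate the input features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositeStripPooling(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool width -> 1 (vertical strips)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool height -> 1 (horizontal strips)
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0), bias=False)
        self.conv_w = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1), bias=False)
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Long-range context along the vertical and horizontal directions.
        xh = F.interpolate(self.conv_h(self.pool_h(x)), size=(h, w),
                           mode="bilinear", align_corners=False)
        xw = F.interpolate(self.conv_w(self.pool_w(x)), size=(h, w),
                           mode="bilinear", align_corners=False)
        # Composite: combine both directions and gate the input features.
        attn = torch.sigmoid(self.fuse(xh + xw))
        return x * attn

# Example usage: a 512-channel feature map from a backbone stage.
# feats = torch.randn(2, 512, 64, 64)
# out = CompositeStripPooling(512)(feats)  # same shape as feats
```

The directional pooling in this sketch is what makes strip pooling suited to long, banded targets such as roads: each output position aggregates context along an entire row or column rather than a square neighborhood.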