SEMANTIC SEGMENTATION OF REMOTE SENSING IMAGES BY FUSING ANISOTROPIC CONTEXT

    Abstract: Ground targets in remote sensing images exhibit anisotropic distributions, such as large variations in aspect ratio and a wide range of target scales, and existing segmentation methods remain insufficient for targets with long-range banded structures or densely and discretely distributed objects. To address this problem, a remote sensing image segmentation network that fuses anisotropic context is proposed. The network extracts the gradient information of targets by imposing prior constraints on the gradient convolution kernel parameters, thereby refining segmentation edges. Modules such as multi-scale parallel dilated convolution and anisotropic composite strip pooling are designed to capture the anisotropic context of targets at different scales in remote sensing images; the multi-scale context is then fused and image details are restored. Experiments on the public Potsdam and Vaihingen datasets show that the proposed network outperforms advanced segmentation networks such as DANet, DeepLabv3+, and EANet, and ablation studies verify the effectiveness of each module.
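The composite strip pooling mentioned above pools with 1×W and H×1 windows so that long-range banded targets (roads, rivers) receive context along their full extent. A minimal NumPy sketch of this idea is shown below; the function name `strip_pool`, the additive fusion of the two directional strips, and the sigmoid gating are illustrative assumptions, and the learned 1-D convolutions of the actual module are omitted.

```python
import numpy as np

def strip_pool(x):
    """Simplified anisotropic strip pooling (sketch, not the paper's exact module).

    x: a single-channel feature map of shape (H, W).
    Averages along each axis to form two directional context strips,
    broadcasts them back to (H, W), and gates the input with the result.
    """
    h_strip = x.mean(axis=1, keepdims=True)  # (H, 1): average over each row (1 x W window)
    w_strip = x.mean(axis=0, keepdims=True)  # (1, W): average over each column (H x 1 window)
    fused = h_strip + w_strip                # broadcast to (H, W): additive fusion (assumption)
    gate = 1.0 / (1.0 + np.exp(-fused))      # sigmoid attention over long-range context
    return x * gate
```

Because each output pixel is modulated by the mean of its entire row and column, a thin elongated structure contributes to the context of every pixel it crosses, which square pooling windows of comparable area would miss.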
