Semantic-Focused Adversarial Examples for Anchor-Free Object Detection Models

Abstract: Deep neural networks are vulnerable to adversarial examples. Research on adversarial examples for anchor-free object detectors remains scarce, leaving such models especially susceptible to adversarial attacks. To address this gap, we adopt a general framework for generating adversarial examples against anchor-free object detectors. The framework rapidly collects gradients based on the recognized classes, which is more efficient than methods that generate perturbations from individual candidate boxes. In addition, we propose a method for extracting semantic-information masks, so that the adversarial perturbations are concentrated only in the semantically rich regions of the image, yielding sparser and more focused perturbations. Results on two datasets show that the method achieves state-of-the-art performance in both white-box and black-box experiments, providing support for improving and optimizing the robustness of such networks.
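The mask-constrained perturbation idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `apply_semantic_mask`, the L-infinity budget, and the random stand-in data are all assumptions; the paper's actual semantic-mask extraction is not reproduced here.

```python
import numpy as np

def apply_semantic_mask(perturbation, semantic_mask, epsilon=8 / 255):
    """Constrain an adversarial perturbation to semantically rich regions.

    perturbation:  (H, W, C) float array, e.g. a gradient-sign perturbation.
    semantic_mask: (H, W) binary array, 1 where semantic information is rich.
    epsilon:       L-infinity budget for the perturbation (illustrative value).
    """
    # Clip the raw perturbation to the L-infinity budget.
    delta = np.clip(perturbation, -epsilon, epsilon)
    # Zero the perturbation outside the mask -> sparser, more focused noise.
    return delta * semantic_mask[..., None]

# Illustrative usage with random data standing in for real gradients/masks.
rng = np.random.default_rng(0)
grad = rng.standard_normal((4, 4, 3)).astype(np.float32)
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # pretend the detected object occupies the center
delta = apply_semantic_mask(np.sign(grad) * (8 / 255), mask)
```

Only the masked center region carries non-zero perturbation, which mirrors the abstract's claim that restricting the attack to semantic regions makes the resulting noise sparse and concentrated.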

     
