ROBUST DATA SEPARATION METHOD BASED ON ADVERSARIAL GENERATION

Abstract: Adversarial defense algorithms are mainly used to defend against adversarial examples crafted by attackers, reducing the risk that a system is successfully attacked. Robust accuracy is generally used as the indicator of a model's ability to resist attacks; however, current research on the robustness features contained in the training data remains at the level of conceptual explanation and lacks a more intuitive understanding. To better understand robustness, a GAN-Separator model that can separate the robust data is proposed. Using the separated robust data for adversarial training can improve the robustness of the model. The separated robust data show that robustness mainly derives from object contours, key parts, and their colors, which is roughly consistent with how human vision judges objects. The experimental results provide guidance for the direction of research and improvement in adversarial defense, and can be used to improve the robustness of AI systems in a targeted way.
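The adversarial-training step the abstract refers to can be sketched as follows. This is a minimal illustration only: it assumes the robust subset has already been separated (here replaced by random toy data), uses a plain linear classifier, and uses FGSM as a stand-in attack; none of this reproduces the paper's GAN-Separator model, and all names and parameters are illustrative.

```python
import numpy as np

# Toy stand-in for an already-separated robust dataset (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))      # 64 "robust" examples, 10 features
y = rng.integers(0, 2, size=64)    # binary labels
w = np.zeros(10)                   # weights of a linear (logistic) model

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(X, y, w):
    # Gradient of the logistic loss with respect to the inputs:
    # dL/dx_i = (p_i - y_i) * w for a linear model.
    p = sigmoid(X @ w)
    return np.outer(p - y, w)

def fgsm(X, y, w, eps=0.1):
    # Fast Gradient Sign Method: one-step L-infinity attack of radius eps.
    return X + eps * np.sign(grad_wrt_input(X, y, w))

def adv_train_step(X, y, w, lr=0.1, eps=0.1):
    # Adversarial training: fit the model on attacked versions of the
    # (separated) robust examples rather than on the clean ones.
    X_adv = fgsm(X, y, w, eps)
    p = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p - y) / len(y)
    return w - lr * grad_w

for _ in range(50):
    w = adv_train_step(X, y, w)
```

The loop trains only on perturbed inputs; in the paper's setting the clean inputs would be the separated robust data, so the perturbation budget `eps` bounds how far the attack may move each robust example.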

     
