Abstract:
Incremental learning aims to alleviate the catastrophic forgetting problem of deep neural networks. Existing incremental learning methods suffer from three main problems: privacy leakage, additional memory consumption, and linear growth of model parameters. To address these shortcomings, this paper builds on knowledge-distillation-based regularization and proposes a prototype sampling mechanism based on the k-means clustering algorithm, jointly training a classifier on old-class prototypes and the deep features of new data so that old and new classes remain distinguishable and balanced. Experimental results show that the method improves average classification accuracy by 17.4 and 16.9 percentage points over existing methods on the two public datasets CIFAR100 and Tiny-ImageNet, respectively, which verifies its effectiveness and advantages.
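The prototype sampling idea mentioned above can be illustrated with a minimal sketch: for each old class, k-means is run on that class's deep features, and the resulting centroids serve as class prototypes in place of stored raw exemplars. This is an illustrative assumption, not the authors' exact implementation; the function names `kmeans` and `class_prototypes` are hypothetical.

```python
# Hypothetical sketch of k-means-based prototype sampling for old classes.
# Centroids of each class's deep features act as prototypes, so no raw
# samples need to be retained (addressing privacy and memory concerns).
import random

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means on a list of feature vectors; returns k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(features, k)  # initialize from the data
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x in features:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(x, centroids[c])))
            clusters[j].append(x)
        # Recompute each centroid as the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids

def class_prototypes(features_by_class, k):
    """One set of k prototypes per old class, to be trained jointly
    with the deep features of new-class data."""
    return {c: kmeans(feats, k) for c, feats in features_by_class.items()}
```

During joint training, these prototypes would be mixed into each batch alongside new-class features so the classifier keeps separating old and new categories.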