TY - GEN
T1 - NASIL
T2 - 11th International Symposium on Parallel Architectures, Algorithms and Programming, PAAP 2020
AU - Fu, Xianya
AU - Li, Wenrui
AU - Chen, Qiurui
AU - Zhang, Lianyi
AU - Yang, Kai
AU - Qing, Duzheng
AU - Wang, Rui
N1 - Publisher Copyright:
© 2021, Springer Nature Singapore Pte Ltd.
PY - 2021
Y1 - 2021
AB - “Catastrophic forgetting” and task scalability are two major challenges of incremental learning. Both issues are related to the insufficient capacity of the machine learning model and to weights that are insufficiently trained as the number of tasks grows. In this paper, we investigate the impact of the neural network architecture on the performance of incremental learning in the case of image classification. As tasks accumulate, we propose to use neural network architecture searching (NAS) to find a structure that better fits the growing task collection. We build a NAS environment with reinforcement learning as the search strategy and a Long Short-Term Memory network as the controller network. During the search phase, a computation operation and connections to previous nodes are selected for each layer. Each time a new group of tasks is added, the neural network architecture is searched and reorganized according to the training data set. To speed up the search, we design a parameter sharing mechanism in which identical building blocks in each layer share a group of parameters. We also introduce quantified-parameter building blocks into the NAS to identify the best candidate in each round of searching. We evaluate our solution on the CIFAR-100 dataset; its average accuracy outperforms representative solutions (LwEMC, iCaRL, GANIL) by 24.92%, 5.62%, and 3.6%, respectively, and the advantage grows as more tasks are added.
KW - Continual learning
KW - Image classification
KW - Network architecture searching
UR - https://www.scopus.com/pages/publications/85102529944
U2 - 10.1007/978-981-16-0010-4_7
DO - 10.1007/978-981-16-0010-4_7
M3 - Conference contribution
AN - SCOPUS:85102529944
SN - 9789811600098
T3 - Communications in Computer and Information Science
SP - 68
EP - 80
BT - 11th International Symposium, PAAP 2020, Proceedings
A2 - Ning, Li
A2 - Chau, Vincent
A2 - Lau, Francis
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 28 December 2020 through 30 December 2020
ER -