TY - JOUR
T1 - Interpretable Relative Squeezing bottleneck design for compact convolutional neural networks model
AU - Zhao, Qi
AU - Liu, Jiahui
AU - Zhang, Boxue
AU - Lyu, Shuchang
AU - Raoof, Nauman
AU - Feng, Wenquan
N1 - Publisher Copyright:
© 2019 The Authors
PY - 2019/9
Y1 - 2019/9
N2 - Convolutional neural networks (CNNs) are mainly used for image recognition tasks. However, some very large models are infeasible for mobile devices because of limited computing and memory resources. In this paper, feature maps of DenseNet and CondenseNet are visualized. It can be observed that some feature channels are in a locked state and others have similar distributions, so they can be compressed further. Thus, in this work, a novel architecture, RSNet, is introduced to improve the computational efficiency of CNNs. This paper proposes the Relative-Squeezing (RS) bottleneck design, in which the output is a weighted percentage of the input channels. In addition, RSNet contains multiple compression layers and learned group convolutions (LGCs). By eliminating superfluous feature maps, the relative-squeezing and compression layers transmit only the most significant features to the next layer, so fewer parameters are employed and much computation is saved. The proposed model is evaluated on three benchmark datasets: CIFAR-10, CIFAR-100 and ImageNet. Experimental results show that RSNet performs better with fewer parameters and FLOPs than state-of-the-art baselines, including CondenseNet, MobileNet and ShuffleNet.
AB - Convolutional neural networks (CNNs) are mainly used for image recognition tasks. However, some very large models are infeasible for mobile devices because of limited computing and memory resources. In this paper, feature maps of DenseNet and CondenseNet are visualized. It can be observed that some feature channels are in a locked state and others have similar distributions, so they can be compressed further. Thus, in this work, a novel architecture, RSNet, is introduced to improve the computational efficiency of CNNs. This paper proposes the Relative-Squeezing (RS) bottleneck design, in which the output is a weighted percentage of the input channels. In addition, RSNet contains multiple compression layers and learned group convolutions (LGCs). By eliminating superfluous feature maps, the relative-squeezing and compression layers transmit only the most significant features to the next layer, so fewer parameters are employed and much computation is saved. The proposed model is evaluated on three benchmark datasets: CIFAR-10, CIFAR-100 and ImageNet. Experimental results show that RSNet performs better with fewer parameters and FLOPs than state-of-the-art baselines, including CondenseNet, MobileNet and ShuffleNet.
KW - Compact CNN
KW - Image recognition
KW - Learned group convolutions
KW - Relative-Squeezing bottleneck
UR - https://www.scopus.com/pages/publications/85070959871
U2 - 10.1016/j.imavis.2019.06.006
DO - 10.1016/j.imavis.2019.06.006
M3 - Article
AN - SCOPUS:85070959871
SN - 0262-8856
VL - 89
SP - 276
EP - 288
JO - Image and Vision Computing
JF - Image and Vision Computing
ER -