PP-NAS: Searching for Plug-and-Play Blocks on Convolutional Neural Networks

  • Anqi Xiao
  • Biluo Shen
  • Jie Tian*
  • Zhenhua Hu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Multiscale features are of great importance in modern convolutional neural networks, showing consistent performance gains on numerous vision tasks. Therefore, many plug-and-play blocks have been introduced to upgrade existing convolutional neural networks for stronger multiscale representation ability. However, the design of plug-and-play blocks is becoming increasingly complex, and these manually designed blocks are not optimal. In this work, we propose PP-NAS to develop plug-and-play blocks based on neural architecture search (NAS). Specifically, we design a new search space, PPConv, and develop a search algorithm consisting of one-level optimization, a zero-one loss, and a connection existence loss. PP-NAS minimizes the optimization gap between the super-net and sub-architectures and can achieve good performance even without retraining. Extensive experiments on image classification, object detection, and semantic segmentation verify the superiority of PP-NAS over state-of-the-art CNNs (e.g., ResNet, ResNeXt, and Res2Net). Our code is available at https://github.com/ainieli/PP-NAS.
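The abstract describes a differentiable search over connections in a plug-and-play block, with auxiliary losses that drive the super-net toward a discrete sub-architecture. The PyTorch sketch below illustrates that general idea only; the block structure, gate parameterization, and loss forms (PPConvBlock, zero_one_loss, connection_existence_loss) are illustrative assumptions, not the authors' implementation, which is available in the linked repository.

```python
# Illustrative sketch of a searchable plug-and-play block in the spirit of PP-NAS.
# All names and loss forms here are assumptions for exposition, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PPConvBlock(nn.Module):
    """Multi-branch block whose branch connections are weighted by
    learnable architecture parameters (one gate per candidate connection)."""

    def __init__(self, channels: int, num_branches: int = 4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_branches)
        )
        # Architecture parameters searched jointly with the network weights
        # (one-level optimization: a single optimizer updates both).
        self.alphas = nn.Parameter(torch.zeros(num_branches))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.alphas)  # soft connection gates in (0, 1)
        return sum(g * branch(x) for g, branch in zip(gates, self.branches))


def zero_one_loss(alphas: torch.Tensor) -> torch.Tensor:
    """Push each gate toward 0 or 1 so the discretized sub-architecture
    behaves like the super-net (illustrative form of a zero-one penalty)."""
    g = torch.sigmoid(alphas)
    return (g * (1.0 - g)).mean()


def connection_existence_loss(alphas: torch.Tensor) -> torch.Tensor:
    """Penalize configurations where no connection survives discretization
    (illustrative form of a connection existence penalty)."""
    g = torch.sigmoid(alphas)
    return F.relu(0.5 - g.max())


if __name__ == "__main__":
    block = PPConvBlock(channels=64)
    x = torch.randn(2, 64, 32, 32)
    task_loss = block(x).pow(2).mean()  # placeholder for the task loss
    total = task_loss + 0.1 * zero_one_loss(block.alphas) \
        + 0.1 * connection_existence_loss(block.alphas)
    total.backward()
```

In this sketch both losses act only on the architecture gates, so after training the gates can be thresholded to obtain a discrete block with little change in behavior; the actual search space and optimization schedule used by PP-NAS are defined in the paper and repository.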

Original language: English
Pages (from-to): 12718-12730
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 9
DOIs
State: Published - 2024

Keywords

  • Multiscale
  • neural architecture search (NAS)
  • plug-and-play
  • representation learning
