
iNAS: Integral NAS for Device-Aware Salient Object Detection

Figure: comparison between iNAS and previous handcrafted SOD methods.

Introduction

Existing salient object detection (SOD) models usually focus on either the backbone feature extractor or the saliency head, ignoring the relation between them: a powerful backbone can still yield sub-optimal performance when paired with a weak saliency head, and vice versa. Moreover, balancing model performance against inference latency poses a great challenge to model design, especially across different deployment scenarios. By considering all components within an integral neural architecture search (iNAS) space, we propose a flexible device-aware search scheme that trains the SOD model only once and then quickly finds high-performance, low-latency models for multiple devices. We further propose an evolutionary search with latency-group sampling (LGS) to explore the entire latency range of our enlarged search space. Models searched by iNAS achieve performance similar to state-of-the-art methods while reducing latency by 3.8×, 3.3×, 2.6×, and 1.9× on a Huawei Nova6 SE, an Intel Core CPU, a Jetson Nano, and an Nvidia Titan Xp, respectively.
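The core idea behind latency-group sampling is to bucket candidate architectures by their measured latency and sample evenly across buckets, so the evolutionary search covers the whole latency range rather than collapsing onto one speed regime. The sketch below is a minimal illustration of that bucketing step, not the paper's implementation; the function and parameter names (`latency_group_sampling`, `num_groups`, `latency_fn`) are hypothetical.

```python
import random


def latency_group_sampling(population, num_groups, samples_per_group, latency_fn):
    """Illustrative latency-group sampling (LGS) sketch.

    Buckets candidates into equal-width latency groups, then draws the
    same number of samples from each non-empty group, so that slow and
    fast regions of the search space are represented equally.
    """
    latencies = [latency_fn(arch) for arch in population]
    lo, hi = min(latencies), max(latencies)
    width = (hi - lo) / num_groups or 1.0  # avoid zero width if all equal

    # Assign each candidate to a latency bucket.
    groups = [[] for _ in range(num_groups)]
    for arch, lat in zip(population, latencies):
        idx = min(int((lat - lo) / width), num_groups - 1)
        groups[idx].append(arch)

    # Sample evenly from every non-empty bucket.
    sampled = []
    for group in groups:
        if group:
            sampled.extend(random.sample(group, min(samples_per_group, len(group))))
    return sampled


# Toy usage: candidates are integers and their "latency" is the value itself.
population = list(range(100))
parents = latency_group_sampling(population, num_groups=5,
                                 samples_per_group=2, latency_fn=float)
```

In this toy run, `parents` contains ten candidates, two from each of the five latency buckets; in a real search, the selected parents would then be mutated or crossed over to produce the next generation.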

Codes

Source code and pre-trained models: https://github.com/guyuchao/iNAS.

Citation

@inproceedings{gu2021inas,
  title={iNAS: Integral NAS for Device-Aware Salient Object Detection},
  author={Gu, Yu-Chao and Gao, Shang-Hua and Cao, Xu-Sheng and Du, Peng and Lu, Shao-Ping and Cheng, Ming-Ming},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={4934--4944},
  year={2021}
}
