MobileSal: Extremely Efficient RGB-D Salient Object Detection

Our MobileSal method achieves very competitive accuracy while running much faster than existing RGB-D SOD methods.

  • MobileSal: Extremely Efficient RGB-D Salient Object Detection, Yu-Huan Wu, Yun Liu, Jun Xu, Jia-Wang Bian, Yu-Chao Gu, Ming-Ming Cheng*, IEEE TPAMI, 2021. [pdf | bib | code | project | Chinese version]
  • Introduction

    The high computational cost of neural networks has prevented recent successes in RGB-D salient object detection (SOD) from benefiting real-world applications. Hence, this paper introduces a novel network, MobileSal, which focuses on efficient RGB-D SOD by using mobile networks for deep feature extraction. However, mobile networks are less powerful in feature representation than cumbersome networks. To address this, we observe that the depth information accompanying color images can strengthen the feature representation for SOD if leveraged properly. We therefore propose an implicit depth restoration (IDR) technique to strengthen the feature representation capability of mobile networks for RGB-D SOD. IDR is adopted only in the training phase and is omitted during testing, so it is computationally free at inference. In addition, we propose compact pyramid refinement (CPR) for efficient multi-level feature aggregation, deriving salient objects with clear boundaries. With IDR and CPR incorporated, MobileSal performs favorably against state-of-the-art methods on six challenging RGB-D SOD datasets, with much faster speed (450 fps for an input size of 320 × 320) and fewer parameters (6.5M).
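
    Below is a minimal sketch of the IDR idea, assuming a PyTorch-style implementation: an auxiliary head predicts the scene depth from intermediate RGB features during training, and the whole branch is dropped at test time. The layer sizes and the L1 loss are illustrative assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class IDRBranch(nn.Module):
        """Illustrative implicit depth restoration head (training only)."""
        def __init__(self, in_channels: int):
            super().__init__()
            # Lightweight prediction head; the exact layers are assumptions.
            self.head = nn.Sequential(
                nn.Conv2d(in_channels, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 1),  # single-channel depth prediction
            )

        def forward(self, feats: torch.Tensor, depth_gt: torch.Tensor) -> torch.Tensor:
            pred = self.head(feats)
            pred = F.interpolate(pred, size=depth_gt.shape[-2:],
                                 mode="bilinear", align_corners=False)
            # Auxiliary regression loss against the depth map (assumed in [0, 1]).
            return F.l1_loss(torch.sigmoid(pred), depth_gt)

    During training, the total objective is the saliency loss plus this auxiliary term; at inference the branch is simply never called, so it adds no cost to the deployed model.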

    Method Overview

    The pipeline of MobileSal. We fuse RGB and depth information only at the coarsest level and then efficiently perform multi-scale aggregation with CPRs. The IDR branch strengthens the less powerful features learned by the mobile network in a computationally free manner.
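
    As a toy, runnable skeleton of this coarsest-level-only fusion (assuming a PyTorch-style implementation; the stage widths, the tiny depth stream, and the element-wise fusion below are illustrative assumptions, not the paper's exact modules):

    import torch
    import torch.nn as nn

    def stage(c_in: int, c_out: int) -> nn.Sequential:
        # Stand-in for one downsampling stage of a mobile backbone.
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    class CoarsestLevelFusion(nn.Module):
        """RGB and depth streams meet only once, at the coarsest (1/32) scale."""
        def __init__(self):
            super().__init__()
            chans = [16, 24, 32, 96, 320]  # assumed widths
            ins = [3] + chans[:-1]
            self.rgb_stages = nn.ModuleList([stage(i, o) for i, o in zip(ins, chans)])
            self.depth_stream = nn.Sequential(
                *[stage(i, o) for i, o in zip([1] + chans[:-1], chans)]
            )

        def forward(self, rgb: torch.Tensor, depth: torch.Tensor):
            feats = []
            x = rgb
            for s in self.rgb_stages:
                x = s(x)
                feats.append(x)                # five RGB feature scales
            d5 = self.depth_stream(depth)      # coarsest depth features only
            feats[-1] = feats[-1] * d5         # single fusion point at 1/32 scale
            return feats                       # multi-level features for CPR

    Keeping the depth stream out of all finer scales is what makes the cross-modal fusion cost negligible.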
    Illustration of the proposed IDR and CPR. (a) The IDR branch strengthens the less powerful features of the mobile backbone network. (b) Multi-level deep features are efficiently aggregated by the CPR module. “D-Conv” indicates depthwise separable convolution.
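
    The "D-Conv" unit is a standard depthwise separable convolution: a per-channel 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution. A minimal PyTorch sketch (the BatchNorm/ReLU placement is an assumption):

    import torch.nn as nn

    class DSConv(nn.Module):
        """Depthwise separable convolution ("D-Conv")."""
        def __init__(self, in_channels: int, out_channels: int):
            super().__init__()
            # Depthwise: one 3x3 filter per input channel (groups=in_channels).
            self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                       padding=1, groups=in_channels, bias=False)
            # Pointwise: 1x1 convolution mixing channels.
            self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                       bias=False)
            self.bn = nn.BatchNorm2d(out_channels)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    Compared with a standard k × k convolution, this factorization reduces the multiply-adds by roughly a factor of 1/C_out + 1/k² (about 8-9× for k = 3), which is what keeps CPR's multi-level aggregation compact.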

    Results

    Quantitative results on six challenging datasets. The best, second-best, and third-best results are highlighted in red, blue, and bold, respectively. Our method achieves the best speed-accuracy trade-off.

    Citation

    @ARTICLE{wu2021mobilesal,
      author={Wu, Yu-Huan and Liu, Yun and Xu, Jun and Bian, Jia-Wang and Gu, Yu-Chao and Cheng, Ming-Ming},
      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
      title={MobileSal: Extremely Efficient RGB-D Salient Object Detection}, 
      year={2021},
      doi={10.1109/TPAMI.2021.3134684}
    }
