
EGNet: Edge Guidance Network for Salient Object Detection

Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, Ming-Ming Cheng

TKLNDST, CS, Nankai University

Abstract

Fully convolutional neural networks (FCNs) have shown their advantages in the salient object detection task. However, most existing FCN-based methods still suffer from coarse object boundaries. In this paper, to solve this problem, we focus on the complementarity between salient edge information and salient object information. Accordingly, we present an edge guidance network (EGNet) for salient object detection with three steps to simultaneously model this complementary information in a single network. In the first step, we extract the salient object features in a progressive fusion way. In the second step, we integrate the local edge information and global location information to obtain the salient edge features. Finally, in order to sufficiently leverage these complementary features, we couple the same salient edge features with salient object features at various resolutions. Benefiting from the rich edge and location information in the salient edge features, the fused features can help locate salient objects, especially their boundaries, more accurately. Experimental results demonstrate that the proposed method performs favorably against the state-of-the-art methods on six widely used datasets without any pre-processing or post-processing.
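The second step above, integrating local edge information with global location information, can be sketched in a few lines. This is a minimal NumPy illustration, not the actual EGNet code: the function names are hypothetical, nearest-neighbour upsampling stands in for bilinear interpolation, and sum fusion stands in for the convolutional fusion used in the network.

```python
import numpy as np

def upsample(feat, scale):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return np.kron(feat, np.ones((1, scale, scale)))

def salient_edge_features(local_edge_feat, global_loc_feat):
    """Combine fine local edge features (from a shallow layer) with coarse
    global location features (from the deepest layer): upsample the global
    map to the edge resolution and fuse by summation."""
    scale = local_edge_feat.shape[-1] // global_loc_feat.shape[-1]
    return local_edge_feat + upsample(global_loc_feat, scale)

# Toy features: shallow high-resolution features and deep location features.
local = np.random.rand(8, 128, 128)
glob = np.random.rand(8, 8, 8)
edge = salient_edge_features(local, glob)
```

The intuition is that shallow layers know where intensity edges are but not which edges belong to salient objects; the deep location features suppress the irrelevant ones.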

Paper

  • EGNet: Edge Guidance Network for Salient Object Detection, Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, Ming-Ming Cheng, ICCV, 2019. [pdf][code and evaluation results]

Source Code

We release the source code and provide all the evaluation results (SOD, ECSSD, PASCAL-S, DUT-OMRON, DUTS, HKU-IS, SOC) on the code page.


Method

Pipeline

The pipeline of our method. We use brown thick lines to represent information flows between the scales. PSFEM: progressive salient object features extraction module. NLSEM: non-local salient edge features extraction module. O2OGM: one-to-one guidance module. FF: feature fusion. Spv.: supervision. First, we explicitly model the edge information and obtain salient edge features. Then we leverage the salient edge features to guide the saliency features to locate and segment the salient objects better.
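The one-to-one guidance step (O2OGM) can be sketched as follows: the same set of salient edge features is fused with the salient object features at every scale. This is a minimal NumPy sketch under simplifying assumptions, not the network implementation: names are illustrative, nearest-neighbour upsampling replaces bilinear interpolation, and element-wise summation replaces the convolutional fusion (FF) used in the actual module.

```python
import numpy as np

def upsample(feat, scale):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return np.kron(feat, np.ones((1, scale, scale)))

def one_to_one_guidance(edge_feat, object_feats):
    """Fuse one set of salient edge features with each of the multi-scale
    salient object feature maps (one edge map guides every scale)."""
    fused = []
    for f in object_feats:
        scale = edge_feat.shape[-1] // f.shape[-1]
        fused.append(upsample(f, scale) + edge_feat)
    return fused

# Toy features: edge features at 64x64, object features at three coarser scales.
edge = np.random.rand(8, 64, 64)
objs = [np.random.rand(8, s, s) for s in (32, 16, 8)]
out = one_to_one_guidance(edge, objs)
```

Because each fused map now carries explicit boundary cues at every resolution, the per-scale predictions can be supervised (Spv.) to sharpen object boundaries.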

Qualitative comparisons

We compare our EGNet with other state-of-the-art methods.

Quantitative comparisons

As we can see, our method performs favorably against the state-of-the-art methods on six widely used datasets without any pre-processing and post-processing.

If you find our work helpful, please cite:

@inproceedings{zhao2019EGNet,
 title={EGNet: Edge Guidance Network for Salient Object Detection},
 author={Zhao, Jia-Xing and Liu, Jiang-Jiang and Fan, Deng-Ping and Cao, Yang and Yang, Jufeng and Cheng, Ming-Ming},
 booktitle={The IEEE International Conference on Computer Vision (ICCV)},
 month={Oct},
 year={2019},
}

Contact

zhaojiaxing AT mail.nankai.edu.cn

4 Comments
Green hand

Hello, I made a change in run.py: in the Testing setting I changed the model default from "./epoch_resnet" to "./epoch_vgg", but the predicted images came out entirely gray. Do I need to change anything else?
Afterwards I trained using the "python3 run.py --mode train" command; which model does this command train?
Then I renamed the trained models from "resnet50_caffe.pth" and "vgg16_20M.pth" to "epoch_resnet.pth" and "epoch_vgg.pth", copied them into the same folder as the pretrained models, replacing the originals, and finally ran "python3 run.py --mode test --sal_mode s" to predict on the SOD dataset, but I still get pure gray images. Could you tell me what causes this?

Green hand

Attached image:

截图.png
zhu

Is there a PDF of the paper?

Jiaxing Zhao

We'll release it once the camera-ready version is done; it should be out in a day or two!