Salient Object Detection: A Benchmark
Ali Borji, Ming-Ming Cheng, Huaizu Jiang, Jia Li
Notice: You are welcome to contact Ming-Ming Cheng to add new comparisons. Submissions of new results should include either source code or an executable.
Abstract
We extensively compare, qualitatively and quantitatively, 42 state-of-the-art models (30 salient object detection, 10 fixation prediction, 1 objectness, and 1 baseline) over 6 challenging datasets for the purpose of benchmarking salient object detection and segmentation methods. From the results obtained so far, our evaluation shows a consistent rapid progress over the last few years in terms of both accuracy and running time. The top contenders in this benchmark significantly outperform the models identified as the best in the previous benchmark conducted just two years ago. We find that the models designed specifically for salient object detection generally work better than models in closely related areas, which in turn provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems. In particular, we analyze the influences of center bias and scene complexity in model performance, which, along with the hard cases for state-of-the-art models, provide useful hints towards constructing more challenging large scale datasets and better saliency models. Finally, we propose probable solutions for tackling several open problems such as evaluation scores and dataset bias, which also suggest future research directions in the rapidly-growing field of salient object detection.
Papers
- Salient Object Detection: A Benchmark, Ali Borji, Ming-Ming Cheng, Huaizu Jiang, Jia Li, IEEE TIP, 2015. [pdf] [Project page] [Bib]
- Salient Object Detection: A Survey, Ali Borji, Ming-Ming Cheng, Huaizu Jiang, Jia Li, arXiv eprint, 2014. [pdf] [Project page] [Bib]
Code
- C++ & Matlab: Salient Object Detection: A Benchmark, IEEE TIP, 2015.
- More publicly available source code is listed on this website.
Downloads
We provide the evaluation data (images, ground truth, saliency maps, etc.) for download here to facilitate future research. We suggest using BT software to download the zip files via their URL lists, which are available here. The evaluation results, in the form of Matlab and CSV files for the plots and tables, can be downloaded here. If you use any part of our results, please cite the corresponding paper above. You can also download the data via Baidu Netdisk (百度网盘).
Performance: FMeasure of saliency maps and salient object segmentations
Max F-measure of the average precision-recall curve, average F-measure for adaptive-thresholding results, and average F-measure for SalCut. The subtitle of each column is in the [Dataset]-[Evaluation Metric] format, where [Dataset] is the initial letter of the 6 benchmarks {THUR15K, JuddDB, DUT-OMRON, SED2, MSRA10K, ECSSD} and [Evaluation Metric] is M (max F-measure), A (adaptive-threshold F-measure), or S (SalCut F-measure). Click a column title to re-rank the table by that metric. (An illustrative sketch of how these scores are computed follows the table.)
Model T-M T-A T-S J-M J-A J-S D-M D-A D-S S-M S-A S-S M-M M-A M-S E-M E-A E-S
MBD .622 .594 .642 .472 .422 .470 .624 .592 .636 .799 .803 .759 .849 .830 .890 .739 .703 .785
ST .631 .580 .648 .455 .394 .459 .631 .577 .635 .818 .805 .768 .868 .825 .896 .752 .690 .777
QCUT .651 .625 .620 .509 .454 .480 .683 .647 .647 .810 .801 .672 .874 .843 .843 .779 .738 .747
HDCT .602 .571 .636 .412 .378 .422 .609 .572 .643 .822 .802 .758 .837 .807 .877 .705 .669 .74
RBD .596 .566 .618 .457 .403 .461 .63 .58 .647 .837 .825 .75 .856 .821 .884 .718 .68 .757
GR .551 .509 .546 .418 .338 .378 .599 .54 .58 .798 .753 .639 .816 .77 .83 .664 .583 .677
MNP .495 .523 .603 .367 .337 .405 .467 .486 .576 .621 .778 .765 .668 .724 .822 .568 .555 .709
UFO .579 .557 .61 .432 .385 .433 .545 .541 .593 .742 .781 .729 .842 .806 .862 .701 .654 .739
MC .61 .603 .6 .46 .42 .434 .627 .603 .615 .779 .803 .63 .847 .824 .855 .742 .704 .745
DSR .611 .604 .597 .454 .421 .41 .626 .614 .593 .794 .821 .632 .835 .824 .833 .737 .717 .703
CHM .612 .591 .643 .417 .368 .424 .604 .586 .637 .75 .75 .658 .825 .804 .857 .722 .684 .735
GC .533 .517 .497 .384 .321 .342 .535 .528 .506 .729 .73 .616 .794 .777 .78 .641 .612 .593
LBI .519 .534 .618 .371 .353 .416 .482 .504 .609 .692 .776 .764 .696 .714 .857 .586 .563 .738
PCA .544 .558 .601 .432 .404 .368 .554 .554 .624 .754 .796 .701 .782 .782 .845 .646 .627 .72
DRFI .67 .607 .674 .475 .419 .447 .665 .605 .669 .831 .839 .702 .881 .838 .905 .787 .733 .801
GMR .597 .594 .579 .454 .409 .432 .61 .591 .591 .773 .789 .643 .847 .825 .839 .74 .712 .736
HS .585 .549 .602 .442 .358 .428 .616 .565 .616 .811 .776 .713 .845 .8 .87 .731 .659 .769
LMLC .54 .519 .588 .375 .302 .397 .521 .493 .551 .653 .712 .674 .801 .772 .86 .659 .6 .735
SF .5 .495 .342 .373 .319 .219 .519 .512 .377 .764 .794 .509 .779 .759 .573 .619 .576 .378
FES .547 .575 .426 .424 .411 .333 .52 .555 .38 .617 .785 .174 .717 .753 .534 .645 .655 .467
CB .581 .556 .615 .444 .375 .435 .542 .534 .593 .73 .704 .657 .815 .775 .857 .717 .656 .761
SVO .554 .441 .609 .414 .279 .419 .557 .407 .609 .744 .667 .746 .789 .585 .863 .639 .357 .737
SWD .528 .56 .649 .434 .386 .454 .478 .506 .613 .548 .714 .737 .689 .705 .871 .624 .549 .781
HC .386 .401 .436 .286 .257 .28 .382 .38 .435 .736 .759 .646 .677 .663 .74 .46 .441 .499
RC .61 .586 .639 .431 .37 .425 .599 .578 .621 .774 .807 .649 .844 .82 .875 .741 .701 .776
SEG .5 .425 .58 .376 .268 .393 .516 .45 .562 .704 .64 .669 .697 .585 .812 .568 .408 .715
MSS .478 .49 .2 .341 .324 .089 .476 .49 .193 .743 .783 .298 .696 .711 .362 .53 .536 .203
CA .458 .494 .557 .353 .33 .394 .435 .458 .532 .591 .737 .565 .621 .679 .748 .515 .494 .625
FT .386 .4 .238 .278 .25 .132 .381 .388 .259 .715 .734 .436 .635 .628 .472 .434 .431 .257
AC .41 .431 .068 .227 .199 .049 .354 .383 .04 .684 .729 .14 .52 .566 .014 .411 .41 .038
LC .386 .408 .289 .264 .246 .156 .327 .353 .243 .683 .752 .486 .569 .589 .432 .39 .396 .219
OBJ .498 .482 .593 .368 .282 .413 .481 .445 .578 .685 .723 .731 .718 .681 .84 .574 .456 .698
BMS .568 .578 .594 .434 .404 .416 .573 .576 .58 .713 .76 .627 .805 .798 .822 .683 .659 .69
COV .51 .587 .398 .429 .427 .315 .486 .579 .373 .518 .724 .212 .667 .755 .394 .641 .677 .413
SS .415 .482 .523 .344 .321 .397 .396 .443 .502 .533 .696 .641 .572 .642 .675 .467 .441 .574
SIM .372 .429 .568 .295 .292 .384 .358 .402 .539 .498 .685 .725 .498 .585 .794 .433 .391 .672
SeR .374 .419 .536 .316 .285 .388 .385 .411 .532 .521 .714 .702 .542 .607 .755 .419 .391 .596
SUN .387 .432 .486 .303 .291 .285 .321 .36 .445 .504 .661 .613 .505 .596 .67 .388 .376 .478
SR .374 .457 .002 .279 .27 .001 .298 .363 0 .504 .7 .002 .473 .569 .001 .381 .385 .001
GB .526 .571 .65 .419 .396 .455 .507 .548 .638 .571 .746 .695 .688 .737 .837 .624 .613 .765
AIM .427 .461 .559 .317 .26 .36 .361 .377 .495 .541 .718 .693 .555 .575 .75 .449 .357 .571
IT .373 .437 .005 .297 .283 0 .378 .449 .005 .579 .697 .008 .471 .586 .158 .407 .414 .003
AVG .458 .569 .62 .392 .367 .411 .406 .514 .534 .388 .524 .64 .58 .692 .779 .597 .627 .756
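The following is a minimal sketch, not the released CmLib/Matlab benchmark code, of how the F-measure variants above can be computed with OpenCV, assuming 8-bit grayscale saliency maps and binary ground-truth masks of the same size. The beta^2 = 0.3 weighting and the adaptive threshold (twice the mean saliency of a map) follow the common conventions in this literature; all function names are illustrative. The SalCut column is scored the same way, applied to the binary SalCut segmentations.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Precision and recall of a binarized map against a binary ground-truth mask.
static void precisionRecall(const cv::Mat &binMap, const cv::Mat &gt,
                            double &precision, double &recall) {
    double tp = cv::countNonZero(binMap & gt);   // true positives
    double detected = cv::countNonZero(binMap);  // pixels marked salient
    double positives = cv::countNonZero(gt);     // ground-truth salient pixels
    precision = detected > 0 ? tp / detected : 0;
    recall = positives > 0 ? tp / positives : 0;
}

// Weighted F-measure with beta^2 = 0.3, the usual setting in this literature.
static double fMeasure(double p, double r, double beta2 = 0.3) {
    return (p + r > 0) ? (1 + beta2) * p * r / (beta2 * p + r) : 0;
}

// Average F-measure under the adaptive threshold (twice the mean saliency).
double adaptiveFMeasure(const cv::Mat &salMap, const cv::Mat &gt) {
    double thr = std::min(255.0, 2 * cv::mean(salMap)[0]);
    cv::Mat binMap;
    cv::threshold(salMap, binMap, thr, 255, cv::THRESH_BINARY);
    double p, r;
    precisionRecall(binMap, gt, p, r);
    return fMeasure(p, r);
}

// Max F-measure of the dataset-averaged precision-recall curve: sweep all
// 256 thresholds, average precision/recall over all images, take the best F.
double maxFMeasure(const std::vector<cv::Mat> &salMaps,
                   const std::vector<cv::Mat> &gts) {
    double best = 0;
    for (int t = 0; t < 256; t++) {
        double sumP = 0, sumR = 0;
        for (size_t i = 0; i < salMaps.size(); i++) {
            cv::Mat binMap;
            cv::threshold(salMaps[i], binMap, t, 255, cv::THRESH_BINARY);
            double p, r;
            precisionRecall(binMap, gts[i], p, r);
            sumP += p;
            sumR += r;
        }
        best = std::max(best, fMeasure(sumP / salMaps.size(), sumR / salMaps.size()));
    }
    return best;
}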
Performance: AUC & MAE
Comparison of AUC scores (larger is better) and MAE scores (smaller is better). As in the table above, the subtitle of each column is in the [Dataset]-[Evaluation Metric] format, where [Dataset] is the initial letter of the 6 benchmarks {THUR15K, JuddDB, DUT-OMRON, SED2, MSRA10K, ECSSD}. Click a column title to re-rank the table by that metric. (A sketch of how MAE and AUC can be computed follows the table.)
Method T-AUC T-MAE J-AUC J-MAE D-AUC D-MAE S-AUC S-MAE M-AUC M-MAE E-AUC E-MAE
MBD 0.915 0.162 0.838 0.225 0.903 0.168 0.922 0.137 0.964 0.107 0.917 0.172
ST 0.911 0.179 0.806 0.240 0.895 0.182 0.922 0.145 0.961 0.122 0.914 0.193
QCUT 0.907 0.128 0.831 0.178 0.897 0.119 0.860 0.148 0.956 0.118 0.909 0.171
HDCT 0.878 0.177 0.771 0.209 0.869 0.164 0.898 0.162 0.941 0.143 0.866 0.199
RBD 0.887 0.15 0.826 0.212 0.894 0.144 0.899 0.13 0.955 0.108 0.894 0.173
GR 0.829 0.256 0.747 0.311 0.846 0.259 0.854 0.189 0.925 0.198 0.831 0.285
MNP 0.854 0.255 0.768 0.286 0.835 0.272 0.888 0.215 0.895 0.229 0.82 0.307
UFO 0.853 0.165 0.775 0.216 0.839 0.173 0.845 0.18 0.938 0.15 0.875 0.207
MC 0.895 0.184 0.823 0.231 0.887 0.186 0.877 0.182 0.951 0.145 0.91 0.204
DSR 0.902 0.142 0.826 0.196 0.899 0.139 0.915 0.14 0.959 0.121 0.914 0.173
CHM 0.91 0.153 0.797 0.226 0.89 0.152 0.831 0.168 0.952 0.142 0.903 0.195
GC 0.803 0.192 0.702 0.258 0.796 0.197 0.846 0.185 0.912 0.139 0.805 0.214
LBI 0.876 0.239 0.792 0.273 0.854 0.249 0.896 0.207 0.91 0.224 0.842 0.28
PCA 0.885 0.198 0.804 0.181 0.887 0.206 0.911 0.2 0.941 0.185 0.876 0.248
DRFI 0.938 0.15 0.851 0.213 0.933 0.155 0.944 0.13 0.978 0.118 0.944 0.166
GMR 0.856 0.181 0.781 0.243 0.853 0.189 0.862 0.163 0.944 0.126 0.889 0.189
HS 0.853 0.218 0.775 0.282 0.86 0.227 0.858 0.157 0.933 0.149 0.883 0.228
LMLC 0.853 0.246 0.724 0.303 0.817 0.277 0.826 0.269 0.936 0.163 0.849 0.26
SF 0.799 0.184 0.711 0.218 0.803 0.183 0.871 0.18 0.905 0.175 0.817 0.23
FES 0.867 0.155 0.805 0.184 0.848 0.156 0.838 0.196 0.898 0.185 0.86 0.215
CB 0.87 0.227 0.76 0.287 0.831 0.257 0.839 0.195 0.927 0.178 0.875 0.241
SVO 0.865 0.382 0.784 0.422 0.866 0.409 0.875 0.348 0.93 0.331 0.857 0.404
SWD 0.873 0.288 0.812 0.292 0.843 0.31 0.845 0.296 0.901 0.267 0.857 0.318
HC 0.735 0.291 0.626 0.348 0.733 0.31 0.88 0.193 0.867 0.215 0.704 0.331
RC 0.896 0.168 0.775 0.27 0.859 0.189 0.852 0.148 0.936 0.137 0.892 0.187
SEG 0.818 0.336 0.747 0.354 0.825 0.337 0.796 0.312 0.882 0.298 0.808 0.342
MSS 0.813 0.178 0.726 0.204 0.817 0.177 0.871 0.192 0.875 0.203 0.779 0.245
CA 0.83 0.248 0.774 0.282 0.815 0.254 0.853 0.229 0.872 0.237 0.784 0.31
FT 0.684 0.241 0.593 0.267 0.682 0.25 0.82 0.206 0.79 0.235 0.661 0.291
AC 0.74 0.186 0.548 0.239 0.721 0.19 0.831 0.206 0.756 0.227 0.668 0.265
LC 0.696 0.229 0.586 0.277 0.654 0.246 0.827 0.204 0.771 0.233 0.627 0.296
OBJ 0.839 0.306 0.75 0.359 0.822 0.323 0.87 0.269 0.907 0.262 0.818 0.337
BMS 0.879 0.181 0.788 0.233 0.856 0.175 0.852 0.184 0.929 0.151 0.865 0.216
COV 0.883 0.155 0.826 0.182 0.864 0.156 0.833 0.21 0.904 0.197 0.879 0.217
SS 0.792 0.267 0.754 0.301 0.784 0.277 0.826 0.266 0.823 0.266 0.725 0.344
SIM 0.797 0.414 0.727 0.412 0.783 0.429 0.833 0.384 0.808 0.388 0.734 0.433
SeR 0.778 0.345 0.746 0.379 0.786 0.352 0.835 0.29 0.813 0.31 0.695 0.404
SUN 0.746 0.31 0.674 0.319 0.708 0.349 0.789 0.307 0.778 0.306 0.623 0.396
SR 0.741 0.175 0.676 0.2 0.688 0.181 0.769 0.22 0.736 0.232 0.633 0.266
GB 0.882 0.229 0.815 0.261 0.857 0.24 0.839 0.242 0.902 0.222 0.865 0.263
AIM 0.814 0.298 0.719 0.331 0.768 0.322 0.846 0.262 0.833 0.286 0.73 0.339
IT 0.623 0.199 0.586 0.2 0.636 0.198 0.682 0.245 0.64 0.213 0.577 0.273
AVG 0.849 0.248 0.797 0.343 0.814 0.288 0.736 0.405 0.857 0.26 0.863 0.276
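Below is a similarly hedged, single-image sketch of the MAE and AUC scores, again assuming an 8-bit saliency map and a binary mask; it is not the benchmark's exact implementation. Note that several AUC variants exist in the saliency literature, so this plain threshold-sweep version may not reproduce the table above exactly.

#include <opencv2/opencv.hpp>

// MAE: mean absolute difference between the saliency map and the binary
// mask, with both rescaled to [0, 1].
double meanAbsoluteError(const cv::Mat &salMap, const cv::Mat &gt) {
    cv::Mat s, g, diff;
    salMap.convertTo(s, CV_32F, 1.0 / 255);
    gt.convertTo(g, CV_32F, 1.0 / 255);
    cv::absdiff(s, g, diff);
    return cv::mean(diff)[0];
}

// AUC by sweeping all 256 thresholds: each threshold yields one point
// (false positive rate, true positive rate) of the ROC curve, and the area
// is accumulated with the trapezoid rule.
double aucScore(const cv::Mat &salMap, const cv::Mat &gt) {
    double positives = cv::countNonZero(gt);
    double negatives = (double)gt.total() - positives;
    double auc = 0, prevTpr = 1, prevFpr = 1;  // threshold 0 keeps every pixel
    for (int t = 0; t <= 256; t++) {
        cv::Mat binMap = salMap >= t;          // at t = 256 nothing survives
        double tp = cv::countNonZero(binMap & gt);
        double fp = cv::countNonZero(binMap) - tp;
        double tpr = positives > 0 ? tp / positives : 0;
        double fpr = negatives > 0 ? fp / negatives : 0;
        auc += (prevFpr - fpr) * (prevTpr + tpr) / 2;  // trapezoid rule
        prevTpr = tpr;
        prevFpr = fpr;
    }
    return auc;
}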
Salient object detection datasets
Abbr. Images References
MSRA10K 10000 Learning to Detect a Salient Object, IEEE CVPR 2007, Liu et al.; Frequency-tuned Salient Region Detection, IEEE CVPR 2009, Achanta et al.; Global Contrast based Salient Region Detection, IEEE TPAMI 2015, Cheng et al.
ECSSD 1000 Hierarchical Saliency Detection, IEEE CVPR 2013, Yan et al.
THUR15K 15000 SalientShape: Group Saliency in Image Collections, The Visual Computer 2013, Cheng et al.
JuddDB 900 What is a salient object? A dataset and a baseline model for salient object detection, arXiv ePrint, Borji
DUT-OMRON 5000 Saliency Detection via Graph-Based Manifold Ranking, IEEE CVPR 2013, Yang et al.
SED2 100 Image segmentation by probabilistic bottom-up aggregation and cue integration, IEEE CVPR 2007, Alpert et al.
Information about different methods
Detailed information about each method. Regarding source code type: 'C' means C/C++, 'M' means Matlab, and 'M+C' means a mixture of Matlab and C/C++.
News
- 2015/4/24: evaluation results of QCUT have been added.
- 2015/10/24: evaluation results of MBD have been added.
Hello Prof. Cheng. I noticed that some recent unsupervised methods use the saliency maps of four methods from this project (RBD, DSR, MC, and HS) on the MSRA dataset, but I could not find the RBD maps in the provided network-disk link. Could you please provide them? Thanks.
Prof. Cheng, the demo in this code depends on nine other projects. How should OpenCV be configured in this case? Configuring a single project is no problem for me, but I still cannot handle multiple projects with dependencies. Sorry for such a basic question, and thank you.
You can search for how to configure Visual Studio globally via the "Microsoft.Cpp.x64.user.props" file.
Hello Prof. Cheng, I could not find the implementation code for the HDCT algorithm. Could you point out where it is?
There is a link to that method's code on this page. It is hosted on a Google site, though, so it may not be directly accessible from mainland China.
The dataset download links are no longer valid. Would you mind updating them?
The data is quite large; we suggest downloading it via the Baidu Netdisk link.
When I run the exe file it keeps reporting that opencv_core300.dll is missing, but whether I download OpenCV 3.0.0 RC1 or OpenCV 3.0.0, the extracted package only contains opencv_world300.dll. The 3.0.0 release simply does not ship the core and imgcodecs DLLs.
I am not modifying the source code; I am calling the existing exe from Matlab.
Could I add you on QQ? My number is 804977871. Many thanks.
Prof. Cheng, could you provide a general, version-independent way to set this up, so that later students can stand on the shoulders of giants?
Hello Prof. Cheng. When building your C++ code with VS2013 and OpenCV 2.4.13, even though I have added the library paths and linked the libraries, I keep getting "LINK : fatal error LNK1104: cannot open file '../Lib/CmLib.lib'" and "CmLibd.lib(CmDefinition.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '2' doesn't match value '0' in GetGC.obj". What could be the cause?
My Visual Studio does not compile the project; there is an error saying "Cannot open include file: 'opencv2/opencv.hpp'". Can someone please tell me how to solve this issue?
Thanks,
Please Google the question to see how to install OpenCV properly for Visual Studio
Hello Prof. Cheng! I have the saliency exe files from your GitHub repository on my machine. All the others run fine, but the RC results do not look very good; what could be the reason? Also, when Matlab calls getLC.exe (first argument: image path, second argument: output folder path), the resulting images are completely black. Opening the exe in a text editor, one line says it cannot be run in DOS mode. How should this exe be used? Thanks.
Hello Prof. Xia, could you tell me the steps to set up and run this code? I keep getting errors that I cannot resolve.
Dear Mingming,
I downloaded some of the evaluation results you provided, and I found there are two kinds of saliency-cut results: '_FT.png' and '_SC.png'.
Could you tell me which are the true saliency-cut results?
Yijun
_SC.png files are the saliency-cut results. _FT.png files are segmentation results obtained with the method proposed in the FT paper.
Dear Mingming,
Recently I was trying to download more datasets to test the performance of the saliency methods.
But I can't access the links for the JuddDB and DUT-OMRON datasets.
Is it possible to update them?
Yijun
Hello Prof. Cheng!
Could you provide the saliency maps of the various methods on the ASD-1000 and PASCAL-S (850 images) datasets?
ASD-1000 is a subset of MSRA10K. We have released the evaluation code, so you can also run those programs yourself.
Hello, I downloaded the results you used for evaluation (images, ground truth, saliency maps, masks). Among the cut results there are two kinds of files, _FT.png and _SC.png.
Is _FT.png obtained with the adaptive-thresholding method? Why are the results I get with the 2x-mean threshold different from the ones downloaded from your website?
Hello Prof. Cheng, I have just started learning about salient region detection. For the LC algorithm mentioned in the paper, what does the abbreviation stand for? I read the original reference but could not figure it out. Thanks!
Luminance Contrast
Hello! I am working on salient object detection with deep learning. Could you offer some suggestions or directions?
Hello, the AUC values I compute differ considerably from yours. Which code did you use to compute the AUC scores? Could you send me a copy of the AUC evaluation code? Thanks!
Please use this code (and additionally normalize all saliency maps to [0, 255]): https://github.com/MingMingCheng/CmCode/blob/master/CmLib/Illustration/CmEvaluation.h
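For reference, a minimal illustration of the normalization step mentioned in the reply above, using cv::normalize to rescale a saliency map to [0, 255] before evaluation; this is a sketch only, not part of CmEvaluation.h.

#include <opencv2/opencv.hpp>

// Rescale a saliency map so that its values span the full [0, 255] range,
// as suggested above, before running the evaluation code.
cv::Mat normalizeSaliency(const cv::Mat &salMap) {
    cv::Mat out;
    cv::normalize(salMap, out, 0, 255, cv::NORM_MINMAX, CV_8UC1);
    return out;
}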