MSRA10K Salient Object Database
Ground truth examples: (first row) original images with ground-truth rectangles from the MSRA dataset; (second row) our ground truth, which marks important regions more precisely, at pixel-level accuracy.
The MSRA Salient Object Database, which originally provides salient object annotations as bounding boxes marked by 3-9 users, is widely used in the salient object detection and segmentation community. Although it is an invaluable resource for evaluating saliency detection algorithms, the marked bounding boxes are often too coarse for fine-grained evaluation, as observed by Wang and Li [ICASSP 2008] and Achanta et al. [CVPR 2009]. To enable more extensive and accurate evaluation, we randomly selected 10,000 images with consistent bounding box labeling from the MSRA database. We call this dataset MSRA10K because it provides pixel-level saliency labeling for 10,000 (10K) images from the MSRA dataset. In our experiments, we find that saliency detection methods using pixel-level contrast (FT, HC, LC, MSS) do not scale well on this larger benchmark (see Fig. 11(a)), suggesting the importance of region-level analysis.
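Evaluation on such a benchmark compares each method's saliency map against the pixel-accurate binary mask. The following is only a minimal sketch of the standard pixel-level precision/recall/F-measure computation (the `f_measure` helper, the fixed 0.5 threshold, and the array conventions are illustrative assumptions, not part of the dataset release); β² = 0.3 is the weighting conventionally used in the salient object detection literature:

```python
import numpy as np

def f_measure(saliency, gt_mask, beta2=0.3, threshold=0.5):
    """Pixel-level precision, recall, and F-measure of a saliency map
    (floats in [0, 1]) against a binary ground-truth mask.

    beta2 = 0.3 weights precision higher than recall, the convention
    used in the salient object detection literature."""
    pred = saliency >= threshold            # binarize the saliency map
    gt = gt_mask > 0                        # mask may be {0, 255}
    tp = np.logical_and(pred, gt).sum()     # correctly detected pixels
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall)
    return precision, recall, f
```

In practice, evaluation scripts sweep the threshold from 0 to 255 to trace a full precision-recall curve rather than using a single fixed cut.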
Downloads
- MSRA10K (formerly named THUS10000; 195MB: images + binary masks): pixel-accurate salient object labeling for 10,000 images from the MSRA dataset. Please cite our paper [BIB] if you use it. Saliency maps and salient object region segmentations for 20+ alternative methods are also available (Baidu Netdisk).
- MSRA-B (111MB: images + binary masks): pixel-accurate salient object labeling for 5,000 images from the MSRA-B dataset. Please cite the corresponding paper [bib] if you use it.
Suggested references
- Deeply supervised salient object detection with short connections, Q Hou, MM Cheng, X Hu, A Borji, Z Tu, P Torr, IEEE TPAMI, 2018. [pdf] [Project Page] [bib] [source code] [official version]
- Global Contrast based Salient Region Detection. Ming-Ming Cheng, Niloy J. Mitra, Xiaolei Huang, Philip H. S. Torr, Shi-Min Hu. IEEE TPAMI, 2015. [pdf] [Project page] [Bib]
- Learning to Detect A Salient Object. Tie Liu, Jian Sun, Nan-Ning Zheng, Xiaoou Tang and Heung-Yeung Shum. IEEE CVPR, 2007.
- R. Achanta, S. Hemami, F. Estrada and S. Süsstrunk, Frequency-tuned Salient Region Detection, IEEE CVPR, 2009.
Where can I get the MSRA10K dataset?
Hello Professor, I was wondering how these datasets were created. I understand they came from creating pixel-wise annotations of the original MSRA dataset, but I was wondering about more details. What instructions were annotators given, and what tool did they use to create the saliency masks? I was also wondering how the original MSRA dataset was created. Do you know, or do you know whom I could contact to find out? The link you have provided only takes me to Microsoft’s ‘people’ page.
For the MSRA dataset, you might want to read the details in their PAMI 2011 paper: Learning to Detect a Salient Object. The Microsoft webpage is gone, as the corresponding authors no longer work at Microsoft. You may try to search for and contact the first author of that paper.
For our dataset, we follow the bounding boxes of the MSRA dataset, which give us a unique target. We then manually annotate every pixel of that object, using image painting tools with the original image overlaid with transparency.
Professor, do you have a download link for the saliency maps produced by the 20+ algorithms on the MSRA1000 dataset? Please help!
MSRA1000 is a subset of the MSRA-10K dataset; you can simply pick out the corresponding results by file name. However, for a new paper submission, results on MSRA1000 alone are not very convincing and are easy for reviewers to criticize.
Professor, do you have a download link for the saliency maps produced by the 20+ algorithms on the MSRA-10K dataset? The saliency maps for MSRA-10K under the various algorithms are not on your Baidu Netdisk.
The results are there, in the Share/SalObjRes directory. Sorry, this dataset was previously named THUS10000. The original images were collected by MSRA, and the binary masks were annotated at THU.
Professor, I found it. Thank you, and best wishes for your health and happiness.
Hello Professor, this is my first time working in this research area.
When using the database, should the entire database be analyzed,
or should a few images be randomly selected from the database for testing?
Please refer to the related papers. You definitely need to analyze the entire database; testing on a few randomly chosen images cannot be compared against other methods.
OK, thank you for your reply, Professor.
Hello Professor Cheng. May we freely download all the image files you provide here for research purposes?
Of course. In addition, more datasets are available at: http://mmcheng.net/sodl . To use a dataset, you only need to cite its corresponding paper when publishing.
Thank you for your reply, Professor Cheng.
If we develop some projects with these data and later have a chance to commercialize the results, are there any special restrictions?
There should be no restrictions on the data we annotated. The original images were collected from the web by MSRA; I am not sure whether there are any restrictions on those.
OK, understood. Thank you very much for your reply, Professor Cheng!
Hello Professor. Regarding the paper Deeply supervised salient object detection with short connections, I would like to know which salient objects the network mainly attends to (e.g., people, cats, dogs, or is saliency obtained through pixel-level algorithms?).
For specific semantic categories, you may look into object detection or semantic segmentation algorithms.
Hello Professor Cheng,
The page says: “Saliency maps and salient object region segmentation for other 20+ alternative methods are also available (5.5GB).”
However, the link to this 5.5GB file has expired and the file cannot be downloaded.
Could you please share the latest link? Many thanks. My email is im_dongning@163.com
It is on Baidu Netdisk.
I need the ASD, SED1 and SED2 datasets. Please mail me if you have them.
I want Achanta’s dataset of 1000 images… please mail me.
I want to know whether you have found the 1000 images. I also need them. Please email me… thank you.
I also want to know whether you have found the 1000 images. I also need them. Please email me… thank you.
I’ve downloaded the package that contains the results of 17 methods on MSRA10K. I found that, besides the aforementioned results, the package includes another folder named “Saliency_Cuts”, which contains binary-level results of some methods. I’m wondering if you could tell me how the gray-level saliency maps were converted to binary ones. Is there a specific method for doing so?
Thanks in advance.
I would suggest reading https://mmcheng.net/salobjbenchmark/ and https://mmcheng.net/salobj/ for an answer to your question and the related source code.
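For reference, one common binarization scheme in the literature is the image-adaptive threshold of Achanta et al. (CVPR 2009): a pixel is marked salient if its saliency exceeds twice the image's mean saliency. This is only a sketch of that scheme, not necessarily how the Saliency_Cuts results on the benchmark pages were produced:

```python
import numpy as np

def adaptive_binarize(saliency_map):
    """Binarize a gray-level saliency map with the adaptive threshold
    of Achanta et al. (CVPR 2009): twice the mean saliency value.
    Returns a uint8 mask with values in {0, 255}."""
    s = saliency_map.astype(np.float64)
    threshold = 2.0 * s.mean()              # image-adaptive cut-off
    return (s >= threshold).astype(np.uint8) * 255
```

More elaborate pipelines (e.g., the SaliencyCut step described on the project pages above) refine such an initial mask with iterative GrabCut-style segmentation rather than stopping at a single threshold.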