Global contrast based salient region detection
Ming-Ming Cheng, Niloy J. Mitra, Xiaolei Huang, Philip H. S. Torr, Shi-Min Hu
Abstract
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object extraction algorithm, which simultaneously evaluates global contrast differences and spatially weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut for high-quality salient object segmentation. We extensively evaluated our algorithm on traditional salient object detection datasets, as well as on a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Even on such noisy Internet images, where salient regions are often ambiguous, our saliency-guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
Papers
- Global Contrast based Salient Region Detection. Ming-Ming Cheng, Niloy J. Mitra, Xiaolei Huang, Philip H. S. Torr, Shi-Min Hu. IEEE TPAMI, 2015. [Pdf] [Poster] [Bib] [CVPR 2011 version] [Chinese version] [Overleaf LaTeX Chinese version] [C++] (#2 most cited paper in CVPR 2011)
Most related projects on this website:
- Efficient Salient Region Detection with Soft Image Abstraction. Ming-Ming Cheng, Jonathan Warrell, Wen-Yan Lin, Shuai Zheng, Vibhav Vineet, Nigel Crook. IEEE International Conference on Computer Vision (IEEE ICCV), 2013. [pdf] [Project page] [bib] [latex] [official version]
- BING: Binarized Normed Gradients for Objectness Estimation at 300fps. Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin, Philip H. S. Torr. IEEE Conference on Computer Vision and Pattern Recognition (IEEE CVPR), 2014. [Project page] [pdf] [bib] (Oral, acceptance rate: 5.75%)
- SalientShape: Group Saliency in Image Collections. Ming-Ming Cheng, Niloy J. Mitra, Xiaolei Huang, Shi-Min Hu. The Visual Computer 30 (4), 443-453, 2014. [pdf] [Project page] [bib] [latex] [Official version]
- Deeply supervised salient object detection with short connections. Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, Philip Torr. IEEE TPAMI (CVPR 2017), 2019. [pdf] [Project Page] [bib] [source code & data] [official version] [poster] [Chinese poster]
Downloads
1. Data
The MSRA10K benchmark dataset (a.k.a. THUS10000) provides per-pixel ground-truth annotations for 10,000 MSRA images (181 MB); each image contains an unambiguous salient object, and the object region is accurately annotated with a pixel-wise ground-truth labeling (13.1 MB). We provide saliency maps (5.3 GB, containing 170,000 images) for our methods as well as for 15 other state-of-the-art methods, including FT [1], AIM [2], MSS [3], SEG [4], SeR [5], SUN [6], SWD [7], IM [8], IT [9], GB [10], SR [11], CA [12], LC [13], AC [14], and CB [15]. Saliency segmentation results (71.3 MB) for FT [1], SEG [4], and CB [15] are also available.
2. Windows executable
We supply a Windows MSI installer for our prototype software, which includes our implementations of FT [1], SR [11], and LC [13], as well as our HC, RC, and SaliencyCut methods.
3. C++ source code
The C++ implementation of our paper, as well as several other state-of-the-art methods.
4. Supplemental material
Supplemental materials (647 MB), including comparisons with 15 other state-of-the-art algorithms, are now available.
We also provide salient object detection results for images with multiple objects, tested on the dataset provided by the CVPR 2007 paper “Image Segmentation by Probabilistic Bottom-Up Aggregation and Cue Integration”.
5. More results for recent methods
If you would like to share your results on our MSRA10K benchmark (to help other researchers compare with recent methods), please contact me via email (see the header image of this project page for the address). I will post your results, along with paper links, on this page.
Comparisons with state of the art methods
Method | Time (s) | Code Type |
---|---|---|
FT | 0.247 | Matlab |
SEG | 7.48 | M&C |
CB | 36.5 | M&C |
Ours | 0.621 | C++ |
Figure: such illustrations can be generated automatically by CmIllustr::Imgs(…). The supplemental material (647 MB) gives full results for the entire MSRA10K dataset.
FAQs
To date, more than 2000 readers (according to email records) have requested the source code for this project, and some have had questions about using it. Here are some frequently asked questions (several of them were frequently raised by reviewers as well) for new users:
Q1: I’m confused by this sentence in the paper: “In our experiments, the threshold is chosen empirically to be the threshold that gives 95% recall rate in our fixed thresholding experiments.” In almost all cases, people do not have the ground truth, so they cannot compute the recall rate. When I use your Cut application, I have to guess a threshold value to get a good segmentation.
A: The recall rate is only used to evaluate the algorithm; when you apply the method, you typically do not need to evaluate the algorithm itself. That sentence explains how the fixed threshold we use was chosen. In practice, when initializing from RC saliency maps, this threshold is 70, with saliency values normalized to [0, 255]. It does not mean that this saliency value corresponds to a 95% recall rate for every single image; rather, it empirically corresponds to a 95% recall rate over a large number of images. So simply using the suggested threshold of 70 is fine, as in the sketch below.
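For concreteness, here is a minimal sketch of applying that fixed threshold with OpenCV. This is not the released tool; it assumes an 8-bit, single-channel saliency map already normalized to [0, 255], and the file names are placeholders:

```cpp
// Minimal sketch, not the released tool: binarize a saliency map at the
// suggested fixed threshold of 70. Assumes an 8-bit, single-channel map
// whose values are normalized to [0, 255]; file names are placeholders.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat sal = cv::imread("saliency_map.png", cv::IMREAD_GRAYSCALE);
    if (sal.empty())
        return -1;                      // could not load the saliency map
    cv::Mat mask;
    cv::threshold(sal, mask, 70, 255, cv::THRESH_BINARY); // fixed threshold of 70
    cv::imwrite("binary_mask.png", mask);
    return 0;
}
```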
Q2: I used your code to get results on the same database you used, but the results seem slightly different from yours.
A: It seems that the cvtColor function in OpenCV 1.x behaves differently from the one in OpenCV 2.x; I suggest using a recent version. Also, the segmentation method I use occasionally generates strange results, which leads to strange saliency maps. This happens infrequently; when it does, re-running the executable usually fixes it. I do not know why, but it really does happen the first time I run the executable after compiling (very strange; perhaps some default initialization). If someone finds the bug, please report it to me.
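As a reference point for such comparisons, below is a minimal sketch of the color-conversion step where the version differences can show up. The contrast measures are computed on Lab values, so even small cvtColor changes can shift the resulting saliency maps slightly; `input.jpg` is a placeholder path:

```cpp
// Minimal sketch of the color-conversion step where OpenCV version
// differences reportedly show up. The saliency computation consumes Lab
// values, so small cvtColor changes can shift results slightly.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bgr = cv::imread("input.jpg");     // OpenCV loads images as BGR
    if (bgr.empty())
        return -1;
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab); // BGR -> Lab conversion
    // 'lab' is what the contrast computation consumes from here on.
    return 0;
}
```

When comparing numbers against the published maps, it is safest to regenerate all results with a single OpenCV version.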
Q3: Does your algorithm only get good results for images with a single salient object?
A: Mostly, yes. As described in our paper, our method is suitable for images containing an unambiguous salient object. Saliency detection methods typically have no prior knowledge about the target object, which makes the problem very difficult. Much recent research focuses on images with a single salient object. Even in this simple setting, state-of-the-art algorithms may fail. This is understandable, since supervised object detection, which uses large amounts of training data and prior knowledge, also fails in many cases.
However, the value of saliency detection methods lies in their applications in many fields. Because they require no large-scale human annotation for learning and are typically much faster than object detection methods, they make it possible to automatically process large numbers of images at low cost. Although many of the resulting saliency detections may be wrong (up to 60% for noisy Internet images) because salient objects are ambiguous or even missing, we can still use efficient algorithms to select the good results and use them in many interesting applications, such as the following. (Note: all of the projects below use our saliency source code, with an initial version of SaliencyCut used in our own Sketch2Photo project. Click here for a list of 2000+ citations to the PAMI 2015 (CVPR 2011) paper.)
- Unsupervised joint object discovery and segmentation in internet images, M. Rubinstein, A. Joulin, J. Kopf, and C. Liu, in IEEE CVPR, 2013, pp. 1939–1946. (Used the proposed saliency measure and showed that saliency-based segmentation produces state-of-the-art results on co-segmentation benchmarks, without using co-segmentation!)
- Image retrieval: Sketch2Photo: Internet Image Montage. Tao Chen, Ming-Ming Cheng, Ping Tan, Ariel Shamir, Shi-Min Hu. ACM SIGGRAPH Asia, 28(5), 124:1-10, 2009.
- SalientShape: Group Saliency in Image Collections. Ming-Ming Cheng, Niloy J. Mitra, Xiaolei Huang, Shi-Min Hu. The Visual Computer, 2013
- PoseShop: Human Image Database Construction and Personalized Content Synthesis. Tao Chen, Ping Tan, Li-Qian Ma, Ming-Ming Cheng, Ariel Shamir, Shi-Min Hu. IEEE TVCG, 19(5), 824-837, 2013.
- Internet visual media processing: a survey with graphics and vision applications. Tao Chen, Ping Tan, Li-Qian Ma, Ming-Ming Cheng, Ariel Shamir, Shi-Min Hu. The Visual Computer, 2013, 1-13.
- Image editing: Semantic Colorization with Internet Images. Yong Sang Chia, Shaojie Zhuo, Raj Kumar Gupta, Yu-Wing Tai, Siu-Yeung Cho, Ping Tan, Stephen Lin. ACM SIGGRAPH Asia, 2011.
- View selection: Web-Image Driven Best Views of 3D Shapes. H. Liu, L. Zhang, H. Huang. The Visual Computer, 2011.
- Image collage: Arcimboldo-like Collage Using Internet Images. H. Huang, L. Zhang, H.-C. Zhang. ACM SIGGRAPH Asia, 30(6), 2011.
- Image manipulation: Data-Driven Object Manipulation in Images. Chen Goldberg, T. Chen, F.-L. Zhang, A. Shamir, S.-M. Hu. Eurographics 2012.
- Saliency For Image Manipulation, R. Margolin, L. Zelnik-Manor, and A. Tal, Computer Graphics International (CGI) 2012.
- Mobile Product Search with Bag of Hash Bits and Boundary Reranking, Junfeng He, Xianglong Liu, Tao Cheng, Jinyuan Feng, Tai-Hsu Lin, Hyunjin Chung and Shih-Fu Chang, IEEE CVPR, 2012.
- Unsupervised Object Discovery via Saliency-Guided Multiple Class Learning, Jun-Yan Zhu, Jiajun Wu, Yichen Wei, Eric Chang, and Zhuowen Tu, IEEE CVPR, 2012.
- Saliency Detection via Divergence Analysis: A Unified Perspective. ICPR 2012 (Best Student Paper). (The authors of this ICPR paper showed that our global saliency formulation has a deep connection with an information-theoretic measure, the so-called Cauchy-Schwarz divergence.)
- Much more: http://scholar.google.com/scholar?cites=9026003219213417480
Q4: I’m confused about the definition of saliency. Why are the annotation formats (isolated points, binary mask regions, and bounding boxes) so different across the benchmarks used to evaluate saliency detection methods?
A: There are three different saliency detection directions: i) fixation prediction, ii) salient object detection, and iii) objectness estimation. They have very different research targets and very different applications. Personally, I am mainly interested in the last two problems and will discuss them in a bit more detail.
Eye fixation models aim at predicting where humans look, i.e., a small set of fixation points. The most famous method in this area is Itti’s work in PAMI 1998 [9]. The MIT benchmark is designed for evaluating such methods.
Salient object detection, as done in this work, aims at finding the most salient object in a scene and segmenting the whole extent of that object. The output is typically a single saliency map (or a figure-ground segmentation). The advantages and disadvantages are described in detail in Q3. High precision is a major focus of our work, since we can use shape-matching-based techniques to effectively select good segmentations and build robust applications on top of them. The most widely used benchmark for this problem is MSRA1000, which precisely segments 1000 salient objects in MSRA images. Our method achieves 93% precision and 90% recall on MSRA1000 (previous best reported results: 75% precision and 83% recall). Since our results on MSRA1000 are mostly comparable to the ground-truth annotations, a more challenging benchmark is needed; MSRA10K and THUR15K were built for this purpose.
Objectness estimation is another attractive direction. These methods aim at proposing a small set (typically around 1000) of bounding boxes to improve the efficiency of the classical sliding-window pipeline. High recall with a small set of bounding-box proposals is the major target. PASCAL VOC is the standard dataset for evaluating this problem. Using purely bottom-up, data-driven methods to produce a single saliency map, as is done in most salient object detection models, is unlikely to succeed on this very challenging dataset. State-of-the-art objectness proposal methods (PAMI 2012, IJCV 2013) achieve 90+% recall on the challenging PASCAL VOC dataset given a relatively small number (e.g., 1000) of bounding boxes, while being computationally efficient (4 seconds per image). This is especially useful for speeding up multi-class object detection, as each classifier only needs to examine a much smaller number of image windows (e.g., 1,000,000 -> 1,000).
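To make the evaluation protocol in the salient object detection setting concrete, here is a minimal sketch (not the benchmark code shipped with this project) of per-image precision and recall for a binary segmentation against a binary ground-truth mask; both are assumed to be 8-bit 0/255 images of equal size, and the file names are placeholders:

```cpp
// Minimal sketch (not this project's benchmark code) of per-image
// precision/recall for a binary segmentation vs. a binary ground-truth
// mask. Assumes both are 8-bit 0/255 images of the same size.
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat seg = cv::imread("segmentation.png", cv::IMREAD_GRAYSCALE);
    cv::Mat gt  = cv::imread("ground_truth.png", cv::IMREAD_GRAYSCALE);
    if (seg.empty() || gt.empty())
        return -1;
    double tp      = cv::countNonZero(seg & gt); // pixels salient in both masks
    double segArea = cv::countNonZero(seg);      // pixels predicted salient
    double gtArea  = cv::countNonZero(gt);       // pixels truly salient
    double precision = segArea > 0 ? tp / segArea : 0.0;
    double recall    = gtArea  > 0 ? tp / gtArea  : 0.0;
    std::printf("precision = %.3f, recall = %.3f\n", precision, recall);
    return 0;
}
```

Per-image scores are then typically averaged over the whole dataset to obtain precision and recall figures like those quoted above.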
Q5: In nearly all of the 300+ papers citing this work, the F-Measure reported for the RC method is significantly lower than the one reported in this paper. Why?
A: Our salient object segmentation involves a powerful SaliencyCut method, for which we have not yet released the source code (it will be released only after the journal version is published). The high performance of our salient object segmentation can easily be verified by running our published binaries. When reporting the F-Measure of our method, most papers use an adaptive threshold to obtain segmentation results, which produces much worse results than our original version. This is somewhat reasonable and makes the comparison easier, since they do not have access to our SaliencyCut code. Note that our method achieves a 92% F-Measure on the MSRA benchmark, and I have not yet seen any other method achieve an F-Measure above 90% (which our CVPR 2011 version achieved). It is worth mentioning that even the latest GrabCut variant only achieves ‘comparable’ performance (F-Measure: 89%) on the same benchmark (see “GrabCut in One Cut”, Meng Tang, Lena Gorelick, Olga Veksler, Yuri Boykov, ICCV 2013).
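For reference, the F-Measure used in this literature is the weighted harmonic mean of precision and recall, with β² = 0.3 (following Achanta et al. [1]) so that precision is weighted more heavily than recall:

```latex
% F-Measure as the weighted harmonic mean of precision and recall;
% beta^2 = 0.3 weights precision more heavily than recall.
F_{\beta} = \frac{(1+\beta^{2}) \cdot \mathrm{Precision} \cdot \mathrm{Recall}}
                 {\beta^{2} \cdot \mathrm{Precision} + \mathrm{Recall}},
\qquad \beta^{2} = 0.3
```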
Q6: The benchmarks you use all have center bias. Will this be a problem?
A: Center bias seems to be a natural bias in real-world images. In the salient object detection community, most methods try to detect the most dominant object rather than dealing with complicated images in which many objects exist with complicated occlusions, etc. Even dealing (only) with these simple (‘Flickr-like’) images is quite useful for many applications (see Q3). Even when trained on thousands of accurately labeled images, state-of-the-art object detection methods still cannot produce robust results on PASCAL VOC-like images. For salient object detection algorithms, robustness can come from automatically selecting good results from thousands of images, for which we get automatic segmentation results for free (no training-data annotation needed). See “SalientShape: Group Saliency in Image Collections” for an unselected, automatically downloaded Flickr image dataset (which also has a clear center bias), as well as the aforementioned applications.
Links to source code of other methods
FT | [1] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, “Frequency-tuned salient region detection,” in IEEE CVPR, 2009, pp. 1597–1604. |
AIM | [2] N. Bruce and J. Tsotsos, “Saliency, attention, and visual search: An information theoretic approach,” Journal of Vision, vol. 9, no. 3, pp. 5:1–24, 2009. |
MSS | [3] R. Achanta and S. Süsstrunk, “Saliency detection using maximum symmetric surround,” in IEEE ICIP, 2010, pp. 2653–2656. |
SEG | [4] E. Rahtu, J. Kannala, M. Salo, and J. Heikkila, “Segmenting salient objects from images and videos,” in ECCV, 2010, pp. 366–379. |
SeR | [5] H. Seo and P. Milanfar, “Static and space-time visual saliency detection by self-resemblance,” Journal of Vision, vol. 9, no. 12, pp. 15:1–27, 2009. |
SUN | [6] L. Zhang, M. Tong, T. Marks, H. Shan, and G. Cottrell, “SUN: A Bayesian framework for saliency using natural statistics,” Journal of Vision, vol. 8, no. 7, pp. 32:1–20, 2008. |
SWD | [7] L. Duan, C. Wu, J. Miao, L. Qing, and Y. Fu, “Visual saliency detection by spatially weighted dissimilarity,” in IEEE CVPR, 2011, pp. 473–480. |
IM | [8] N. Murray, M. Vanrell, X. Otazu, and C. A. Parraga, “Saliency estimation using a non-parametric low-level vision model,” in IEEE CVPR, 2011, pp. 433–440. |
IT | [9] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE TPAMI, vol. 20, no. 11, pp. 1254–1259, 1998. |
GB | [10] J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” in NIPS, 2007, pp. 545–552. |
SR | [11] X. Hou and L. Zhang, “Saliency detection: A spectral residual approach,” in IEEE CVPR, 2007, pp. 1–8. |
CA | [12] S. Goferman, L. Zelnik-Manor, and A. Tal, “Context-aware saliency detection,” in IEEE CVPR, 2010, pp. 2376–2383. |
LC | [13] Y. Zhai and M. Shah, “Visual attention detection in video sequences using spatiotemporal cues,” in ACM Multimedia, 2006, pp. 815–824. |
AC | [14] R. Achanta, F. Estrada, P. Wils, and S. Süsstrunk, “Salient region detection and segmentation,” in IEEE ICVS, 2008, pp. 66–75. |
CB | [15] H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li, “Automatic salient object segmentation based on context and shape prior,” in British Machine Vision Conference, 2011, pp. 1–12. |
LP | [16] T. Judd, K. Ehinger, F. Durand, and A. Torralba, “Learning to predict where humans look,” in IEEE ICCV, 2009. |
Professor, why did you use the segmentation method from the paper “Efficient graph-based image segmentation”? Can I replace it with another segmentation method?
There is no restriction; any over-segmentation method will do. That one was simply the most popular at the time.
Professor Cheng, which part of the code performs the segmentation? I would like to use your segmentation code to segment other saliency maps.
SaliencyCut can be used for segmentation, but it is code from 2011. The mainstream datasets at the time contained a single most-salient object, and this code includes an operation that keeps only the largest connected region, so it may not suit your needs. Our DenseCut method can also be used for segmentation and has no such restriction.
Professor Cheng, where can I download SaliencyCut and DenseCut? Thank you.
On the code download page, under the corresponding papers: https://mmcheng.net/code-data/ Please look for them there.
Professor, in the Baidu network disk provided on this page, which contains saliency maps produced by different algorithms on different datasets, I could not find the saliency maps produced by different algorithms on the ASD1000 or MSRA10K datasets. Is there another page with the ASD1000 saliency maps produced by different algorithms?
This page has them: https://mmcheng.net/salobjbenchmark/
Professor, I have always used MATLAB to debug code and had never used VS. Since your code is all C++, I installed VS today. After importing the code for this paper, I found saliencymain.cpp under the saliency folder, compiled it, and clicked debug; the run always ends with a directory listing shown in the browser. That should not be the normal output, should it? Please advise.
Please work out the differences between MATLAB and C++ on your own.
Hello Professor Cheng. I am a student at Xidian University, and I have been reading your paper “Global contrast based salient region detection” over the past few days. Which part of the code measures precision and recall? Sorry to bother you.
Please look through the code yourself; near the end of the Main function there should be calls that generate the precision, recall, and related results.
Professor Cheng, I ran your saliency detection code. When evaluating the algorithms, the generated .m file gives an error in MATLAB:
it is missing the function xticklabel_rotate([1:5], 90, methodLabels, ‘interpreter’, ‘none’);
Could you give me the implementation of this function?
Did you solve it? I ran into the same problem.
That is just a plotting utility used for figures in the paper; it is easy to find online. It is also included in the code I shared on GitHub long ago: https://github.com/MingMingCheng/CmCode/blob/master/CmLib/Illustration/xticklabel_rotate.m
Hello.
I am trying to run the code in Visual Studio 2015.
I downloaded the code from the provided GitHub page and ran SaliencyMain.cpp in Visual Studio 2015 after changing the wkdir path.
Visual Studio 2015 throws a lot of errors regarding CmLib, suggesting it is old.
If someone has made this work, please explain how in a comment below.
Thank you.
Hello
I am trying to run the VS project. It worked once before for me; however, it is now giving the following error:
>—— Build started: Project: SpRecoTOG14, Configuration: Release x64 ——
1> Moc’ing SpRecoUI.h…
1> The system cannot find the path specified.
1> Uic’ing SpRecoUI.ui…
1> The system cannot find the path specified.
1> Rcc’ing SpRecoUI.qrc…
1> The system cannot find the path specified.
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(170,5): error MSB6006: “cmd.exe” exited with code 3.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
While the Saliency project builds fine, I get the following error when I run the entire solution:
Unable to start program ‘path\CmLib.lib’
Please suggest.
Thanks
Regards
Did you use Visual Studio 2015 or another version?
I solved this by creating a folder named Lib and putting CmLib.lib in it. But there is another problem: error 0x000007b and 0x0000005.
Anyone know how to run it?
Where exactly did you put it?
Hello Professor Cheng. I configured Qt, but compilation still hits this problem:
Moc’ing SpRecoUI.h…
1> The system cannot find the path specified.
1> Uic’ing SpRecoUI.ui…
1> The system cannot find the path specified.
1> Rcc’ing SpRecoUI.qrc…
1> The system cannot find the path specified.
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.CppCommon.targets(172,5): error MSB6006: “cmd.exe” exited with code 3.
I do not know why. Looking forward to your reply, thank you!
Hello! I ran into the same problem as you. Have you solved it? Could you give me some pointers? Thanks!
Hello Professor Cheng. When compiling SpRecoTOG14, it always reports:
Moc’ing SpRecoUI.h…
1> The system cannot find the path specified.
1> Uic’ing SpRecoUI.ui…
1> The system cannot find the path specified.
1> Rcc’ing SpRecoUI.qrc…
1> The system cannot find the path specified.
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.CppCommon.targets(172,5): error MSB6006: “cmd.exe” exited with code 3.
The system says these files have been renamed, deleted, or moved. What is going on? Thanks; looking forward to your reply.
I ran into the same problem. Have you solved it yet?
Hi, I have also been studying this paper recently and there is a lot I do not understand. I hit the same compilation problem as you. Have you solved it?
This program uses Qt for its UI; you have not installed and configured Qt.
Same problem here.
Professor, what does this Qt interface look like, and how should Qt be configured?
Qt is not needed for the saliency code. That Visual Studio solution integrates code from many papers, and some of the other papers’ functionality requires it.
Hello Professor Cheng. When configuring the Saliency project, compilation succeeds, but it reports: Unable to start program “………../Debug/CmLib.lib”. What could be the reason?
The cpp files in the Saliency folder of the CmLib project report that the StdAfx.h header file cannot be opened.
Hello Professor, have you solved the problem of CmLib.lib not being found?
When running, it always says it cannot open the file “CmLibd.lib”. How can this be solved?
Hello, at runtime it cannot open CmLibed.lib. How do you solve this? Also, which version of Qt did you use? Thanks!
Hello Professor Cheng. I built the saliency program successfully, but it fails at runtime with: “Unable to start program E:\saliency_cheng\CmCode-master\Release\CmLib.lib”. How do I solve this? I ran all the projects, and the CmLib.lib output directory has been added to the Saliency project’s additional library directories.
Professor Cheng, after I compile the Saliency program, how do I use it?
After entering Saliency in cmd I got the following output: F:\检测\CmCode-master\x64\Debug>Saliency
“Precision = -1.#IND, recall = -1.#IND, F-Measure = -1.#IND, intUnion = -1.#IND, mae = -1.#IND”
Hello, I ran into the same problem. Have you solved it?
I also ran into this problem. Have you solved it?
Precision = -1.#IND, recall = -1.#IND, F-Measure = -1.#IND, intUnion = -1.#IND, mae = -1.#IND
Also, regarding the other downloads mentioned above: labeling (13.1 MB), saliency maps (5.3 GB), and segmentation (71.3 MB). The links are all on the “code and data” page, but that page only lists four datasets: aNYU, THUR15K (787 MB), annotations (VOC 2007), and MSRA10K. Which of these corresponds to which?
Hello Professor Cheng. I want to try your Global contrast based salient region detection method, but the link for the 10,000 MSRA images mentioned above only contains the following two packages, with no 10,000-image set. Did I look in the wrong place?
Image set A: 20,000 images labeled by three users (images, labeled rectangles, and readme files).
Image set B: 5,000 images labeled by nine users (images, labeled rectangles, and readme files).
Hello Professor Cheng. Among the data listed above, I could not find the saliency maps (5.3 GB, containing 170,000 images) on the “code and data” page. Are they no longer available for download, or is there some other way to get them? Thanks! Looking forward to your reply.
Thanks for the reminder; there was a problem with the link, which has now been updated.
Professor Cheng, which generated file is used to plot the precision-recall curves?
In the plotted precision-recall figure there are only curves for GC and RC. The saliency maps for FT, HC, SR, etc. were also generated; how do I bring up their curves?
Hello Professor Cheng, I downloaded your code archive but do not know the unzip password.
Hello Professor Cheng.
When building, I keep getting the following error, and none of the fixes I found online worked:
C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.CppCommon.targets(134,5): error MSB3073: the command “xcopy /y D:\VS2012\Self_write\ChengMM_Saliency Detection\x64\Debug\CmLibd.lib ..\..\Lib\
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.CppCommon.targets(134,5): error MSB3073: :VCEnd” exited with code 4.
I am using VS2012 + OpenCV 2.4.9. The solution contains 5 projects; when building, 4 succeed and 1 fails.
By default that step copies the lib file into a common folder. It is just a simple configuration command and does not support spaces in folder or file names (your path “ChengMM_Saliency Detection” contains a space). Could you check that?
Professor, could you post that configuration command directly? Then people with the same problem could find it in earlier comments instead of asking you the same question repeatedly.