ImageSpirit: Verbal Guided Image Parsing

Ming-Ming Cheng1  Shuai Zheng1  Wen-Yan Lin3   Vibhav Vineet3  Paul Sturgess3  Nigel Crook3  Niloy Mitra2  Philip Torr1

1The University of Oxford     2University College London     3Oxford Brookes University

Fig. 1. Given a source image downloaded from the Internet, our system generates multiple weak object/attribute cues (a). Using a novel multi-label CRF, we generate a per-pixel object and attribute labeling (b). Based on this output, additional verbal guidance (‘Refine the cotton bed in center-middle’, ‘Refine the white bed in center-middle’, ‘Refine the glass picture’, ‘Correct the wooden white cabinet in top-right to window’) allows re-weighting of CRF terms to generate, at interactive rates, a high-quality scene parsing result (c). Best viewed in color.


Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images and their typical representation is the goal of image parsing, which involves assigning object and attribute labels to each pixel. In this paper we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that could be used with a new generation of devices (e.g., smartphones, Google Glass, living-room devices). We demonstrate our system on a large number of real-world images of varying complexity. To help understand the trade-offs compared to traditional mouse-based interactions, we report results for both a large-scale quantitative evaluation and a user study.
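To give a flavor of what "jointly estimating per-pixel object and attribute labels" means, the toy sketch below minimizes a miniature joint energy over a 1-D "image" of four pixels: each pixel gets one object (noun) label and one attribute (adjective) label, and the energy combines per-pixel unary costs, an object-attribute compatibility term, and a Potts smoothness term on neighboring object labels. All labels, costs, and the brute-force minimizer here are hypothetical illustrations; the paper's actual multi-label CRF supports sets of attributes per pixel and uses far more efficient inference.

```python
import itertools

# Hypothetical label sets and costs, purely for illustration.
OBJECTS = ["bed", "cabinet"]          # noun labels
ATTRIBUTES = ["wooden", "cotton"]     # adjective labels

# Hypothetical per-pixel unary costs (lower = more likely),
# e.g. derived from classifier scores.
object_unary = [
    {"bed": 0.2, "cabinet": 0.9},
    {"bed": 0.3, "cabinet": 0.8},
    {"bed": 0.7, "cabinet": 0.4},
    {"bed": 0.8, "cabinet": 0.1},
]
attribute_unary = [
    {"wooden": 0.8, "cotton": 0.2},
    {"wooden": 0.7, "cotton": 0.3},
    {"wooden": 0.2, "cotton": 0.9},
    {"wooden": 0.3, "cotton": 0.8},
]

# Hypothetical object-attribute compatibility: cost of pairing labels
# (cotton beds and wooden cabinets are cheap, the reverse is penalized).
compat = {
    ("bed", "cotton"): 0.0, ("bed", "wooden"): 0.5,
    ("cabinet", "wooden"): 0.0, ("cabinet", "cotton"): 0.7,
}

SMOOTH = 0.4  # Potts penalty when neighboring pixels take different objects


def energy(objs, attrs):
    """Sum unary, object-attribute, and pairwise smoothness terms."""
    e = sum(object_unary[i][o] + attribute_unary[i][a] + compat[(o, a)]
            for i, (o, a) in enumerate(zip(objs, attrs)))
    e += sum(SMOOTH for p, q in zip(objs, objs[1:]) if p != q)
    return e


def parse():
    """Exhaustive search over joint labelings (feasible only at toy scale)."""
    return min(
        ((o, a) for o in itertools.product(OBJECTS, repeat=4)
                for a in itertools.product(ATTRIBUTES, repeat=4)),
        key=lambda oa: energy(*oa))
```

The joint minimum couples the two label types: the attribute evidence ("cotton" on the left, "wooden" on the right) reinforces the object split between bed and cabinet, which is exactly the kind of mutual support a joint formulation exploits and an independent per-task labeling would miss.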


  • ImageSpirit: Verbal Guided Image Parsing. Ming-Ming Cheng, Shuai Zheng, Wen-Yan Lin, Vibhav Vineet, Paul Sturgess, Nigel Crook, Niloy Mitra, Philip Torr, ACM Transactions on Graphics, 2014.  [Project page] [Bib] [Pdf] [Supplemental]

Video (download)


1. C++ source code

Download here.

2. Supplemental material

The supplemental material includes comparisons between our automatic joint estimation and state-of-the-art alternative methods on the entire test set (725 images from the NYU dataset). We also show verbal-guided parsing results for NYU images as well as Google images.

Links to closely related work:
