
SemanticPaint: Interactive 3D Labeling and Learning at your Fingertips


J. Valentin, V. Vineet, M.M. Cheng, D. Kim, J. Shotton, P. Kohli, M. Nießner, A. Criminisi, S. Izadi, P. Torr

SemanticPaint

Figure. Our system allows users to quickly and interactively label the world around them. The environment is scanned using a consumer RGB-D camera, and in real time a volumetric fusion algorithm reconstructs the scene in 3D along with its color data (left). At any point, the user can reach out and touch objects in the physical world and provide object class labels through voice commands (middle). An inference engine then propagates these user-provided labels through the reconstructed scene in real time. Meanwhile, in the background, a new streaming random forest learns to assign object class labels to voxels in unlabeled regions of the world. Finally, another round of volumetric label propagation produces visually smooth labels over the entire scene (right).
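
To make the online-learning step concrete, here is a minimal, self-contained sketch of the idea of learning from user-touched voxels and then labeling unseen ones. The real system trains a streaming random forest over geometric and color features; the NearestClassMean class below is only a toy stand-in, and all names, features, and class ids are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class NearestClassMean:
    """Toy online classifier: keeps a running mean feature per class and
    predicts the class whose mean is closest. A stand-in for the paper's
    streaming random forest, purely for illustration."""
    def __init__(self):
        self.sums = {}    # class id -> summed feature vector
        self.counts = {}  # class id -> number of labelled voxels seen

    def update(self, feats, labels):
        # Streaming update from newly user-labelled voxels (touch + voice).
        for f, y in zip(feats, labels):
            self.sums[y] = self.sums.get(y, np.zeros_like(f)) + f
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, feats):
        classes = sorted(self.sums)
        means = np.stack([self.sums[c] / self.counts[c] for c in classes])
        dists = np.linalg.norm(feats[:, None, :] - means[None, :, :], axis=2)
        return np.asarray(classes)[dists.argmin(axis=1)]

# Toy usage: two "objects" with separable per-voxel colour features.
rng = np.random.default_rng(0)
table = rng.normal([0.8, 0.5, 0.2], 0.05, size=(50, 3))  # touched: "table"
floor = rng.normal([0.2, 0.2, 0.2], 0.05, size=(50, 3))  # touched: "floor"

model = NearestClassMean()
model.update(table, [0] * 50)  # label 0 = table
model.update(floor, [1] * 50)  # label 1 = floor

unseen = rng.normal([0.78, 0.5, 0.22], 0.05, size=(5, 3))
print(model.predict(unseen))   # expected: mostly 0 ("table")
```

Because the model updates incrementally from each new batch of touched voxels, predictions on unlabeled regions improve continuously during capture, which is what enables the live-feedback loop described above.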

Abstract

We present a new interactive and online approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations, and labels new unseen parts of the environment. Unlike offline systems, where capture, labeling and batch learning often take hours or even days to perform, our approach is fully online. This provides users with continuous live feedback of the recognition during capture, allowing them to immediately correct errors in the segmentation and/or learning – a feature that has so far been unavailable to batch and offline methods. This leads to models that are tailored or personalized specifically to the user’s environments and object classes of interest, opening up the potential for new applications in augmented reality, interior design, and human/robot navigation. It also provides the ability to capture substantial labeled 3D datasets for training large-scale visual recognition systems.

Figure. Overview of the 3D semantic modeling pipeline. See our paper for details.
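
The paper formulates the volumetric label propagation stage as CRF inference over the voxel grid. The sketch below is a heavily simplified mean-field-style update with a Potts smoothness term on a dense toy grid, not the paper's actual solver; the function name propagate_labels and the weight w_smooth are illustrative assumptions.

```python
import numpy as np

def propagate_labels(unary, iters=10, w_smooth=2.0):
    """unary: (X, Y, Z, C) per-voxel class scores (log-space), with strong
    scores injected at user-touched voxels. Returns per-voxel soft labels."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    q = softmax(unary)
    for _ in range(iters):
        # Potts-style message: each voxel collects the current label
        # distributions of its 6-neighbourhood (np.roll wraps at the grid
        # borders; a real implementation would clamp instead).
        msg = np.zeros_like(q)
        for axis in range(3):
            msg += np.roll(q, 1, axis=axis) + np.roll(q, -1, axis=axis)
        # Agreement with neighbours lowers the energy, raising the logit.
        q = softmax(unary + w_smooth * msg / 6.0)
    return q

# Toy usage: an 8x8x8 grid, 3 classes, two confidently labelled seed voxels.
rng = np.random.default_rng(1)
unary = rng.normal(0.0, 0.1, size=(8, 8, 8, 3))
unary[2, 2, 2, 0] += 5.0  # e.g. a voxel the user touched and named
unary[6, 6, 6, 1] += 5.0
labels = propagate_labels(unary).argmax(axis=-1)
```

Iterating this neighborhood averaging is what yields the visually smooth labels over the whole scene shown in the teaser figure: confident seed labels dominate locally and spread outward until the smoothness and unary terms balance.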

Papers

  • SemanticPaint: Interactive 3D Labeling and Learning at your Fingertips. Julien Valentin, Vibhav Vineet, Ming-Ming Cheng, David Kim, Jamie Shotton, Pushmeet Kohli, Matthias Nießner, Antonio Criminisi, Shahram Izadi, Philip Torr, ACM TOG, 2015. [pdf] [project page] [bib] [code]

Related Paper & Source Code

Video

(youku version, download video)

Media Coverage

BBC Report

