Simultaneous Estimation of Object Region and Depth in Participating Media Using a ToF Camera

Yuki FUJIMURA, Motoharu SONOGASHIRA, Masaaki IIYAMA

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E103-D, No.3, pp.660-673
Publication Date: 2020/03/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2019EDP7219
Type of Manuscript: PAPER
Category: Image Recognition, Computer Vision
Keyword: 
time-of-flight, depth estimation, participating media, light scattering, iteratively reweighted least squares

Summary: 
Three-dimensional (3D) reconstruction and scene depth estimation from two-dimensional (2D) images are major tasks in computer vision. However, conventional 3D reconstruction techniques become challenging in participating media such as murky water, fog, or smoke. We have developed a method that uses a continuous-wave time-of-flight (ToF) camera to simultaneously estimate an object region and depth in participating media. The scattered light observed by the camera is saturated, so it does not depend on the scene depth. In addition, received signals bouncing off distant points are negligible due to light attenuation, so the observation of such a point contains only a scattering component. These phenomena enable us to estimate the scattering component in an object region from a background that contains only the scattering component. We formulate the problem as robust estimation in which the object region is regarded as outliers; this enables the simultaneous estimation of the object region and depth on the basis of an iteratively reweighted least squares (IRLS) optimization scheme. We demonstrate the effectiveness of the proposed method on images captured with a ToF camera in real foggy scenes and evaluate its applicability with synthesized data.
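To illustrate the optimization scheme mentioned above, the following is a minimal Python sketch of IRLS-based robust estimation. It is not the paper's implementation: the constant scattering model, the Huber-type weight function, and the convergence criterion are illustrative assumptions. The idea it demonstrates matches the abstract's formulation, namely that object-region pixels act as outliers and receive low weights, while background pixels dominate the fit of the scattering component.

```python
import numpy as np

def irls_scattering_estimate(observations, n_iters=20, c=1.345, eps=1e-6):
    """Hedged sketch: robustly estimate a constant scattering component from
    per-pixel observations, treating object-region pixels as outliers.

    observations : 1D array of per-pixel measurements; background pixels are
                   assumed to equal the (saturated) scattering value plus noise.
    Returns the estimated scattering value and final per-pixel weights;
    low weights flag likely object-region (outlier) pixels.
    """
    x = observations.astype(float)
    s = np.median(x)                                 # robust initial estimate
    w = np.ones_like(x)                              # uniform starting weights
    for _ in range(n_iters):
        r = x - s                                    # residuals w.r.t. current estimate
        scale = np.median(np.abs(r)) / 0.6745 + eps  # robust scale via MAD
        u = np.abs(r) / (c * scale)
        w = np.where(u < 1.0, 1.0, 1.0 / u)          # Huber-type downweighting
        s_new = np.sum(w * x) / np.sum(w)            # weighted least-squares update
        if abs(s_new - s) < eps:
            break
        s = s_new
    return s, w

# Usage on synthetic data: background pixels cluster around the scattering
# level (~0.8); object pixels deviate and are down-weighted as outliers.
rng = np.random.default_rng(0)
background = 0.8 + 0.01 * rng.standard_normal(900)
objects = 0.3 + 0.05 * rng.standard_normal(100)
scatter, weights = irls_scattering_estimate(np.concatenate([background, objects]))
print(scatter)                  # close to 0.8
print((weights < 0.5).mean())   # fraction of pixels flagged as strong outliers
```

In the full method, the recovered weights would correspond to an object-region estimate and the inlier fit to the scattering component, which is then removed before depth estimation; the sketch above only shows the generic IRLS mechanism.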