Abstract
Light detection and ranging (LIDAR) systems form depth maps by combining time-of-flight (TOF) measurements with raster scanning of the scene; TOF cameras instead make TOF measurements in parallel using an array of sensors. Here we present a framework for depth map acquisition that uses neither raster scanning by the illumination source nor an array of sensors. Our architecture uses a spatial light modulator (SLM) to spatially pattern a temporally-modulated light source. Measurements from a single omnidirectional sensor then provide adequate information for depth map estimation at a resolution equal to that of the SLM. Proof-of-concept experiments have verified the validity of our modeling and algorithms.
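As a rough illustration of the acquisition model described above (this is a hypothetical sketch, not the authors' algorithm): each SLM pattern multiplexes the scene onto the single sensor, so every measurement is an inner product between one pattern and the scene's per-pixel response, giving a compressive linear model y = A x. All sizes and the sparse test scene below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64   # assumed SLM resolution (8x8 pixels, flattened)
n_meas = 32     # fewer patterns than pixels: a compressive acquisition

# Hypothetical scene response: sparse per-pixel reflectivity for illustration.
x = np.zeros(n_pixels)
x[[5, 20, 41]] = [1.0, 0.5, 0.8]

# Each row is one binary SLM pattern; the single omnidirectional sensor
# records one scalar per pattern, i.e. y = A @ x.
A = rng.integers(0, 2, size=(n_meas, n_pixels)).astype(float)
y = A @ x

print(y.shape)  # one scalar measurement per SLM pattern: (32,)
```

Recovering the depth map from such underdetermined measurements would in practice use a sparsity-exploiting reconstruction; the point of the sketch is only the measurement model, in which a single sensor plus patterned illumination stands in for a full sensor array.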
© 2012 Optical Society of America