Abstract
Three-dimensional (3D) vision plays an important role in industrial inspection, where occlusion and reflection make it challenging to reconstruct an entire application scene. In this paper, we present a novel 3D reconstruction framework that addresses occlusion and reflection in complex scenes. A dual monocular structured light system is adopted to obtain point clouds from different viewing angles and fill in the missing points. To enhance the efficiency of point cloud fusion, we create a decision map that avoids reconstructing the overlapping regions of the left and right systems twice. Additionally, a compensation method based on the decision map is proposed to reduce the reconstruction error of the dual monocular system in the fusion area. Gray-code and phase-shifting patterns are utilized to encode the scene, while the phase-jumping problem at the phase boundaries is avoided by designing a dedicated compensation function. Experiments including accuracy evaluation, comparison with a traditional fusion algorithm, and reconstruction of real complex scenes validate the method's accuracy and its robustness in reconstructing shiny surfaces and occluded regions.
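The abstract refers to phase-shifting fringe patterns combined with Gray-code unwrapping. The paper's specific boundary-compensation function is not reproduced here, but as background, the standard N-step phase-shifting step that such systems build on can be sketched as follows (a minimal illustration; the function name `wrapped_phase` and the synthetic fringe parameters are ours, not from the paper):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted fringe images.

    Each frame is modeled as I_n = A + B*cos(phi + 2*pi*n/N).
    Returns the phase wrapped to (-pi, pi]; Gray-code decoding would
    then supply the fringe order needed to unwrap it.
    """
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    # Contract the shift axis: numerator ∝ sin(phi), denominator ∝ cos(phi)
    num = -np.tensordot(np.sin(deltas), images, axes=1)
    den = np.tensordot(np.cos(deltas), images, axes=1)
    return np.arctan2(num, den)

# Toy check: synthesize a 4-step fringe set and recover the phase.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 256)
frames = [128 + 100 * np.cos(phi_true + 2 * np.pi * k / 4) for k in range(4)]
phi_est = wrapped_phase(frames)
print(np.allclose(phi_est, phi_true))  # → True
```

The phase-jumping problem the abstract mentions arises at the boundaries where this wrapped phase meets the Gray-code fringe order; a misaligned order produces 2π spikes, which the authors' compensation function is designed to suppress.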
© 2020 Optical Society of America