
Method of depth simulation imaging and depth image super-resolution reconstruction for a 2D/3D compatible CMOS image sensor


Abstract

This paper presents a depth simulation imaging and depth image super-resolution (SR) method for two-dimensional/three-dimensional compatible CMOS image sensors. A depth perception model is established to analyze the effects of the depth imaging parameters and to evaluate the real imaging performance; its validity is verified through depth error analysis, imaging simulation, and auxiliary physical verification. Using the depth simulation images, we then propose a depth SR reconstruction algorithm that recovers high-resolution depth maps from low-resolution depth maps on two types of datasets. While preserving the best depth accuracy, the root mean square errors (RMSE) of the Middlebury dataset images are 0.0156, 0.0179, and 0.0183 m, and those of the RGB-D dataset images are 0.0223 and 0.0229 m. Compared with the other conventional algorithms listed, our algorithm reduces the RMSE by more than 16.35%, 17.19%, and 23.90% on the Middlebury dataset images and by more than 9.71% and 8.76% on the RGB-D dataset images, achieving the best recovery quality among the compared methods.
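
The RMSE values above are reported in meters on reconstructed depth maps. The following is a minimal sketch of that evaluation metric only, not of the authors' simulation model or SR algorithm: the function names (rmse_depth, upsample_nearest), the synthetic depth map, and the nearest-neighbor upsampling baseline are illustrative assumptions standing in for a real reconstruction and a real dataset image.

import numpy as np

def rmse_depth(pred: np.ndarray, gt: np.ndarray) -> float:
    """Root mean square error between two depth maps, in meters."""
    valid = gt > 0                        # skip pixels with no ground-truth depth
    diff = pred[valid] - gt[valid]
    return float(np.sqrt(np.mean(diff ** 2)))

def upsample_nearest(lr_depth: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbor upsampling, used here only as a crude baseline."""
    return np.repeat(np.repeat(lr_depth, scale, axis=0), scale, axis=1)

# Synthetic stand-in for a ground-truth depth map (e.g., a Middlebury or NYU RGB-D scene)
rng = np.random.default_rng(0)
hr_gt = rng.uniform(0.5, 5.0, size=(64, 64))   # depths in meters
lr = hr_gt[::4, ::4]                           # simulated 4x lower-resolution capture
baseline = upsample_nearest(lr, 4)             # reconstruction to be scored
print(f"baseline RMSE: {rmse_depth(baseline, hr_gt):.4f} m")

A learned or guided SR method would replace upsample_nearest with its own reconstruction; the RMSE comparison against the high-resolution ground truth stays the same.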

© 2023 Optica Publishing Group

More Like This
Method for power reduction of demodulation driver circuit in indirect time-of-flight CMOS image sensor

Kaiming Nie, Guan Tian, Quanmin Chen, Zeqing Wang, Jiangtao Xu, and Zhiyuan Gao
Appl. Opt. 60(34) 10649-10659 (2021)

3D reconstruction of light-field images based on spatiotemporal correlation super-resolution

Wei Feng, Junhui Gao, Jichen Sun, and Henghui Wang
Appl. Opt. 62(12) 3016-3027 (2023)

Snapshot super-resolution indirect time-of-flight camera using a grating-based subpixel encoder and depth-regularizing compressive reconstruction

Hodaka Kawachi, Tomoya Nakamura, Kazuya Iwata, Yasushi Makihara, and Yasushi Yagi
Opt. Continuum 2(6) 1368-1383 (2023)

Data availability

The public Middlebury dataset and NYU RGB-D dataset used in this paper are available in Refs. [24] and [25]. Data related to the proposed depth simulation model and SR algorithm presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

24. H. Hirschmuller and D. Scharstein, “Evaluation of cost functions for stereo matching,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2007), pp. 21–28.

25. N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor Segmentation and Support Inference from RGBD Images,” in European Conference on Computer Vision (ECCV) (2012), pp. 746–760.


