Abstract
In mask-based lensless imaging, iterative reconstruction methods based on the geometric-optics model produce artifacts and are computationally expensive. We present a prototype lensless camera that uses a deep neural network (DNN) for rapid image reconstruction in Fresnel zone aperture (FZA) imaging. A deep back-projection network (DBPN) is connected behind a U-Net, providing an error-feedback mechanism that enables self-correction of features to recover image detail. The training data are generated by a diffraction model under broadband incoherent illumination. In the reconstructed results, blur caused by diffraction is ameliorated, while the computation is two orders of magnitude faster than that of traditional iterative image reconstruction algorithms. This strategy could drastically reduce the design and assembly costs of cameras, paving the way for the integration of portable sensors and systems.
© 2020 Optical Society of America
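As background to the abstract, the transmittance of a Fresnel zone aperture is commonly modeled as t(r) = (1 + cos(β r²)) / 2, often binarized into alternating transparent and opaque zones. The sketch below generates such a binary FZA pattern with NumPy; the mask size and the parameter β are illustrative values chosen for this example, not the parameters used in the paper.

```python
import numpy as np

def fza_mask(size=256, beta=1e-3):
    """Generate a binary Fresnel zone aperture (FZA) pattern.

    Follows the standard FZA transmittance t(r) = (1 + cos(beta * r^2)) / 2,
    binarized into open (1) and opaque (0) zones. `size` (pixels) and
    `beta` (zone-density parameter) are illustrative, not the paper's values.
    """
    # Pixel coordinates centered on the optical axis
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    r2 = x.astype(float) ** 2 + y.astype(float) ** 2
    t = 0.5 * (1.0 + np.cos(beta * r2))   # continuous transmittance
    return (t >= 0.5).astype(float)       # binarize to open/opaque zones

mask = fza_mask()
```

Under geometric optics, a point source casts a shifted, scaled shadow of this pattern on the sensor; the abstract's point is that at visible wavelengths diffraction blurs this shadow, which motivates training the network on data from a diffraction model rather than the geometric one.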