Optica Publishing Group

Incoherent imaging through highly nonstatic and optically thick turbid media based on neural network

Open Access

Abstract

Imaging through nonstatic scattering media is one of the major challenges in optics, encountered when imaging through dense fog, turbid water, and in many other situations. Here, we propose a method to achieve single-shot incoherent imaging through highly nonstatic and optically thick turbid media by using an end-to-end deep neural network. In this study, we use fat emulsion suspensions in a glass tank as the turbid medium and an additional incoherent light source to introduce strong interference noise. We calibrate the optical thickness of the tank of turbid medium to be as high as 16, and the signal-to-interference ratio to be as low as −17 dB. Experimental results show that the proposed learning-based approach can reconstruct the object image with high fidelity in this severe environment.

© 2021 Chinese Laser Press




Figures (11)

Fig. 1. Incoherent scattering imaging experimental system. (1) and (2) are the captured scattered patterns (the raw data and the corresponding partially contrast-stretched maps) at optical thicknesses of 8 and 16, respectively. Note that these data were recorded in two sets of experiments: (1) data captured directly by the camera; (2) data captured with two additional apertures placed in front of the camera. KLS, Köhler lighting system; P, polarizer; ambient light, generated by a high-power LED through a diffuse slate (the distance between the slate and the side of the tank was around 3.5 cm); camera, working with an imaging lens (f = 250 mm, not shown in the figure). d1 = 41 cm, d2 = 15 cm. The 33.6-cm-thick tank is filled with diluted fat emulsion to emulate a dynamic scattering medium. Note that the scattered patterns shown in (2) look dim because a significant part of the large-angle scattered light has been blocked.
Fig. 2. (a) Optical thickness of intralipid suspensions with respect to density. (b)–(j) Speckle patterns corresponding to different densities. Scale bar: 200 μm.
Fig. 3. Multiple scattering trajectories in a dynamic medium. In this illustration, scatterers move from the black circles to the blue circles during the time interval Δτ, and r_i (i = 1, …, n, …, N) represents the location where the i-th scattering event occurs.
Fig. 4. Experimental setup. (a) Dual-camera acquisition system. (b) Photograph of the intralipid dilution: 11.47 L of purified water (33.6 cm × 19.5 cm × 17.5 cm tank) and 2 mL of 20% intralipid.
Fig. 5. Decorrelation curves for different concentrations of intralipid dilutions. The data points and the error bars represent the mean value and the standard error of the correlation coefficient calculated from 10 image pairs. The solid lines in different colors are the fitting results, and the corresponding intralipid volume V_I and optical thickness (OT) are shown in the legend. Here, the coefficient of determination (R-square) is used to describe the goodness of fit. Note that the horizontal axis is on a logarithmic scale.
Fig. 6. Experimental results. (a) Ground truths, and the reconstructed images for optical thicknesses of (b) 8 and (c) 16, respectively.
Fig. 7. Robustness against position changes of the object/camera. Δd is the displacement of the object/camera (in pixels). The data points and the error bars represent the mean values and the standard deviations of the SSIM/RMSE over 10 reconstructed images (digits ‘0’–‘9’).
Fig. 8. Robustness against scaling and rotation of the object/camera. β is the scaling factor of the image size, Δθ the rotation angle, and Cg the image contrast gradient. The data points and the error bars in (a)–(d) represent the mean values and the standard deviations of the SSIM/RMSE over 10 reconstructed images (digits ‘0’–‘9’). (e), (f) SSIM/RMSE of digit ‘5’ with respect to Δθ and Cg. (g) Visualization of the reconstructed digits.
Fig. 9. Reconstruction of non-digit objects with the neural network trained using digits. (a) The first and third rows are the ground truths; the second and fourth rows are the corresponding reconstructed images. (b) Reconstructed USAF target with some of its portions highlighted.
Fig. 10. Experimental results with a natural-scene object. (a) Scattered patterns. (b) Corresponding ground truth. (c) Reconstructed results.
Fig. 11. Proposed neural network architecture. (a) Numbers in the format m→n below each layer denote the number of input channels m and the number of output channels n. (5, 5) and (3, 3) denote the sizes of the convolution kernels in pixels. (b) Detailed structure of the neural network.

Equations (14)

Equations on this page are rendered with MathJax.

\[S\{I_o\}=S_W\{I_o\}+S_L\{I_o\},\]
\[I_s=S_W\{I_o\}+S_L\{I_o\}+I_a,\]
\[I=I_0\,e^{-\mathrm{OT}}=I_0\cdot e^{-\mu\cdot c\cdot L},\]
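The attenuation relation above is the Beer–Lambert law: the ballistic (unscattered) intensity decays exponentially with the optical thickness OT = μ·c·L. A minimal numeric sketch (all values below are illustrative placeholders, not the paper's calibration):

```python
import numpy as np

# Beer-Lambert law: I = I0 * exp(-OT), with OT = mu * c * L,
# where mu is the extinction coefficient, c the scatterer concentration,
# and L the geometric path length through the medium.

def optical_thickness(mu, c, L):
    """Optical thickness OT = mu * c * L."""
    return mu * c * L

def ballistic_intensity(I0, ot):
    """Unscattered (ballistic) intensity remaining after optical thickness `ot`."""
    return I0 * np.exp(-ot)

# At the paper's thickest setting, OT = 16, the ballistic component is
# attenuated by a factor of e^16, i.e. roughly 8.9e6.
attenuation = 1.0 / ballistic_intensity(1.0, 16.0)
print(f"OT = 16 attenuates ballistic light by ~{attenuation:.2e}x")
```

This makes concrete why OT = 16 is considered "optically thick": essentially no ballistic light survives, and the image must be recovered from multiply scattered light alone.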
\[\frac{\langle I(t_0)\,I(t_0+\Delta\tau)\rangle}{\langle I(t_0)\rangle\langle I(t_0+\Delta\tau)\rangle}-1=\beta\left|\frac{\langle E(t_0+\Delta\tau)E^{*}(t_0)\rangle}{\langle E(t_0)E^{*}(t_0)\rangle}\right|^{2},\]
\[C(\Delta\tau)=\frac{\overline{\delta I(t_0)\cdot\delta I(t_0+\Delta\tau)}}{\sqrt{\overline{\delta I(t_0)^{2}}}\cdot\sqrt{\overline{\delta I(t_0+\Delta\tau)^{2}}}},\]
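The normalized correlation coefficient above can be estimated directly from two recorded speckle frames, assuming the averages (overbars) are taken over pixels and δI denotes the deviation of each pixel from the frame mean. A minimal sketch:

```python
import numpy as np

def frame_correlation(I1, I2):
    """Normalized intensity correlation between two frames:
    deltaI = I - mean(I), averaged over all pixels."""
    d1 = I1 - I1.mean()
    d2 = I2 - I2.mean()
    return (d1 * d2).mean() / np.sqrt((d1 ** 2).mean() * (d2 ** 2).mean())

# Two identical frames are perfectly correlated; independent speckle is not.
rng = np.random.default_rng(0)
frame = rng.exponential(size=(256, 256))  # exponential statistics mimic speckle
print(frame_correlation(frame, frame))                              # ~1.0
print(frame_correlation(frame, rng.exponential(size=(256, 256))))   # near 0
```

Evaluating this coefficient for frame pairs separated by increasing Δτ yields decorrelation curves like those in Fig. 5.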
\[C^{(1)}(\Delta\tau)=\left[\frac{L/L_s}{\sinh(L/L_s)}\right]^{2},\]
\[C^{(2)}(\Delta\tau)=\frac{1}{g}\,\frac{1}{\sinh^{2}(L/L_s)}\left[\frac{\sinh(2L/L_s)}{2L/L_s}-1\right],\]
\[L_s=\sqrt{D\tau_e}\cdot f(\Delta\tau),\]
\[f(\Delta\tau)=\left[\frac{e^{-\Delta\tau/(2\tau_b)}}{1-e^{-\Delta\tau/(2\tau_b)}}\right]^{1/2},\]
\[C(\Delta\tau)=a\left\{\frac{m/f(n\cdot\Delta\tau)}{\sinh[m/f(n\cdot\Delta\tau)]}\right\}^{2}+b\,\frac{1}{\sinh^{2}[m/f(n\cdot\Delta\tau)]}\left\{\frac{\sinh[2m/f(n\cdot\Delta\tau)]}{2m/f(n\cdot\Delta\tau)}-1\right\}.\]
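The composite decorrelation model above (a weighted sum of the C(1) and C(2) terms, with free parameters a, b, m, n) can be fitted to measured correlation coefficients, as done for Fig. 5. The sketch below runs such a fit with scipy on synthetic data, assuming f(x) = [e^(−x)/(1 − e^(−x))]^(1/2) with x = n·Δτ absorbing the 1/(2τ_b) scaling; it is not the paper's fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit

def _f(x):
    # f(x) = [exp(-x) / (1 - exp(-x))]^(1/2); assumes x = n * dtau > 0
    return np.sqrt(np.exp(-x) / (1.0 - np.exp(-x)))

def corr_model(dtau, a, b, m, n):
    # a*C1 + b*C2, with u = m / f(n*dtau) playing the role of L/Ls
    u = m / _f(n * dtau)
    u = np.minimum(u, 300.0)  # keep sinh finite in float64
    c1 = (u / np.sinh(u)) ** 2
    c2 = (np.sinh(2.0 * u) / (2.0 * u) - 1.0) / np.sinh(u) ** 2
    return a * c1 + b * c2

# Synthetic decorrelation curve with a little noise, then a bounded fit.
rng = np.random.default_rng(1)
dtau = np.logspace(-2, 1, 50)
y = corr_model(dtau, 0.8, 0.2, 0.5, 1.0) + 0.005 * rng.normal(size=dtau.size)
popt, _ = curve_fit(corr_model, dtau, y,
                    p0=(1.0, 0.1, 1.0, 1.0),
                    bounds=([0, 0, 0.01, 0.01], [2, 2, 2, 2]))
print("fitted (a, b, m, n):", np.round(popt, 2))
```

The bounds keep the optimizer in a numerically safe region; the goodness of fit can then be summarized by the coefficient of determination (R-square), as in Fig. 5.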
\[R_{\mathrm{learn}}=\mathop{\arg\min}_{R_\theta,\;\theta\in\Theta}\sum_{n=1}^{N}\mathcal{L}\left(I_o^{(n)},R_\theta\{I_s^{(n)}\}\right)+g(\theta),\]
\[\mathrm{MSE}=\min\frac{1}{WHN_1}\sum_{n=1}^{N_1}\sum_{(u,v)}^{(W,H)}\left[I_p^{(n)}(u,v)-I_o^{(n)}(u,v)\right]^{2},\]
\[\mathrm{RMSE}=\left\{\frac{1}{WH}\sum_{(u,v)}^{(W,H)}\left[I_p(u,v)-I_s(u,v)\right]^{2}\right\}^{1/2},\]
\[\mathrm{SSIM}=\frac{(2\mu_{I_p}\mu_{I_s}+c_1)(2\sigma_{I_p I_s}+c_2)}{(\mu_{I_p}^{2}+\mu_{I_s}^{2}+c_1)(\sigma_{I_p}^{2}+\sigma_{I_s}^{2}+c_2)},\]
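The RMSE and SSIM definitions above translate directly into code. The sketch below implements a global (single-window) SSIM over the whole image; the standard SSIM index instead averages this statistic over local windows, and the c1, c2 values assume an 8-bit dynamic range:

```python
import numpy as np

def rmse(Ip, Is):
    """Root-mean-square error between two equal-size images."""
    return np.sqrt(np.mean((Ip - Is) ** 2))

def ssim_global(Ip, Is, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM: means, variances, and covariance taken
    over the whole image rather than over local windows."""
    mu_p, mu_s = Ip.mean(), Is.mean()
    var_p, var_s = Ip.var(), Is.var()
    cov = np.mean((Ip - mu_p) * (Is - mu_s))
    return ((2 * mu_p * mu_s + c1) * (2 * cov + c2) /
            ((mu_p ** 2 + mu_s ** 2 + c1) * (var_p + var_s + c2)))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(rmse(img, img))         # -> 0.0
print(ssim_global(img, img))  # ~1.0
```

These are the metrics reported in Figs. 7 and 8: RMSE decreases and SSIM approaches 1 as the reconstruction approaches the ground truth.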