
Pseudo color night vision correlated imaging without an infrared focal plane array


Abstract

Night vision is the ability to see in low-light conditions. Conventional night vision imaging technology, however, is limited by its reliance on a high-performance infrared focal plane array. In this article, we propose a novel scheme for color night vision imaging that requires no infrared focal plane array. In the experimental device, a two-wavelength infrared laser beam reflected by the target is modulated by a spatial light modulator, and the output light is detected by a photomultiplier tube. Two infrared night vision images are reconstructed by measuring the second-order intensity correlation function between two light fields; the photoelectric-detection processing mode of conventional night vision imaging is thus replaced by a light-field-control processing mode. The two gray images with different spectra are then processed to form a color night vision image. We show that a high-quality color night vision image can be obtained by this method.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Human beings rely on a variety of senses to perceive the world, an estimated 83% of which comes from vision. In environments of extremely weak light, however, human vision is severely limited, and night vision imaging technology has emerged to extend the reach of our sight. Night vision technology is a form of optical imaging that transforms an invisible scene into a visible image by means of photoelectric detection and imaging equipment under the low illumination of night; owing to it, the scope of human observation has been greatly expanded. To date, night vision imaging technology has been applied in many fields, including military reconnaissance, security monitoring, and driver assistance. Generally, night vision imaging technology comprises thermal imaging and low-light-level (LLL) night vision imaging [1]. The infrared focal plane array (IRFPA) is the indispensable core device of thermal imaging, but IRFPAs are technically demanding to produce and offer comparatively poor imaging quality, which has kept them from the widespread commercial use enjoyed by visible focal plane arrays (FPAs).

The IRFPA is thus one of the key bottlenecks restricting the development of night vision imaging technology, and IRFPA performance is not expected to become comparable to that of conventional charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS) sensors in the near future. Is there, then, a scheme that can omit the IRFPA altogether? In this article, we propose a novel color night vision imaging scheme based on the intensity correlation (i.e., the second-order intensity correlation) of the light field [2–6]. Photoelectric conversion in conventional night vision imaging is replaced by the intensity correlation of the light field; the processing mode of photoelectric detection in traditional night vision technology is thereby transformed into one of light field control. In the experimental setup, the two-wavelength infrared laser beam reflected by the target is modulated by a spatial light modulator (SLM), and the modulated light is detected by a photomultiplier tube (PMT). Although an infrared light source is used in this scheme, no IRFPA is involved in the imaging setup.

Here, light field control means engineering two light fields so that they exhibit a second-order spatial intensity correlation. Unlike the Michelson interferometer, which probes first-order properties of the light field [7], the intensity interferometer pioneered by Hanbury Brown and Twiss (HBT) focuses on the second-order correlation of light intensity fluctuations and has proved a powerful tool for studying optical coherence theory [8–10]. A spatial HBT-type imaging modality, called correlated imaging (or ghost imaging) [2–6,11,12], has attracted much attention from researchers. Compared with classical optical imaging, correlated imaging has some unique advantages, e.g., anti-interference [13,14] and super-resolution [15,16], and has found important applications in lidar [17,18], remote sensing [19,20], and pattern recognition [21].
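As a toy illustration of the HBT idea (our own sketch, not from the paper), the following Python snippet estimates the normalized second-order correlation $g^{(2)}(0)$ for simulated thermal-like intensities; values near 2 signal the intensity-fluctuation correlations that correlated imaging exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Intensity record of a thermal-like source: negative-exponential statistics,
# as for a single speckle of pseudothermal light.
I = rng.exponential(scale=1.0, size=100_000)

# Ideal HBT measurement: a 50/50 splitter sends identical fluctuations to two
# detectors, so both arms see the same intensity record.
I1, I2 = I, I

# Normalized second-order correlation at zero delay:
# g2(0) = <I1 I2> / (<I1><I2>) -> 2 for thermal light, 1 for coherent light.
g2 = np.mean(I1 * I2) / (np.mean(I1) * np.mean(I2))
print(f"g2(0) = {g2:.3f}")   # close to 2
```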

Conventional color night vision imaging mainly processes, in software, the gray images output by LLL visible and infrared night vision devices, e.g., through image fusion and color conversion [22–25]. In this article, two gray images with different spectra are obtained from the intensity correlation of the light field, so the gray image produced by low-intensity visible light can be replaced by a gray image generated by short-wavelength infrared light. A night vision image with a natural color appearance is then obtained by color conversion and gray-level fusion. Compared with the conventional pseudocolor processing mode, this scheme does not require a low-light visible image to participate in pseudocolor processing. The experimental results show that this scheme can produce a color night vision image of comparable quality to a classical optical image.

2. Theory

We depict the scheme of color night vision correlated imaging without an IRFPA in Fig. 1. We assume that two infrared lasers with different wavelengths $E_{s}(\omega _{1})$ and $E_{s}(\omega _{2})$ illuminate a target; the reflected light carrying the target's information propagates a distance $z_{1}$ and is converged onto the SLM surface by a lens. The modulated light then propagates a distance $z_{2}$ in free space and is collected by a PMT; it can be expressed as [21,22]

$$E_{d}(x,t)=\int d\omega _{i}\,dq_{i}\,V(q_{i})E_{s}(\omega _{i})H_{i}(x_{s},q_{i};\omega _{i})T(x_{o})H_{i}'(x_{p},q_{i};\omega _{i}),$$
where $i=1,2$. The subscripts $d$ and $s$ denote the detected and signal light fields, respectively. The SLM imposes a spatial amplitude modulation on the light field, represented by the random spatial mask function $V$, which is used to obtain spatial correlations that follow Gaussian statistics [23]. The transfer functions $H$ and $H'$ describe propagation from the target to the SLM and from the SLM to the PMT, respectively; $x$ and $q$ denote the transverse position and wave vector, and $T(x_{o})$ is the reflection coefficient of the target. The on-target deterministic intensity pattern produced by the SLM can then be calculated via diffraction theory:
$$E_{c}(x,t)=\int d\omega _{i}\,dq_{i}\,V(q_{i})E_{s}(\omega _{i})H'(x_{p},q_{i};\omega _{i}),$$
where the subscript $c$ denotes the calculated light field.
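For concreteness, here is a minimal numerical sketch (our own, under simplifying assumptions: scalar field, amplitude-only mask) of the diffraction step in Eq. (2), with the angular-spectrum method playing the role of the free-space kernel $H'$:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex scalar field a distance z in free space
    (angular-spectrum method), standing in for the kernel H'."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    prop = np.exp(1j * kz * z)
    prop[arg < 0.0] = 0.0                        # discard evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * prop)

rng = np.random.default_rng(0)
slm_mask = rng.random((512, 512))                # one amplitude mask V
field = angular_spectrum(slm_mask, wavelength=785e-9, dx=15e-6, z=0.10)
intensity = np.abs(field) ** 2                   # deterministic pattern |E_c|^2
```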

Fig. 1. Setup of color night vision correlated imaging without an IRFPA. DM: dichroic mirror, FW: filter wheel, SLM: spatial light modulator, PMT: photomultiplier tube.

The image is reconstructed by the intensity cross-correlation measurement, i.e.,

$$\begin{aligned} G(x_{p},x_{s}) &=\left\langle \left\vert E_{d}(x,t)\right\vert ^{2}\left\vert E_{c}(x,t)\right\vert ^{2}\right\rangle -\left\langle \left\vert E_{d}(x,t)\right\vert ^{2}\right\rangle \left\langle \left\vert E_{c}(x,t)\right\vert ^{2}\right\rangle \\ &=\int d\omega _{i}\,d\omega _{i}'\,dq_{i}\,dq_{i}'\,H(x_{s},q_{i}';\omega _{i}')H^{\ast }(x_{s},q_{i};\omega _{i}) \\ &\quad \times H'^{\ast }(x_{p},q_{i};\omega _{i})H'(x_{p},q_{i}';\omega _{i}')H'^{\ast }(x_{p},q_{i};\omega _{i})H'(x_{p},q_{i}';\omega _{i}') \\ &\quad \times C(\omega _{i},\omega _{i}';q_{i},q_{i}')\,T(x_{o})T^{\ast }(x_{o}), \end{aligned}$$
where
$$\begin{aligned} C(\omega _{i},\omega _{i}';q_{i},q_{i}') &=\left\langle E_{s}(\omega _{i})E_{s}(\omega _{i}')\right\rangle \left\langle E_{i}(\omega _{i}')E_{i}(\omega _{i})\right\rangle \\ &\quad \times \left\langle V(q_{i})V(q_{i}')\right\rangle \left\langle V(q_{i}')V(q_{i})\right\rangle \end{aligned}$$
is the intensity cross-correlation function in the spatial and temporal frequency domain evaluated at the output surface of the SLM [23]. Substituting Eq. (4) into Eq. (3), we can thus rewrite the ghost image as
$$G(x_{o},x_{r})=I_{d}I_{c}\left\vert \int dx_{r}'\,dx'\,W(x_{p}',x_{s}')H(x_{s},x_{s}';\omega )O(x_{o})\right\vert ^{2},$$
where $I_{a}=\left\langle \left\vert \int d\omega\, E_{a}(\omega)\right\vert ^{2}\right\rangle$ with $a=d,c$, so that $I_{d}I_{c}$ is the product of the average intensities of the detected and calculated light. The function $W(x_{p}',x_{s}')$ is the spatial Fourier transform of $\left\langle V(q)V(q)\right\rangle$, and the transfer function $H$ is now written in position space. We set $H'=1$ because the distance $z_{2}$ between the SLM and the bucket detector is fixed, and we assume $\left\langle T(x_{o})T^{\ast }(x_{o}')\right\rangle =\lambda O(x_{o})\delta \left( x_{o}-x_{o}'\right)$.
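To make Eqs. (3)-(5) concrete, here is a toy end-to-end sketch (our own simplification, taking $H'=1$ and the target imaged onto the SLM plane): bucket values play the role of $\vert E_{d}\vert^{2}$, the computed mask intensities play $\vert E_{c}\vert^{2}$, and the image is their pixelwise covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 32, 20_000                                # image size, number of masks

T = np.zeros((N, N))
T[8:24, 8:24] = 1.0                              # toy reflective target O(x_o)

masks = rng.random((K, N, N))                    # SLM realizations, |E_c|^2
bucket = np.einsum('kij,ij->k', masks, T)        # PMT readings, |E_d|^2

# G(x) = <I_d I_c(x)> - <I_d><I_c(x)>  (Eq. (3)), computed as a covariance
G = np.einsum('k,kij->ij', bucket - bucket.mean(), masks) / K
G = (G - G.min()) / (G.max() - G.min())          # normalize for display
```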

From the above theoretical analysis we draw the following conclusions: (1) This imaging scheme produces two infrared gray images. (2) The physical nature of color night vision correlated imaging without an IRFPA is the second-order intensity correlation of light (see Eq. (4)). (3) The scheme is not sensitive to the distance between the object and the imaging device: the distance from the SLM to the PMT is fixed, so the transfer function $H'$ is unaffected by the position of the target, while the light path from the target to the SLM is adjusted by a combined lens, just as in conventional optical imaging. (4) If the SLM and PMT are regarded as the imaging device and the combined lens as the camera lens, this scheme is structurally identical to a conventional optical camera.

3. Experimental results

Our experimental setup is illustrated in Fig. 1. Two near-infrared lasers with $\lambda _{1}=785$ nm and $\lambda _{2}=830$ nm (Changchun New Industries Optoelectronics Technology Co., Ltd., MLL-III-785 and MDL-III-830) are combined into one beam by a dichroic mirror (Thorlabs DMLP805). This two-wavelength beam illuminates a target in the laboratory, and the reflected light is converged onto the surface of a two-dimensional amplitude-only ferroelectric liquid crystal SLM (FLC-SLM, Meadowlark Optics A512-450-850) with 512$\times$512 addressable 15 $\mu$m $\times$ 15 $\mu$m pixels. The modulated light is then filtered by two bandpass filters (Thorlabs FL05780-10 and FL830-10) mounted on a filter wheel (Daheng Xinjiyuan Technology Co., Ltd., GCM-14), and the light carrying the target's information is finally collected by a PMT (Hamamatsu H10721-20). Two infrared night vision images are produced by cross-correlating the input signal of the SLM with the output signal of the PMT.

Next, the grayscale images output by this scheme are colorized. Pseudocolor image processing comprises two main parts: color mapping and gray-level fusion; the flow chart is shown in Fig. 2. First, a look-up table is constructed with a backpropagation neural network. As shown above, two gray images can be obtained simultaneously; these two gray images and one color image are selected as training samples. According to the correspondence of pixel positions across the images, the two-dimensional gray vector ($Y_{1}$, $Y_{2}$) and the two-dimensional chroma vector $(C_{b},C_{r})$ of the training samples are fitted nonlinearly with a backpropagation neural network to determine the best mapping $f$ from $(Y_{1},Y_{2})$ to $(C_{b},C_{r})$. Using the neural network nonlinear fitting toolbox provided by MATLAB, a two-input, two-output, two-layer backpropagation network (with 10 neurons in the hidden layer) is constructed and trained with the Levenberg-Marquardt algorithm; 70% of the samples are used for network training, 15% for generalization performance optimization, and 15% for network testing. Taking $Y_{1}$ and $Y_{2}$ as the horizontal and vertical coordinates, a complete two-dimensional color look-up table $T_{clut}$ is then built from the mapping $f$ and the input values ($Y_{1}$, $Y_{2}$), i.e., $(C_{b},C_{r})=T_{clut}(Y_{1},Y_{2})=f(Y_{1},Y_{2})$, where $Y_{1},Y_{2}=0,1,\ldots,255$; $(Y_{1},Y_{2})$ runs over all two-dimensional grayscale combinations of 8-bit grayscale images, giving $256\times 256$ $(C_{b},C_{r})$ index values in total. In this way, we build the color look-up table; a code sketch of this step is given below.
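The following is an illustrative sketch, not the authors' code: scikit-learn's MLPRegressor stands in for MATLAB's toolbox (scikit-learn offers L-BFGS rather than Levenberg-Marquardt), and random arrays stand in for the real training images.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins for real training data: per-pixel gray values of the two infrared
# images and the Cb/Cr chroma of a reference color image, all scaled to [0, 1].
n = 5000
g1, g2 = rng.random(n), rng.random(n)
cb, cr = rng.random(n), rng.random(n)

X = np.column_stack([g1, g2])            # two-dimensional gray vector (Y1, Y2)
y = np.column_stack([cb, cr])            # two-dimensional chroma vector (Cb, Cr)

# Two-input, two-output network with a single 10-neuron hidden layer.
net = MLPRegressor(hidden_layer_sizes=(10,), solver='lbfgs', max_iter=2000)
net.fit(X, y)

# Exhaustive 256 x 256 look-up table: T_clut[Y1, Y2] -> (Cb, Cr).
Y1, Y2 = np.meshgrid(np.arange(256), np.arange(256), indexing='ij')
grid = np.column_stack([Y1.ravel(), Y2.ravel()]) / 255.0
T_clut = net.predict(grid).reshape(256, 256, 2)
```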

Fig. 2. Image processing schematic of color night vision correlated imaging without an IRFPA.

For infrared gray images $G_{1}$ and $G_{2}$ (pixel size $M\times N$, bit depth 8 bits), the $C_{b}C_{r}$ look-up table $T_{clut}$ is indexed by the gray-value combination $(Y_{1},Y_{2})$ $(Y_{1}=G_{1}(i,j)$, $Y_{2}=G_{2}(i,j)$, $Y_{1},Y_{2}\in [0,255])$ of the two images at each pixel position $(i,j)$ $(i\in [1,M]$, $j\in [1,N])$. The index value $T_{clut}(Y_{1},Y_{2})$ is the chromaticity value at pixel $(i,j)$ of the color fusion image $G_{f}$, i.e., $[C_{b}(i,j),C_{r}(i,j)]=G_{f}(i,j)=T_{clut}(Y_{1},Y_{2})$. The $C_{b}$ and $C_{r}$ chroma channels of the color fusion image are obtained by indexing the pixels at all positions $(i=1,2,\ldots,M$; $j=1,2,\ldots,N)$ of the two grayscale images, yielding a color night vision image (a code sketch of this indexing step is given below; for details, see the experimental results).

In the first experiment, we chose a simple color cube as the target. Figure 3 compares the image quality of color night vision correlated imaging without an IRFPA, conventional near-infrared night vision, and classical color imaging with visible light. Figures 3(a) and 3(b) show the infrared grayscale images generated by color night vision correlated imaging without an IRFPA, and Fig. 3(c) presents the experimental result of the scheme with 200000 sets of data. Correspondingly, Fig. 3(d) depicts the classical color image under visible light (captured by an Imaging Source DFK23U618 camera). The experimental results show that the image obtained by our imaging scheme is of high quality and rich in color. Compared with the near-infrared images, our image is more conducive to scene perception and target recognition; compared with the visible color image, it does not completely restore the natural color of the target, but the color mapping of each point is correct.
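As referenced above, here is a minimal sketch of the chroma indexing and color-space step (our own illustration; the paper does not specify its luminance fusion rule, so the mean of the two gray channels is assumed, together with standard ITU-R BT.601 YCbCr-to-RGB conversion).

```python
import numpy as np

def fuse_color(G1, G2, T_clut):
    """Fuse two 8-bit gray images (M x N, dtype uint8) into an RGB image
    using the 256 x 256 x 2 look-up table built above (values in [0, 1])."""
    Cb = T_clut[G1, G2, 0] * 255.0           # chroma indexed per pixel
    Cr = T_clut[G1, G2, 1] * 255.0
    Y = 0.5 * (G1.astype(float) + G2.astype(float))  # assumed gray-level fusion
    # ITU-R BT.601 YCbCr -> RGB
    R = Y + 1.402 * (Cr - 128.0)
    G = Y - 0.344136 * (Cb - 128.0) - 0.714136 * (Cr - 128.0)
    B = Y + 1.772 * (Cb - 128.0)
    return np.clip(np.stack([R, G, B], axis=-1), 0, 255).astype(np.uint8)
```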

Fig. 3. (a) Reconstructed night vision image at $\lambda _{1}=785$ nm. (b) Reconstructed night vision image at $\lambda _{2}=830$ nm. (c) Reconstructed pseudocolor night vision image (inset, lower right: the look-up table). (d) Conventional color image under visible light.

As in correlated imaging generally, the image quality of our scheme depends on the amount of data used in image reconstruction. Figures 4(a1)-(a4) present the experimental results for different quantities of data. To quantitatively analyze the reconstruction quality, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are used as evaluation indexes. The results in Fig. 4 show that image quality improves significantly as the quantity of data increases.
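Both metrics are standard; for reference, a sketch of how they might be computed against a ground-truth image (using scikit-image, an assumed tool choice rather than the authors' own):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, reconstruction):
    """PSNR and SSIM of an 8-bit RGB reconstruction against a reference."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=255)
    # channel_axis=-1 treats the last axis as color (scikit-image >= 0.19)
    ssim = structural_similarity(reference, reconstruction,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```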

Fig. 4. Top row: reconstructed color night vision images with different numbers of realizations: (a1) 10000, (a2) 40000, (a3) 70000, and (a4) 100000 frames. Bottom row: (b1) SSIM and (b2) PSNR curves of the reconstructed images versus the number of realizations.

To demonstrate that this scheme is broadly effective, we selected four complex color targets for the experiment, with experimental parameters identical to those of the previous experiment. Figures 5(a1)-(a4) show the four visible-light images; correspondingly, Figs. 5(b1)-(b4) show that the reconstructed color images exhibit high image quality and rich color. The color colorfulness index (CCI) is then used to quantitatively analyze the colorfulness of the reconstructed images, and Fig. 5(c) confirms their rich color. Although the reconstructed colors cannot completely reproduce the real colors of the objects, the color images are easier to distinguish and recognize than infrared gray images. Some materials have similar reflectance across the near-infrared spectrum, which makes their colors difficult to render; this problem can be mitigated by suitably enlarging the wavelength difference between the two infrared lasers.
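The paper does not spell out its CCI formula; as a stand-in, the widely used Hasler-Süsstrunk colorfulness metric can be computed as follows:

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Suesstrunk colorfulness metric (one common stand-in for a CCI;
    the paper's exact definition is not given)."""
    R, G, B = (rgb[..., i].astype(float) for i in range(3))
    rg = R - G                      # red-green opponent channel
    yb = 0.5 * (R + G) - B          # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu
```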

Fig. 5. Top row: the different targets. Middle row: corresponding reconstructed color night vision images. Bottom: CCI values of the reconstructed images for the different targets.

4. Summary

In this article, we reported a novel color night vision imaging scheme without an IRFPA. Unlike conventional night vision techniques based on first-order light field properties, our imaging scheme is physically based on second-order light field properties. Second-order intensity correlations between the light collected by the PMT and the calculated light are generated by modulating the SLM, and an infrared night vision image is obtained by measuring the second-order correlation function between the two signals. The color night vision image is produced by further processing the two infrared gray images of different spectra. Because the PMT has no spatial resolution and the SLM does not detect the light field, no IRFPA is involved in the imaging process. The experimental results show that a night vision image with rich color and high quality is obtained by color conversion and gray-level fusion. Like other color night vision technologies, this scheme only partially restores the real color of the object, but the reconstructed color image is easier to distinguish and recognize than a traditional gray night vision image. This method provides a promising route to improving the performance of night vision imaging and has the potential to develop into a new night vision imaging technique.

Funding

National Natural Science Foundation of China (11574178, 11704221, 61675115); Taishan Scholar Project of Shandong Province (tsqn201812059).

Disclosures

The authors declare no conflicts of interest.

References

1. L. F. Bai, J. Han, and J. Yue, Night Vision Processing and Understanding (Springer, Singapore, 2019).

2. J. Cheng and S. S. Han, "Incoherent coincidence imaging and its applicability in x-ray diffraction," Phys. Rev. Lett. 92(9), 093903 (2004).

3. J. H. Shapiro, "Computational ghost imaging," Phys. Rev. A 78(6), 061802 (2008).

4. Y. Bromberg, O. Katz, and Y. Silberberg, "Ghost imaging with a single detector," Phys. Rev. A 79(5), 053840 (2009).

5. X. L. Yin, Y. J. Xia, and D. Y. Duan, "Theoretical and experimental study of color of ghost imaging," Opt. Express 26(15), 18944–18949 (2018).

6. S. Ragy and G. Adesso, "Nature of light correlations in ghost imaging," Sci. Rep. 2(1), 651 (2012).

7. Y. H. Shih, "The physics of ghost imaging," in Classical, Semi-classical and Quantum Noise (Springer, New York, 2012), pp. 169–222.

8. R. H. Brown and R. Q. Twiss, "The question of correlation between photons in coherent light rays," Nature 178(4548), 1447–1448 (1956).

9. R. H. Brown and R. Q. Twiss, "A test of a new type of stellar interferometer on Sirius," Nature 178(4541), 1046–1048 (1956).

10. R. J. Glauber, "The quantum theory of optical coherence," Phys. Rev. 130(6), 2529–2539 (1963).

11. B. I. Erkmen and J. H. Shapiro, "Ghost imaging: from quantum to classical and computational," Adv. Opt. Photonics 2(4), 405–450 (2010).

12. J. H. Shapiro and R. W. Boyd, "The physics of ghost imaging," Quantum Inf. Process. 11(4), 949–993 (2012).

13. R. E. Meyers, K. S. Deacon, and Y. Shih, "Turbulence-free ghost imaging," Appl. Phys. Lett. 98(11), 111115 (2011).

14. R. E. Meyers, K. S. Deacon, and Y. Shih, "Positive-negative turbulence-free ghost imaging," Appl. Phys. Lett. 100(13), 131114 (2012).

15. W. Li, Z. Tong, K. Xiao, Z. Liu, Q. Gao, J. Sun, S. Liu, S. Han, and Z. Wang, "Single-frame wide-field nanoscopy based on ghost imaging via sparsity constraints," Optica 6(12), 1515–1523 (2019).

16. W. L. Gong and S. S. Han, "Experimental investigation of the quality of lensless super-resolution ghost imaging via sparsity constraints," Phys. Lett. A 376(17), 1519–1522 (2012).

17. W. L. Gong, C. Q. Zhao, H. Yu, M. L. Chen, W. D. Xu, and S. S. Han, "Three-dimensional ghost imaging lidar via sparsity constraint," Sci. Rep. 6(1), 26133 (2016).

18. C. Q. Zhao, W. L. Gong, M. L. Chen, E. R. Li, H. Wang, W. D. Xu, and S. S. Han, "Ghost imaging lidar via sparsity constraints," Appl. Phys. Lett. 101(14), 141123 (2012).

19. B. I. Erkmen, "Computational ghost imaging for remote sensing," J. Opt. Soc. Am. A 29(5), 782–789 (2012).

20. D. Y. Duan, Z. X. Man, and Y. J. Xia, "Nondegenerate wavelength computational ghost imaging with thermal light," Opt. Express 27(18), 25187–25195 (2019).

21. X. Qiu, D. Zhang, W. Zhang, and L. Chen, "Structured-pump-enabled quantum pattern recognition," Phys. Rev. Lett. 122(12), 123901 (2019).

22. A. Toet and M. A. Hogervorst, "Portable real-time color night vision," Proc. SPIE 6974, 697402 (2008).

23. A. Toet, "Natural colour mapping for multiband night vision imagery," Inform. Fusion 4(3), 155–166 (2003).

24. A. Toet and M. A. Hogervorst, "Progress in color night vision," Opt. Eng. 51(1), 010901 (2012).

25. A. M. Waxman, A. N. Gove, D. A. Fay, J. P. Racamato, J. E. Carrick, M. C. Seibert, and E. D. Savoye, "Color night vision: opponent processing in the fusion of visible and IR imagery," Neural Networks 10(1), 1–6 (1997).
