
Computational ghost imaging based on array sampling

Open Access

Abstract

High-quality computational ghost imaging at low sampling rates has attracted much attention and is an important step toward the practical application of computational ghost imaging. However, as far as we know, most studies focus on achieving high-quality computational ghost imaging with a single-pixel detector; high-efficiency computational ghost imaging using multiple single-pixel detectors for array measurement is rarely reported. In this work, a new computational ghost imaging method based on deep learning and array-detector measurement is proposed, which achieves fast, high-quality imaging. The method resolves the misalignment and overlap of some pixels in the reconstructed image caused by incomplete correspondence between the array detector and the light-field area. At the same time, it solves the partial loss of information in the reconstructed image caused by the gaps between the detection units of the array detector. Simulation and experimental results show that our method obtains high imaging quality even at a sampling rate as low as 0.03, and the number of measurements decreases further as the number of detection units of the array detector increases. This method improves the applicability of computational ghost imaging and can be applied to many fields such as real-time detection and biomedical imaging.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) extracts the information of an object by correlating the echo signals of the object with their corresponding speckle patterns. GI has thus evolved from a dual-optical-path system, which collects the echo signal and the speckle light field separately, to a single-optical-path system that only needs a single-pixel detector to receive the echo signal, known as computational ghost imaging (CGI). This evolution is largely attributable to the development of spatial light modulation devices such as the spatial light modulator [1], digital micromirror device [2], LED array [3–5], laser array [6], etc. Hence, CGI breaks through the constraints of the imaging system and the light-field modulation equipment, and plays an outstanding role in various imaging fields. In particular, CGI has been well developed in spectral bands where array detectors are expensive (e.g., X-ray imaging [7–10], terahertz imaging [11,12], neutron imaging [13,14]) and under some complex conditions (e.g., turbulence-free imaging [15–17], imaging through scattering media [18,19]).

Although CGI has many advantages, it still suffers from long imaging times and poor imaging quality. Researchers have therefore improved imaging quality and efficiency through a large number of studies on light-field optimization, reconstruction-method optimization [20–22], compressed sensing [23], deep learning [24–26] and other techniques. Light-field optimization mainly focuses on multi-scale light-field mixing [27,28], modulation-matrix orthogonalization [29,30], Hadamard matrix sequence optimization [31–33], etc. It improves the imaging performance of CGI by optimizing the modulated light field, but the improvement is limited, so it must be combined with other methods. Differential CGI [20], pseudo-inverse CGI [34] and related methods greatly improve the imaging quality, but they still fall short of the requirements of practical applications. Compressed-sensing CGI methods (such as sparsity constraints [35] and low-rank constraints [36]) have greatly improved imaging quality and speed, further narrowing the gap to practical application. Deep-learning CGI based on adversarial, transfer and few-shot learning has achieved a qualitative leap in imaging performance and is therefore promising for practical applications.

At present, most studies use only one single-pixel detector for CGI. As far as we know, efficient CGI methods using multiple single-pixel detectors for array measurement are rarely reported, even though such measurement is an effective way to improve the quality and shorten the imaging time of CGI. To address this, Herman et al. [37] proposed a scheme that uses an $8\times 4$ array of photodiodes and a multi-aperture to divide the field of view into small areas and acquire the image information in parallel. Subsequently, an image retrieval method with a quadrant detector that achieves a fourfold speed increase was reported by Sun [38–40]. Both studies were carried out under the condition that the detector units corresponded exactly to the regions of the light field. In practice, however, especially when the light field is actively modulated and received, there is usually an offset between the detector and the light-field area, and exact correspondence is difficult to achieve. In this case, the reconstructed CGI image will suffer from misalignment and overlap of pixels, and imaging quality and accuracy cannot be guaranteed.

For this reason, we study a fast, high-quality computational ghost imaging method based on array-detector measurement and deep learning, analyze and discuss the impact of the offset between the array detector and the light-field area on the imaging quality of CGI, and use a deep learning system, Compensation-Net, to realize fast, high-quality image reconstruction under offset. The successful implementation of various array-detection CGI schemes demonstrates the advantages and effectiveness of this method.

2. Methods

2.1 Image reconstruction method for multi-pixel detection CGI

In a multi-pixel detection CGI system, the modulated light field composed of $N$ spatial light fields is expressed as $I^{m}_{n}{(x,y)}$, where $x=1,2,3,\ldots,r$, $y=1,2,3,\ldots,c$, $m$ ($m=1,2,3,\ldots,M$) is the measurement index and $n$ ($n=1,2,3,\ldots,N$) is the light-field index. Accordingly, the signal received by the multi-pixel detector is $B^{m}_{n}=\iint I^{m}_{n}(x,y)T_n(x,y)dxdy$. The multi-region combined light field modulated by the spatial light modulation device illuminates the target object, and the echo signal is received by the multi-pixel detector, each pixel of which independently receives one region of the combined light field. The target object can be obtained by computing the correlation between $I^{m}_{n}{(x,y)}$ and $B^{m}_{n}$:

$$O_{n}{(x,y)} = \langle B^{m}_{n}I^{m}_{n}{(x,y)}\rangle,$$
where the overall image, obtained by combining the sub-images $O_{n}{(x,y)}$, is denoted $O{(x,y)}$. From the point of view of matrix analysis, we can also express Eq. (1) as:
$$\textbf{O}_n = \textbf{A}_n^{T}\textbf{A}_n\textbf{T}_n,$$
where $\textbf {A}_n$ is a measurement matrix of dimension $M\times P$ ($P=r\times c$) formed by converting each modulated light field $I^{m}_{n}(x,y)$ into a one-dimensional row vector, and $\textbf {T}_n$ is the one-dimensional column vector obtained from the target object $T_n(x,y)$. The reconstruction result $\textbf {O}_n$ of CGI is therefore a function of the measurement matrix $\textbf {A}_n$: the closer the matrix $\textbf {A}_n^{T}\textbf {A}_n$ is to the identity matrix, the higher the quality of the reconstructed image.

Based on this, we chose a sine function to generate the modulated light field. Since sine functions are orthogonal on [$-\pi, \pi$], $\textbf {A}_n^{T}\textbf {A}_n$ approaches the identity matrix, which guarantees the quality of the reconstructed image from the standpoint of the light-field properties. The sine function used to generate the CGI measurement matrix $\textbf {A}_n$ can be expressed as:

$$\textbf{A}_n = a\cdot \sin(\mu x+\upsilon y),$$
where $a$ denotes the amplitude constant, and $\mu$ and $\upsilon$ denote frequencies whose values are determined by $M$ and the speckle pixel number $P$, i.e., $\mu =m\pi /P$, $\upsilon =m\times p/P$, $p = 1,2,3,\ldots, P$.
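
As a concrete illustration of the reconstruction in Eq. (1), the following minimal NumPy sketch builds sinusoidal patterns for a single detection unit, simulates the bucket signal, and recovers a toy target by differential correlation. The target size and the frequency schedule are placeholders: the sketch draws pseudo-random frequencies rather than the exact $(\mu, \upsilon)$ assignment of Eq. (3), so it illustrates the principle rather than the authors' implementation.

```python
import numpy as np

def cgi_reconstruct(patterns, bucket):
    """Differential correlation of Eq. (1): O(x,y) = <(B_m - <B>) * I_m(x,y)>."""
    bucket = np.asarray(bucket, dtype=float)
    fluct = bucket - bucket.mean()                         # B_m - <B>
    return np.tensordot(fluct, patterns, axes=(0, 0)) / len(bucket)

# --- toy demonstration (hypothetical 32 x 32 target, M = 2048 measurements) ---
rng = np.random.default_rng(0)
r = c = 32
target = np.zeros((r, c)); target[8:24, 8:24] = 1.0        # T(x, y)

M = 2048
x, y = np.meshgrid(np.arange(c), np.arange(r))
patterns = np.empty((M, r, c))
for m in range(M):
    mu, nu = rng.uniform(0, np.pi, size=2)                 # stand-in for the Eq. (3) schedule
    patterns[m] = 0.5 + 0.5 * np.sin(mu * x + nu * y)      # non-negative intensity pattern

bucket = np.array([np.sum(p * target) for p in patterns])  # B_m = sum_xy I_m(x,y) T(x,y)
recon = cgi_reconstruct(patterns, bucket)                  # correlate to recover the target
```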

2.2 Network architecture and training

To solve the problem of partial pixel misalignment and overlap in the reconstructed image, and to realize fast, high-quality image reconstruction under offset, we designed a Compensation-Net to compensate and correct the reconstructed image. It comprises two deep learning networks: a Compensation-GAN and a Compensation-CNN. The Compensation-GAN is based on the GAN (generative adversarial network) [41], which consists of a generator and a discriminator. The overall architecture of our Compensation-Net is shown in Fig. 1. The structure of the Compensation-CNN is the same as that of the generator of the Compensation-GAN.

Fig. 1. Compensation-GAN. The input of the network is four-channel data; each channel is stitched together from the reconstructed image of the corresponding quadrant of the detector.

The generator takes the reconstructed images as input and aims to generate an image similar to the real image in order to fool the discriminator, while the discriminator determines whether its input is a fake image produced by the generator or a real image. The input of the discriminator is a paired image, composed of a reconstructed image together with either a real image or a fake image generated by the generator. The generator network is composed of up-sampling layers, down-sampling layers and attention residual blocks. More specifically, both the up-sampling and down-sampling layers consist of a convolution layer, a batch normalization layer and an activation function, and each attention residual block includes two convolution layers and an attention module [42]. Compared with the generator, the discriminator network is simpler: it has only 5 convolutional layers, each followed by a batch normalization layer and an activation function. Except for the last convolutional layer, the activation function is the Leaky ReLU [43], which helps ensure that gradients can flow through the entire architecture; the final activation function is a sigmoid, which limits the output to between 0 and 1.
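
The paper does not report kernel sizes, strides, or channel counts, so the PyTorch sketch below only illustrates the building blocks named above: a down-sampling layer (convolution, batch normalization, activation) and an attention residual block (two convolutions followed by a squeeze-and-excitation style attention module [42]). All hyper-parameters here, including the reduction ratio, are assumptions.

```python
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Channel attention in the spirit of Ref. [42]; the reduction ratio is an assumption."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                        # reweight channels

class AttentionResidualBlock(nn.Module):
    """Two convolution layers followed by an attention module, with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.att = SEAttention(channels)

    def forward(self, x):
        return x + self.att(self.body(x))

def down_block(c_in, c_out):
    """Down-sampling layer: convolution + batch normalization + activation."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
        nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2, inplace=True))
```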

Our Compensation-GAN model is inspired by pix2pix [44], a widely used model for image-to-image translation. The basic model of our Compensation-GAN is the CGAN (conditional generative adversarial network) [45], an improvement of the GAN. The CGAN performs conditional generation, implemented by adding conditional information to the generator and discriminator of the original GAN; it can therefore generate a specified image according to the conditional information, which solves the problem of the original GAN generating images randomly. In addition, the attention module is necessary to improve the efficiency and accuracy of the Compensation-GAN training: it helps the network quickly filter useful information from a large amount of information by treating the misplaced and overlapping parts of the reconstructed image as the focus of attention.

As is well known, training the Compensation-Net would require a large number of experiments with the array detector offset from the light-field area to obtain reconstructed offset images as the training data set. However, performing many CGI experiments to acquire such images is expensive and time-consuming. To solve this problem, we established a simulated offset system that simulates CGI experiments with a relative offset between the detector and the speckle pattern. Through the simulated offset system, we can input standard images and obtain reconstructed offset images with random offset directions and distances, as shown in Fig. 2: Fig. 2(a) is the original image, Fig. 2(b) is the image shifted upward, Fig. 2(c) is the image shifted to the right, and Fig. 2(d) is the image shifted down and to the left at the same time. As is evident from Fig. 2, the offsets generated by the simulated offset system include horizontal and vertical shifts, and the offset distance is random. Therefore, we can obtain a large number of reconstructed offset images as the training data set of the Compensation-Net in a short time, without the need for GI experiments, which is highly valuable for applying neural networks to CGI. The simulated offset system has been integrated into the Compensation-Net to generate the virtual training data needed by the network.

Fig. 2. Virtual training data generated by the simulated offset system. (a) is the original image, (b) is the image shifted upward, (c) is the image shifted to the right, and (d) is the image shifted down and left at the same time.

As shown in Fig. 3, we take the 3$\times$3 array detector as an example to discuss the principle of the offset between the array detector and the light field. Fig. 3(a) shows the case where the array detector and the light field are aligned. $B_{k,l}^{m} = c_{k,l}^{m} + d_{k,l}^{m} + e_{k,l}^{m} + f_{k,l}^{m}$ denotes the detection value of the $m$-th measurement in Fig. 3(a). The offset case of the 3$\times$3 array detector and the light field is shown in Fig. 3(b), where the 3$\times$3 array detector has been shifted to the upper left relative to the light field. $(B_{k,l}^{m})^{\prime} = c_{k,l}^{m} + d_{k,l-1}^{m} + e_{k-1,l}^{m} + f_{k-1,l-1}^{m}$ is used as the detection value under offset, where $m = 1,2,3,\ldots,M$, and $k$ ($k=1,2,3,\ldots,N$) and $l$ ($l=1,2,3,\ldots,N$) denote the row and column indices of the light-field regions, respectively. In Fig. 3(b), three colored areas can be seen: the green area is the effective detection area, the blue area is the area where the target object information is missing, and the yellow area is the invalid area without light-field modulation. The black box is the modulation area of the light field, and the red dotted frame is the detection area of the 3$\times$3 array detector. We can observe that the information detected by the 3$\times$3 array detector includes partial target object information and invalid-area information, and that the information detected by each detection unit mainly includes part of its own target information and target information from the surrounding detection units. For example, the detection information of the unit in the second row and second column of the 3$\times$3 array detector includes its own target information $c_{2,2}^{m}$ and the surrounding units' target information $d_{2,1}^{m}$, $e_{1,2}^{m}$, $f_{1,1}^{m}$.

Fig. 3. The offset diagram of the 3$\times$3 array detector and the light-field area. (a) shows the case with no offset between the 3$\times$3 array detector and the light-field area, and (b) shows the case of an upper-left offset between the 3$\times$3 array detector and the light-field area.

According to Eq. (1), the target object reconstructed without offset can be expressed as:

$$\begin{aligned} O_{k,l}{(x,y)} &=\langle B^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle\\ &=\frac{1}{M}\sum_{m=1}^{M}[(B^{m}_{k,l} - \langle B^{m}_{k,l} \rangle)\cdot I^{m}_{k,l}{(x,y)}]\\ &=\frac{1}{M}\sum_{m=1}^{M}[(c_{k,l}^{m} + d_{k,l}^{m} + e_{k,l}^{m} + f_{k,l}^{m} - \langle c_{k,l}^{m} + d_{k,l}^{m} + e_{k,l}^{m} + f_{k,l}^{m} \rangle)\\ &{\kern 10pt}\cdot I^{m}_{k,l}{(x,y)}]\\ &=\langle c^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle + \langle d^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle + \langle e^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle + \langle f^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle\\ &=C_{k,l}{(x,y)} + D_{k,l}{(x,y)} + E_{k,l}{(x,y)} + F_{k,l}{(x,y)}, \end{aligned}$$
where, $C_{k,l}{(x,y)} = \langle c^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle$, $D_{k,l}{(x,y)} = \langle d^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle$, $E_{k,l}{(x,y)} = \langle e^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle$ and $F_{k,l}{(x,y)} = \langle f^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle$ are all sub-regions of target object $O_{k,l}{(x,y)}$.

Similarly, the target object reconstructed with offset, $(O_{k,l}{(x,y)})^{\prime}$, can be derived as:

$$\begin{aligned} (O_{k,l}{(x,y)})^{\prime} &=\langle (B^{m}_{k,l})^{\prime}I^{m}_{k,l}{(x,y)} \rangle\nonumber\\ &=\frac{1}{M}\sum_{m=1}^{M}[((B^{m}_{k,l})^{\prime} - \langle (B^{m}_{k,l})^{\prime} \rangle)\cdot I^{m}_{k,l}{(x,y)}]\nonumber\\ &=\frac{1}{M}\sum_{m=1}^{M}[(c_{k,l}^{m} + d_{k,l-1}^{m} + e_{k-1,l}^{m} + f_{k-1,l-1}^{m} - \langle c_{k,l}^{m} + d_{k,l-1}^{m} + e_{k-1,l}^{m} + f_{k-1,l-1}^{m} \rangle)\nonumber\\ &{\kern 10pt}\cdot I^{m}_{k,l}{(x,y)}]\nonumber\\ &=\langle c^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle + \langle d_{k,l-1}^{m}I^{m}_{k,l}{(x,y)} \rangle + \langle e_{k-1,l}^{m}I^{m}_{k,l}{(x,y)} \rangle + \langle f_{k-1,l-1}^{m}I^{m}_{k,l}{(x,y)} \rangle\nonumber\\ &=C_{k,l}{(x,y)} + D_{k,l-1}{(x,y)} + E_{k-1,l}{(x,y)} + F_{k-1,l-1}{(x,y)}, \end{aligned}$$
where, $C_{k,l}{(x,y)} = \langle c^{m}_{k,l}I^{m}_{k,l}{(x,y)} \rangle$, $D_{k,l-1}{(x,y)} = \langle d_{k,l-1}^{m}I^{m}_{k,l}{(x,y)} \rangle$, $E_{k-1,l}{(x,y)} = \langle e_{k-1,l}^{m}I^{m}_{k,l}{(x,y)} \rangle$ and $F_{k-1,l-1}{(x,y)} = \langle f_{k-1,l-1}^{m}I^{m}_{k,l}{(x,y)} \rangle$ respectively indicate the corresponding target object area in Fig. 3(b). Therefore, we can find that when the array detector and the light field are offset, the problem of overlap and loss of some pixels in the reconstructed image will occur.
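
To make the bookkeeping of Eqs. (4) and (5) concrete, the NumPy sketch below stitches the per-unit reconstructions for an arbitrary detector displacement; it also serves as a minimal stand-in for the simulated offset system of Fig. 2. It assumes that every unit projects the same local pattern set, that the reconstruction is noiseless, and that light falling outside the modulated field contributes nothing; the shift amounts in the example are illustrative.

```python
import numpy as np

def offset_reconstruction(target, k_units, dr, dc):
    """Stitched reconstruction for a k_units x k_units array detector whose footprint is
    displaced by (dr, dc) light-field pixels, following the bookkeeping of Eq. (5)."""
    H, W = target.shape
    u = H // k_units                                   # side length of one detection unit
    out = np.zeros_like(target, dtype=float)
    for k in range(k_units):                           # detection-unit rows
        for l in range(k_units):                       # detection-unit columns
            for a in range(u):
                for b in range(u):
                    fi, fj = k * u + a + dr, l * u + b + dc  # light-field pixel under the footprint
                    if not (0 <= fi < H and 0 <= fj < W):
                        continue                       # the 'invalid area' of Fig. 3(b)
                    # The unit's correlation places this contribution at the local pattern
                    # coordinate (fi % u, fj % u) inside unit (k, l), which is what produces
                    # the misaligned and overlapped pixels described above.
                    out[k * u + fi % u, l * u + fj % u] = target[fi, fj]
    return out

# Example: a 3 x 3 array detector shifted up and to the left by 4 pixels (cf. Fig. 3(b)).
rng = np.random.default_rng(1)
img = rng.random((126, 126))
offset_img = offset_reconstruction(img, k_units=3, dr=-4, dc=-4)
```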

Finally, the loss function is also a very important part of the network. Many loss functions have been used in previous studies, such as the CGAN loss [45], Wasserstein loss [46], GAN loss, mean square error and binary cross-entropy, etc. [24,47,48]. Since Compensation-Net is an improvement of the CGAN, the CGAN loss is a suitable choice for our Compensation-GAN. However, the image generated with the CGAN loss alone is not sufficiently close to the original image. To make the generated image as close as possible to the original image, a 1-norm distance term is added to the CGAN generator loss [44]. The generator loss function of the Compensation-GAN is defined as:

$$L(G) = {L}_{CGAN}(G) + \lambda {L}_{L1}(G),$$
where ${L}_{L1}(G)=||y-G||_{1}$ represents the 1-norm distance between the image $G$ generated by the generator network and the real image $y$, ${L}_{CGAN}(G)$ is the loss function of the CGAN generator, and $\lambda$ is a constant coefficient set to 10. The Adam optimizer is used to update the network parameters of the Compensation-Net, with the initial learning rate $r$, $\beta _{1}$ and $\beta _{2}$ set to 0.0002, 0.5 and 0.999, respectively. To improve the quality of the reconstructed image, the mean square error is used as the loss function of the Compensation-CNN, and the two models used the same initial parameters during training. The training of the Compensation-GAN takes longer than that of the Compensation-CNN. For CGI with the $2\times 2$ array detector, the total number of training epochs is 200, and the training of the Compensation-GAN and the Compensation-CNN each took about 12 hours. For CGI with the $3\times 3$ array detector, 400 epochs are needed; the training of the Compensation-GAN took about 20 hours, and that of the Compensation-CNN about 15 hours. For the more complex CGI with the $4\times 4$ array detector, 800 training epochs were required, and the training times for the Compensation-GAN and Compensation-CNN were about 70 hours and 90 hours, respectively. All training tasks were completed on a workstation (Intel Xeon CPU and one Nvidia GeForce 2080 Ti GPU).
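
A minimal PyTorch sketch of the training objective and optimizer settings described above follows. The placeholder generator and discriminator merely stand in for the Fig. 1 networks, and the binary cross-entropy adversarial term reflects the sigmoid output of the discriminator; these choices are assumptions where the paper does not specify details.

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for the Fig. 1 networks (the real architectures are larger).
generator = nn.Sequential(nn.Conv2d(4, 1, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv2d(5, 1, 3, padding=1), nn.Sigmoid())

bce = nn.BCELoss()    # adversarial (CGAN) term
l1 = nn.L1Loss()      # 1-norm distance term of Eq. (6)
lam = 10.0            # lambda in Eq. (6)

def generator_loss(d_pred_on_fake, fake, real):
    """L(G) = L_CGAN(G) + lambda * ||y - G||_1 (Eq. (6))."""
    adv = bce(d_pred_on_fake, torch.ones_like(d_pred_on_fake))  # try to fool the discriminator
    return adv + lam * l1(fake, real)

# Adam with the stated hyper-parameters: lr = 0.0002, beta1 = 0.5, beta2 = 0.999.
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Compensation-CNN variant: mean squared error as the loss function.
mse = nn.MSELoss()
```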

2.3 Performance evaluation

To objectively evaluate the performance of our method, we quantify the reconstruction quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [2]. PSNR and SSIM reflect the similarity between the reconstructed image and the original image. PSNR is defined in log space from the maximum possible pixel value and the mean squared error (MSE) [2] between the reconstructed image and the original image. SSIM measures the structural similarity between the reconstructed image and the original image based on three image attributes: luminance (l), contrast (c) and structure (s).
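
A straightforward Python implementation of these metrics is sketched below. The PSNR follows the standard definition for images normalized to [0, max_val]; the SSIM shown is the global (single-window) form with the usual constants, whereas the values reported in the paper presumably use the standard locally windowed version.

```python
import numpy as np

def psnr(recon, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reconstruction and its reference."""
    mse = np.mean((recon.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim(recon, ref, max_val=1.0, k1=0.01, k2=0.03):
    """Global structural similarity built from luminance, contrast and structure terms."""
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2
    mu_x, mu_y = recon.mean(), ref.mean()
    var_x, var_y = recon.var(), ref.var()
    cov = np.mean((recon - mu_x) * (ref - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```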

3. Results

3.1 Numerical simulation results

In order to demonstrate that our method has advantages in imaging speed compared with traditional methods, two numerical simulations of imaging speed comparison are conducted.

In the first numerical simulation, different detectors were used to compare the imaging speed of CGI based on the sinusoidal speckle pattern. The numerical simulation result of CGI with different detectors is shown in Fig. 4. We can notice that the CGI based on the $2\times 2$ array detector only needs 4096 measurements to fully recover the target image, while the single-pixel detector requires 16384 samples to recover the target image. Therefore, the imaging speed of CGI based on the $2\times 2$ array detector is 4 times that of the single-pixel detector. Similarly, we can conclude that the imaging speed of the $3\times 3$ array detector and the $4\times 4$ array detector are 9 times and 16 times that of the single-pixel detector, respectively.
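
The speed-up follows from a simple counting argument: for full sampling, each detection unit only needs as many measurements as there are pixels in its own sub-region of the light field. The sketch below reproduces the numbers quoted above, assuming the light field divides evenly among the units (which is why the $3\times 3$ simulations later use a $126\times 126$ field).

```python
def full_sampling_measurements(field_side, k):
    """Measurements per detection unit to fully sample a field_side x field_side light field
    split evenly among a k x k array detector."""
    assert field_side % k == 0, "the field must divide evenly among the detection units"
    return (field_side // k) ** 2

print(full_sampling_measurements(128, 1))  # 16384 (single-pixel detector)
print(full_sampling_measurements(128, 2))  # 4096  -> 4x fewer measurements
print(full_sampling_measurements(128, 4))  # 1024  -> 16x fewer measurements
print(full_sampling_measurements(126, 3))  # 1764  -> 9x fewer (126 x 126 field)
```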

Fig. 4. The numerical simulation results of CGI with different detectors. (a) is the object image of a bird, (b) is the complete recovery result of the $2\times 2$ array detector at $M$ = 4096, and (c) is the complete recovery result of the single-pixel detector at $M$ = 16384.

Then, we performed a numerical simulation of CGI based on the $2\times 2$ array detector with random, Hadamard and sinusoidal speckle patterns to compare the imaging speed.

Fig. 5 shows the reconstructed images obtained with the random, Hadamard and sinusoidal speckle patterns under different sampling numbers $M$ (sampling rates $\gamma$). From Fig. 5, we can see that the reconstructed image quality of the random pattern is poor, even in the case of full acquisition. When $M$ < 4000, the contour and detail information of the object cannot be recovered clearly with the Hadamard pattern; only when $M$ is close to full acquisition can a higher-quality reconstructed image be obtained. The image quality of the sinusoidal speckle pattern, however, is much higher than that of the random and Hadamard patterns at low sampling numbers. The contour and detail information of the object can be clearly reconstructed with the sinusoidal speckle pattern when $M$ = 1000 ($\gamma$ = 0.06), and even when $M$ = 500 ($\gamma$ = 0.03), the contour information of the object can still be recovered.

Fig. 5. The numerical simulation results of CGI with different speckle patterns under different sampling numbers $M$. (a), (b) and (c) are the numerical simulation results of the random, Hadamard and sinusoidal speckle patterns, respectively.

To compare the imaging quality of these three speckle patterns at different sampling numbers more specifically, we calculated their PSNRs and SSIMs, which are shown in Fig. 6(a) and Fig. 6(b). From Fig. 6(a), we find that the PSNR of the sinusoidal speckle pattern is larger than those of the random and Hadamard patterns, and significantly so when $M$ is less than 2000. In particular, when $M$ = 400 ($\gamma$ = 0.024), the PSNR exceeds 20 dB, whereas the PSNR values of the random and Hadamard patterns are below 15 dB. The SSIM of the sinusoidal speckle pattern is also larger and increases faster than those of the random and Hadamard patterns when $M$ is less than 2000. Therefore, our method obtains better imaging quality at low sampling numbers and realizes fast, high-quality image reconstruction.

Fig. 6. The numerical curves of PSNR (a) and SSIM (b) under different $M$ with the random, Hadamard and sinusoidal speckle patterns.

To prove that the Compensation-Net can realize fast, high-quality image reconstruction under offset, numerical simulations of the Compensation-Net were conducted. Two scenarios arise when an array detector is used for the CGI experiment. In the first, the target image on the array detector is smaller than the detection surface of the array detector; when the array detector and the light-field area are not aligned, the target image still falls within the array detector, so the target information collected by the array detector is not lost. In the second, the target image on the array detector is very close in size to the detection surface of the array detector; when the array detector and the light-field area are offset, part of the target information falls outside the detector surface, so the target information collected by the array detector is partially lost.

First, we carried out a numerical simulation on the $2\times 2$ array detector, whose offset between the array detector and the light-field area is relatively simple compared with the other array detectors. Owing to the limitations of our experimental conditions, we use a four-quadrant detector as the $2\times 2$ array detector in the CGI experiment. In this situation, the target image on the $2\times 2$ array detector is smaller than its detection surface, which conforms to the first scenario mentioned above; hence only the first scenario is considered in the numerical simulation of the $2\times 2$ array detector. The second scenario is discussed in the numerical simulations of array detectors with more detection units, such as the $3\times 3$ and $4\times 4$ array detectors. Before the network training, we preprocessed the image data and built the training and test sets. We used 2400 images from the STL-10 [49] dataset as the training set and 500 images as the test set. All the dataset images ($96\times 96$-pixel color images) were converted to grayscale and resized to $128\times 128$. All the training images were then input into the simulated offset system to generate the training images of the Compensation-Net.
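
A minimal torchvision sketch of this preprocessing is shown below. Which STL-10 split the 2400 training and 500 test images were drawn from, and their ordering, are not stated in the paper, so those choices are assumptions.

```python
import torch
import torchvision
import torchvision.transforms as T

# Grayscale the STL-10 images (96 x 96 color) and resize them to 128 x 128.
transform = T.Compose([
    T.Grayscale(num_output_channels=1),
    T.Resize((128, 128)),
    T.ToTensor(),
])

stl10 = torchvision.datasets.STL10(root="./data", split="unlabeled",
                                   download=True, transform=transform)
train_set = torch.utils.data.Subset(stl10, range(2400))       # 2400 training images
test_set = torch.utils.data.Subset(stl10, range(2400, 2900))  # 500 test images
```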

After training, we input the test images into the simulated offset system to obtain randomly offset reconstructed images. These reconstructed offset images were then input into the Compensation-Net to obtain the compensated images. Figs. 7(a)–7(f) show the compensated results of test images under different offset distances and modes: Fig. 7(a) is offset in the vertical direction, Fig. 7(b) is offset in the horizontal direction, and Figs. 7(c)–7(f) are shifted both horizontally and vertically, with different offset modes and distances. From the PSNRs and SSIMs in Fig. 7, we find that the quality of the compensated images is related to the offset distance: as the offset distance decreases, the quality of the compensated image gradually improves. Comparing the compensated images with the real images in Fig. 7 shows that the Compensation-GAN and the Compensation-CNN can both successfully compensate the offset image, and the generated compensated images are very close to the real images. The compensated image quality of the GAN is better than that of the CNN and is also more consistent with the visual perception of the human eye. Consequently, we conclude that our Compensation-Net is effective against the partial pixel misalignment and overlap caused by the offset between the array detector and the light-field area.

Fig. 7. The numerical simulation results of the $2\times 2$ array detector, where PSNRs and SSIMs are presented together. (a) is offset in the vertical direction, (b) is offset in the horizontal direction, and (c)-(f) are shifted both horizontally and vertically.

Next, to confirm the effectiveness and applicability of our Compensation-Net, the $3\times 3$ array detector with more detection units was used for numerical simulation. This simulation corresponds to the second scenario, namely that the size of the target image on the $3\times 3$ array detector is close to the size of its detection surface; therefore, when the array detector and the light-field area are offset, the detector loses part of the object information. The numerical simulation of the $3\times 3$ array detector used the same dataset as that of the $2\times 2$ array detector, except that the image size is $126\times 126$. We then input all the training data into the simulated offset system and started training the network. After training, to verify that the network remains feasible for the $3\times 3$ array detector, we tested it with images different from those used in the numerical simulation of the $2\times 2$ array detector. Fig. 8 shows the numerical simulation results of the $3\times 3$ array detector. From Figs. 8(a)–8(f), we can see that the results are similar to those in Fig. 7: the results of the horizontal and vertical offsets are displayed in Fig. 8(a) and Fig. 8(b), respectively, and Figs. 8(c)–8(f) are offset in both directions at the same time. From the offset images in Fig. 8, it can be observed that when the array detector and the light-field area are offset, the information detected by adjacent detection units of the array detector affects each other; that is, a detection unit may detect information belonging to its neighboring units, resulting in partial misalignment of the reconstructed image. Fig. 8 shows that the Compensation-GAN and Compensation-CNN both perform well in restoring the contours and details of the offset images, with relatively close PSNR and SSIM values. Therefore, the results indicate that the Compensation-Net also performs well in the numerical simulation of the $3\times 3$ array detector.

Fig. 8. The numerical simulation results of the $3\times 3$ array detector, where PSNRs and SSIMs are presented together. (a) and (b) have an offset in the horizontal and vertical directions, respectively, and (c)-(f) are offset in both directions at the same time.

Lastly, to further verify that our Compensation-Net is useful for solving the partial pixel misalignment and overlap of reconstructed images caused by the relative offset between the array detector and the light-field area, a numerical simulation of the $4\times 4$ array detector was carried out on the basis of that of the $3\times 3$ array detector. We used the same dataset as in the numerical simulation of the $2\times 2$ array detector. After training, we again selected other test images for testing. The results of the numerical simulation of the $4\times 4$ array detector are shown in Fig. 9. They are similar to those in Fig. 8, but the offset images in Fig. 9 are more complicated than those in Fig. 8, and it is difficult for the human eye to recognize the target type in the offset images of Fig. 9. Nevertheless, the quality of the compensated images generated by the Compensation-GAN and Compensation-CNN both remains high. In this case, the image quality recovered by the CNN is better than that of the GAN, and the PSNR and SSIM values of the CNN are higher than those of the GAN. From the above numerical simulation results, it can be concluded that our Compensation-Net performs very well in solving the partial misalignment and overlap of images caused by the relative offset between the array detector and the light-field area.

Fig. 9. The numerical simulation results of the $4\times 4$ array detector, where PSNRs and SSIMs are presented together. (a) and (b) have an offset in the horizontal and vertical directions, respectively, and (c)-(f) are offset in both directions at the same time.

3.2 Experimental results

To illustrate the advantages and effectiveness of our method in fast, high-quality image reconstruction under offset, an actual array-detection CGI experiment was conducted. Owing to the limitations of our experimental conditions, we took a simple $2\times 2$ array four-quadrant detector (First Sensor, QP50-6, photodiode with 4$\times$12 mm$^{2}$ active area, 50 mm$^{2}$ quadrant PIN detector) as an example for the CGI experimental verification. The experimental system configuration is illustrated in Fig. 10 and includes a commercial digital light projector (DLP, HCP-839X) and a four-quadrant detector. The DLP was used as the light source, projecting sinusoidal speckle patterns to illuminate the object. The target object is an image of a horse (see the object in Fig. 10) with a size of 8.5 cm $\times$ 8 cm. It is about 0.35 m away from the DLP and about 1.2 m away from the four-quadrant detector. The four-quadrant detector was placed in the reflection direction of the beam splitter to collect the reflected signal. The speckle patterns used were $128\times 128$-pixel sinusoidal speckle patterns constructed from four small $64\times 64$-pixel sinusoidal speckle patterns.

Fig. 10. The experiment system diagram of CGI.

According to the different offset methods, we carried out three kinds of offset experiments between the four-quadrant detector and the light-field area.

Left offset CGI experiment between the four-quadrant detector and the light-field area. The experimental results with a left offset distance of 1.62 cm at different sampling numbers ($M$ = 200, 500, 1000, 2000, 3000, 4000 and 4096) are displayed in Fig. 11. The left offset images are the reconstructed images of CGI, and the compensated images are the outputs of the Compensation-GAN and the Compensation-CNN, respectively. From Fig. 11, it can be observed that the left offset image suffers from partial misalignment and overlap: the horse's body in the left half of the offset image is superimposed on the right half of the image. To resolve this, we input the reconstructed offset images at different sampling numbers into the Compensation-GAN and the Compensation-CNN, respectively, to compensate and correct the reconstructed image. The compensated images in Fig. 11 successfully recover the target information: the horse's body in the left half is restored, and the extra horse's body in the right half disappears. As the number of samples increases, the quality of the reconstructed images gradually improves, and the compensation results of the network get better and better. Our method can clearly recover the detailed information of the object when $M \geq 1000$ (sampling rate $\gamma \geq$ 0.06), and even when the number of samples is very low ($M$ = 500, sampling rate $\gamma$ = 0.03), our Compensation-Net can still output good compensation results. The results in Fig. 11 confirm that our method can realize fast, high-quality image reconstruction under a left offset.

Fig. 11. Experimental results at different sampling numbers when the four-quadrant detector is offset to the left from the light-field area. The left offset images are the reconstructed images of CGI, and the compensated images are the outputs of the Compensation-Net.

Lower offset CGI experiment between the four-quadrant detector and the light-field area. The distance of the downward shift is 0.81 cm. In Fig. 12, the reconstructed images of CGI and the results of the Compensation-Net are shown as the lower offset images and the compensated images, respectively. We can notice from Fig. 12 that part of the horse's legs in the lower half of the lower offset image appears in the upper half. However, the target has been successfully recovered by the Compensation-GAN and the Compensation-CNN (as shown in the compensated images): the excess horse's legs information in the upper half has disappeared, and the lost information in the lower half has been restored. With this method, the object information can be recovered under a lower offset when $M$ > 200 (sampling rate $\gamma$ > 0.01). From Fig. 12, we find that the object information is recovered successfully at $M$ = 1000 (sampling rate $\gamma$ = 0.06), and the contour information of the horse can still be clearly reconstructed even when $M \leq$ 500 (sampling rate $\gamma \leq$ 0.03).

Fig. 12. Experimental results at different sampling numbers when the four-quadrant detector shifts downward from the light-field area. The lower offset images are the reconstructed images of CGI, and the compensated images are the outputs of the Compensation-Net.

Lower-left offset CGI experiment between the four-quadrant detector and the light-field area. In Fig. 13, the target in the lower-left offset image reconstructed by CGI is distorted; the left and lower offset distances are 1.62 cm and 0.81 cm, respectively. To solve this problem, the lower-left offset image was input into the Compensation-GAN and the Compensation-CNN, respectively, and the compensated images output by both networks restored the target information. The quality of the lower-left offset image is lower than that of the left offset image in Fig. 11 and the lower offset image in Fig. 12; nevertheless, our method can still compensate the lower-left offset image to output a high-quality image, and the object information can be roughly recovered at the low sampling rate $\gamma$ = 0.03 ($M$ = 500). The results verify that our method can realize fast, high-quality image reconstruction under a lower-left offset and solve the partial misalignment and overlap of reconstructed images caused by the relative offset between the array detector and the light-field area.

Fig. 13. Experimental results at different sampling numbers when the four-quadrant detector shifts from the light-field area to the lower left. The lower left offset images are the reconstructed images of CGI, and the compensated images are the outputs of the Compensation-Net.

In addition, as can be seen from Figs. 11–13, the central area of the reconstructed offset images still suffers from partial pixel loss, even when the four-quadrant detector and the light-field area are not offset. This partial pixel loss disappears in the compensated images, because the Compensation-Net also compensates for it when correcting the reconstructed offset images. Therefore, our method is not only suitable for solving the partial pixel loss in the reconstructed image caused by the gap between the detection units of the four-quadrant detector, but is also effective for detectors with more detection units, such as $8\times 8$, $16\times 16$, etc. Moreover, as the number of detection units increases, the number of samples is further reduced. The results in Figs. 11–13 confirm that our method has great potential for fast, high-quality CGI, making real-time CGI possible and expanding the application fields of CGI.

4. Conclusion

In this paper, we have proposed and demonstrated a new array-detection CGI method based on deep learning, which uses the Compensation-Net system to realize fast, high-quality image reconstruction under offset. Numerical simulations and experiments confirm that the method is effective and advanced. Both the Compensation-GAN and the Compensation-CNN can compensate the offset images to obtain high-quality reconstructed images. Our network structure has two advantages. First, the residual block structure in Compensation-Net makes the network suitable for array detection with more detector units in more complex environments; second, the attention mechanism in Compensation-Net can significantly improve the training speed of the network and achieve higher-quality reconstructed images. Moreover, our method is also effective for the loss of some pixels in the reconstructed image caused by the gaps between the detection units of the array detector. In addition, the sinusoidal speckle pattern used in our method can achieve high-quality reconstruction with a very low sampling number, further shortening the sampling time. In summary, our method is meaningful for CGI using multiple single-pixel detectors for array measurement, and it will also be valuable in real-time detection, biomedical imaging, and other fields.

Funding

Jilin Province Advanced Electronic Application Technology Trans-regional Cooperation Science and Technology Innovation Center (20200602005ZP); Industrial Innovation Funds of Jilin Province of China (2019C025); Science and Technology Planning Project of Jilin Province (20200404141YY).

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

2. C. Zhou, G. Wang, H. Huang, L. Song, and K. Xue, “Edge detection based on joint iteration ghost imaging,” Opt. Express 27(19), 27295–27307 (2019). [CrossRef]  

3. Z.-H. Xu, W. Chen, J. Penuelas, M. Padgett, and M.-J. Sun, “1000 fps computational ghost imaging using led-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018). [CrossRef]  

4. W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-speed color imaging with single-pixel detectors at low light level,” Phys. Rev. Applied 12(3), 034049 (2019). [CrossRef]  

5. E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, “Low-cost single-pixel 3d imaging by using an led array,” Opt. Express 26(12), 15623–15631 (2018). [CrossRef]  

6. C. Liu, J. Chen, J. Liu, and X. Han, “High frame-rate computational ghost imaging system using an optical fiber phased array and a low-pixel apd array,” Opt. Express 26(8), 10048–10064 (2018). [CrossRef]  

7. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]  

8. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117(11), 113902 (2016). [CrossRef]  

9. A. Schori and S. Shwartz, “X-ray ghost imaging with a laboratory source,” Opt. Express 25(13), 14822–14828 (2017). [CrossRef]  

10. A.-X. Zhang, Y.-H. He, L.-A. Wu, L.-M. Chen, and B.-B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018). [CrossRef]  

11. S.-C. Chen, Z. Feng, J. Li, W. Tan, L.-H. Du, J. Cai, Y. Ma, K. He, H. Ding, Z.-H. Zhai, Z.-R. Li, C.-W. Qiu, X.-C. Zhang, and L.-G. Zhu, “Ghost spintronic thz-emitter-array microscope,” Light Sci Appl 9(1), 1–9 (2020). [CrossRef]  

12. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7(2), 186–191 (2020). [CrossRef]  

13. A. M. Kingston, G. R. Myers, D. Pelliccia, F. Salvemini, J. J. Bevitt, U. Garbe, and D. M. Paganin, “Neutron ghost imaging,” Phys. Rev. A 101(5), 053844 (2020). [CrossRef]  

14. Y.-H. He, Y.-Y. Huang, Z.-R. Zeng, Y.-F. Li, J.-H. Tan, L.-M. Chen, L.-A. Wu, M.-F. Li, B.-G. Quan, S.-L. Wang, and T.-J. Liang, “Single-pixel imaging with neutrons,” Science Bulletin 66(2), 133–138 (2021). [CrossRef]  

15. R. E. Meyers, K. S. Deacon, and Y. Shih, “Turbulence-free ghost imaging,” Appl. Phys. Lett. 98(11), 111115 (2011). [CrossRef]  

16. M.-Q. Yin, L. Wang, and S.-M. Zhao, “Experimental demonstration of influence of underwater turbulence on ghost imaging,” Chinese Phys. B 28(9), 094201 (2019). [CrossRef]  

17. Q.-W. Zhang, W.-D. Li, K. Liu, L.-W. Zhou, Z.-M. Wang, and Y.-J. Gu, “Effect of oceanic turbulence on the visibility of underwater ghost imaging,” J. Opt. Soc. Am. A 36(3), 397–402 (2019). [CrossRef]  

18. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36(3), 394–396 (2011). [CrossRef]  

19. F. Li, M. Zhao, Z. Tian, F. Willomitzer, and O. Cossairt, “Compressive ghost imaging through scattering media with deep learning,” Opt. Express 28(12), 17395–17408 (2020). [CrossRef]  

20. F. Ferri, D. Magatti, L. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

21. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

22. K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “High-order thermal ghost imaging,” Opt. Lett. 34(21), 3343–3345 (2009). [CrossRef]  

23. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

24. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]  

25. X. Zhai, Z.-D. Cheng, Y.-D. Hu, Y. Chen, Z.-Y. Liang, and Y. Wei, “Foveated ghost imaging based on deep learning,” Optics Communications 448, 69–75 (2019). [CrossRef]  

26. C. Hu, Z. Tong, Z. Liu, Z. Huang, J. Wang, and S. Han, “Optimization of light fields in ghost imaging using dictionary learning,” Opt. Express 27(20), 28734–28749 (2019). [CrossRef]  

27. S. Sun, W.-T. Liu, H.-Z. Lin, E.-F. Zhang, J.-Y. Liu, Q. Li, and P.-X. Chen, “Multi-scale adaptive computational ghost imaging,” Sci Rep 6(1), 1–7 (2016). [CrossRef]  

28. F. Liu, X.-F. Liu, R.-M. Lan, X.-R. Yao, S.-C. Dou, X.-Q. Wang, and G.-J. Zhai, “Compressive imaging based on multi-scale modulation and reconstruction in spatial frequency domain,” Chinese Phys. B 30(1), 014208 (2021). [CrossRef]  

29. C. Yang, C. Wang, J. Guan, C. Zhang, S. Guo, W. Gong, and F. Gao, “Scalar-matrix-structured ghost imaging,” Photon. Res. 4(6), 281–285 (2016). [CrossRef]  

30. B. Luo, P. Yin, L. Yin, G. Wu, and H. Guo, “Orthonormalization method in ghost imaging,” Opt. Express 26(18), 23093–23106 (2018). [CrossRef]  

31. M.-J. Sun, L.-T. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A russian dolls ordering of the hadamard basis for compressive single-pixel imaging,” Sci Rep 7(1), 1–7 (2017). [CrossRef]  

32. C. Zhou, T. Tian, C. Gao, W. Gong, and L. Song, “Multi-resolution progressive computational ghost imaging,” J. Opt. 21(5), 055702 (2019). [CrossRef]  

33. W.-K. Yu and Y.-M. Liu, “Single-pixel imaging with origami pattern construction,” Sensors 19(23), 5135 (2019). [CrossRef]  

34. C. Zhang, S. Guo, J. Cao, J. Guan, and F. Gao, “Object reconstitution using pseudo-inverse for ghost imaging,” Opt. Express 22(24), 30063–30073 (2014). [CrossRef]  

35. C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101(14), 141123 (2012). [CrossRef]  

36. G. Wu, T. Li, J. Li, B. Luo, and H. Guo, “Ghost imaging under low-rank constraint,” Opt. Lett. 44(17), 4311–4314 (2019). [CrossRef]  

37. M. A. Herman, J. Tidman, D. Hewitt, T. Weston, and L. McMackin, “A higher-speed compressive sensing camera through multi-diode design,” in Compressive Sensing II, vol. 8717 (International Society for Optics and Photonics, 2013), p. 871706.

38. M.-J. Sun, W. Chen, T.-F. Liu, and L.-J. Li, “Image retrieval in spatial and temporal domains with a quadrant detector,” IEEE Photonics Journal 9(5), 1–6 (2017). [CrossRef]  

39. M.-J. Sun, H.-Y. Wang, and J.-Y. Huang, “Improving the performance of computational ghost imaging by using a quadrant detector and digital micro-scanning,” Sci. Rep. 9(1), 1–7 (2019). [CrossRef]  

40. S. Wang, L. Li, W. Chen, and M. Sun, “Improving seeking precision by utilizing ghost imaging in a semi-active quadrant detection seeker,” Chin. J. Aeronautics (2021), in press. [CrossRef]  

41. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Adv. Neural Info. Proc. Syst. 27, 1–9 (2014).

42. J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2018).

43. A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. icml, vol. 30 (Citeseer, 2013), p. 3.

44. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 1125–1134.

45. M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784 (2014).

46. D. Berthelot, T. Schumm, and L. Metz, “BEGAN: boundary equilibrium generative adversarial networks,” arXiv preprint arXiv:1703.10717 (2017).

47. Y. Ni, D. Zhou, S. Yuan, X. Bai, Z. Xu, J. Chen, C. Li, and X. Zhou, “Color computational ghost imaging based on a generative adversarial network,” Opt. Lett. 46(8), 1840–1843 (2021). [CrossRef]  

48. R. Zhu, H. Yu, Z. Tan, R. Lu, S. Han, Z. Huang, and J. Wang, “Ghost imaging based on y-net: a dynamic coding and decoding approach,” Opt. Express 28(12), 17556–17569 (2020). [CrossRef]  

49. A. Coates, A. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (JMLR Workshop and Conference Proceedings, 2011), pp. 215–223.

