Image restoration for synthetic aperture systems with a non-blind deconvolution algorithm via a deep convolutional neural network

Open Access

Abstract

Optical synthetic aperture imaging systems, which consist of in-phase circular sub-mirrors, can greatly improve the spatial resolution of a space telescope. Because the sub-mirrors are dispersed and sparse, the modulation transfer function is decreased significantly compared with that of a fully filled aperture system, which causes obvious blurring and loss of contrast in the collected image. Image restoration is therefore the key to recovering an ideally clear image. In this paper, a dedicated non-blind deconvolution algorithm for image restoration of optical synthetic aperture systems is proposed. A synthetic aperture convolutional neural network (CNN) is trained as a denoiser prior for restoring the image. By improving the half-quadratic splitting algorithm, the image restoration process is divided into two subproblems: deconvolution and denoising. The CNN removes noise in the gradient domain, and the learned gradients are then used to guide the image deconvolution step. Compared with several conventional algorithms, the proposed method scores highest on all evaluation indices. When the signal-to-noise ratio is 40 dB, the average peak signal-to-noise ratio is raised from 23.7 dB for the degraded images to 30.8 dB for the restored images, and the structural similarity index is increased from 0.78 to 0.93. Both quantitative and qualitative evaluations demonstrate that the proposed method is effective.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The spatial resolution of a conventional diffraction-limited optical system is proportional to its pupil diameter. Higher spatial resolution is currently required for humanity's ambitions to explore the universe, leading to larger space telescopes and hence greater size, weight, volume, and cost [1,2]. To cope with these problems, researchers have proposed optical synthetic aperture systems, which can greatly improve the spatial resolution [2]. Such a system consists of several in-phase circular sub-mirrors, so it needs a smaller light-collecting area to achieve the same resolution as a fully filled aperture system. Because the sub-mirrors are dispersed and sparse, the modulation transfer function (MTF) of the system is decreased significantly, which causes obvious blurring and loss of contrast in the collected image [3-5]. Fortunately, a clear, high-contrast image can be retrieved using image restoration methods.

A high-speed and efficient image restoration method for synthetic aperture systems employs a deconvolution algorithm [6]. Generally, the method is non-blind: it takes the optical transfer function (OTF) or point spread function (PSF) of the synthetic aperture system as the convolution kernel. Traditional algorithms include the Wiener filter and its various modified forms, the linear least-squares filter, and wavelet transformation [7]. These algorithms usually need prior knowledge of the PSF. In practice, however, the collected image contains various kinds of noise, which may prevent successful restoration [8]. For example, in the Wiener filter, the restoration results suffer ringing artifacts to different degrees depending on the assumed signal-to-noise ratio (SNR), which generates many visual artifacts in the final restored image [9]. Although many studies show that the Wiener filter and similar techniques work well, they still have these deficiencies [10].
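
For orientation, a minimal NumPy sketch of such a non-blind Wiener deconvolution is shown below. This is our illustration, not the implementation used in the cited works; the noise-to-signal ratio `nsr` is the hand-tuned constant whose value controls the sharpness/ringing trade-off discussed above.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Restore an image given the PSF and an assumed noise-to-signal ratio."""
    # Embed the (centered) PSF in a full-size array, then roll its center
    # to the origin so the FFT yields an unshifted OTF.
    kernel = np.zeros_like(blurred)
    kernel[:psf.shape[0], :psf.shape[1]] = psf
    kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))
    H = np.fft.fft2(kernel)          # system OTF
    G = np.fft.fft2(blurred)
    # Wiener filter: attenuates frequencies where the OTF is weak
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```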

In recent years, convolutional neural networks (CNNs) have been widely used in image restoration [11,12]. Their nonlinear terms and high-dimensional structure make CNNs more expressive than traditional algorithms [12]. To the best of our knowledge, however, CNNs have not yet been developed for synthetic aperture image restoration. To fill this gap, we train a CNN to learn an effective image prior; CNNs demonstrate powerful prior modeling capacity because of their deep architecture [13]. In this paper, a synthetic aperture convolutional neural network is designed for image restoration of synthetic aperture systems. We treat the CNN as a denoiser and embed it in the iterations of the half-quadratic splitting method. After a few iterations, the proposed method performs well on our dataset, and several evaluation indices confirm its effectiveness.

The rest of the paper is structured as follows. Section 2 presents the image degradation process and the image restoration algorithm. Section 3 discusses the simulation results of the proposed method, and Section 4 presents the experimental validation. Finally, Section 5 summarizes the conclusions.

2. Method

In this section, we establish the dataset according to the image degradation model, which contains degraded images of three types of optical synthetic aperture systems. Then we apply the half-quadratic splitting algorithm and combine it with the CNN.

2.1. Image degradation model

When the PSF is considered spatially invariant and the influence of aberration is neglected, the imaging process of the synthetic aperture system is usually modeled as:

$$g = f \ast \mathrm{PSF} + n$$
where g, f, and n denote the degraded image, the original image, and the noise, respectively, and $\ast$ is the convolution operator. Infinitely many solutions satisfy Eq. (1), so recovering the original image is an ill-posed problem [5].

The intensity PSF of the synthetic aperture system is obtained as the squared modulus of the Fourier transform of the pupil function, which is determined by the array configuration [4]. The pupil function of each sub-aperture can be expressed as:

$$P_{sub}(x,y) = \mathrm{circ}\left( \frac{\sqrt{x^2 + y^2}}{d/2} \right) = \begin{cases} 1, & \sqrt{x^2 + y^2} \le d/2\\ 0, & \text{otherwise} \end{cases}$$
where (x, y) are coordinates centered on the sub-aperture and d denotes the diameter of the sub-aperture. The coherent transfer function of the sub-aperture is the Fourier transform of the pupil function:
$$A_{sub} = F(P_{sub}) = \left( \frac{\pi d^2}{4\lambda f} \right)\frac{2 J_1(\pi d r/\lambda f)}{\pi d r/\lambda f}$$
where F(·) denotes the Fourier transform, λ and f denote the wavelength and the focal length of the system, respectively, and J₁(·) is the first-order Bessel function. The pupil function of the synthetic aperture system is:
$$P(x,y) = \sum\limits_{i = 1}^N {{P_{sub}}(x - {x_i},y - {y_i}){e^{i{\phi _i}(x,y)}}}$$
where N is the number of sub-apertures, (xᵢ, yᵢ) is the center of the i-th sub-aperture, and $\phi_i$ denotes the phase of each sub-aperture. The coherent transfer function of the synthetic aperture system is then:
$$A(u,v) = A_{sub}(u,v) \cdot \sum\limits_{n = 1}^N e^{-i 2\pi \left(\frac{u}{\lambda f} x_n + \frac{v}{\lambda f} y_n\right)}$$

The intensity PSF of the optical synthetic aperture system can be expressed as:

$$PSF({u,v} )= {|{A({u,v} )} |^2}$$

Here, we calculate the PSF of the Golay-3 structure as shown in Fig. 1. The pupil function is as follows:

$$P = \mathrm{circ}\left( \frac{\sqrt{(x + D/2)^2 + (y - \sqrt{3}D/2)^2}}{d/2} \right) + \mathrm{circ}\left( \frac{\sqrt{(x - D/2)^2 + (y - \sqrt{3}D/2)^2}}{d/2} \right) + \mathrm{circ}\left( \frac{\sqrt{x^2 + (y + \sqrt{3}D/2)^2}}{d/2} \right)$$
where d is the diameter of the sub-mirrors and D is the diameter of the circumscribed circle. From Eqs. (2)-(7), the PSF of the Golay-3 system is:
$$PS{F_3} = {(\frac{{\pi {d^2}}}{{4\lambda f}})^2}{(\frac{{2{J_1}(\pi rd/\lambda f)}}{{\pi rd/\lambda f}})^2}{\left|{\sum\limits_{i = 1}^3 {{e^{ - \frac{{2\pi i}}{{\lambda f}}(x{x_i} + y{y_i})}}} } \right|^2}$$
where (xi, yi) is the center of the i-th sub-aperture.
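
To make Eqs. (2)-(8) concrete, the sketch below builds a Golay-3 pupil numerically and obtains the intensity PSF as the squared modulus of the pupil's Fourier transform. The grid size and the values of d and D are illustrative assumptions, not the parameters in Table 1.

```python
import numpy as np

n = 512                                   # samples per side of the pupil grid
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

d, D = 0.3, 0.8                           # sub-aperture / circumcircle diameters (arbitrary units)
centers = [(-D / 2, np.sqrt(3) * D / 2),  # sub-aperture centers from Eq. (7)
           ( D / 2, np.sqrt(3) * D / 2),
           ( 0.0,  -np.sqrt(3) * D / 2)]

def circ(cx, cy):
    """Binary circ() function of Eq. (2), shifted to center (cx, cy)."""
    return (np.sqrt((X - cx) ** 2 + (Y - cy) ** 2) <= d / 2).astype(float)

pupil = sum(circ(cx, cy) for cx, cy in centers)          # Eq. (7), in-phase sub-mirrors
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2   # Eq. (6): |F(P)|^2
psf /= psf.sum()                                          # normalize to unit energy
```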

Fig. 1. Three types of typical array configurations of optical synthetic aperture systems.

Three typical array configurations of synthetic aperture systems are shown in Fig. 1: Golay-3, Annulus, and the structure of the Giant Magellan Telescope (GMT). The PSFs of the Annulus and GMT configurations are:

$$PS{F_{Annulus}} = {\left( {\frac{{\pi {d^2}}}{{4\lambda f}}} \right)^2}{\left( {\frac{{2{J_1}({\pi rd/\lambda f} )}}{{\pi rd/\lambda f}}} \right)^2}{\left|{\sum\limits_{i = 1}^6 {{e^{ - \frac{{2\pi i}}{{\lambda f}}({x{x_i} + y{y_i}} )}}} } \right|^2}$$
and
$$PS{F_{GMT}} = {\left( {\frac{{\pi {d^2}}}{{4\lambda f}}} \right)^2}{\left( {\frac{{2{J_1}({\pi rd/\lambda f} )}}{{\pi rd/\lambda f}}} \right)^2}{\left|{\sum\limits_{i = 1}^7 {{e^{ - \frac{{2\pi i}}{{\lambda f}}({x{x_i} + y{y_i}} )}}} } \right|^2}$$

In order to obtain the actual PSF, Zemax is used for simulation; Table 1 details the parameters of the telescopes. Since the midband MTF must contain no zeros, the fill factors of the three arrays are set to 36%, 38%, and 69%, respectively [6]. Figure 2 shows the 3D models of the three arrays and Fig. 3 shows the corresponding PSFs. As shown in Fig. 3, the PSF of an optical synthetic aperture system contains side lobes outside the central peak region. These side lobes cause obvious blurring and loss of contrast in the final collected image.

Fig. 2. 3D model of the three kinds of array configurations.

Fig. 3. The PSF of the three kinds of array configurations.

Fig. 4. Illustration of the dataset. (a) and (c) are original natural images from the BSD500 dataset. (b) and (d) are obtained by convolving the clear images with the PSF shown in the corner and adding Gaussian noise.

Table 1. Parameters of a Cassegrain telescope system.

After simulating the PSFs of the three types of synthetic aperture systems, we generate the dataset of degraded images according to Eq. (1). The BSD500 dataset, which consists of 500 natural images, is used because a larger training dataset brings only a small improvement in image restoration [12]. These original clear images and their degraded counterparts are taken as the input of the proposed method, as shown in Fig. 4. The degraded images are obtained by convolving the original images with the PSFs of the three systems and then adding noise. Multiplicative noise depends on the state of the system and generally does not obey a Gaussian distribution, and it is difficult for a convolutional neural network to denoise an image containing both multiplicative and additive noise. Moreover, the noise of an optical synthetic aperture system has been shown to be well modeled as Gaussian white noise [14]. Therefore, additive Gaussian noise is added to the images, yielding 1500 degraded images in total. The training set contains 1200 of them; the validation set consists of the remaining 300. To prevent data leakage during training, the 50 images in the test set are other clear images collected from the web. To analyze the robustness of our algorithm to image noise, two networks are trained at different Gaussian noise levels, with the signal-to-noise ratio (SNR) set to 30 dB and 40 dB, respectively. The network training process is described in the following section.
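
A sketch of this degradation pipeline, under our reading of Eq. (1) (not the authors' released code), is shown below: each clear image is convolved with a system PSF and white Gaussian noise is added at a prescribed SNR.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(image, psf, snr_db=40.0, seed=0):
    """Apply g = f * PSF + n with Gaussian noise at the given SNR (dB)."""
    rng = np.random.default_rng(seed)
    blurred = fftconvolve(image, psf, mode="same")           # f * PSF
    signal_power = np.mean(blurred ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))   # SNR definition in dB
    noise = rng.normal(0.0, np.sqrt(noise_power), blurred.shape)
    return blurred + noise                                    # g = f * PSF + n
```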

2.2. Image restoration

2.2.1 Algorithms

The half-quadratic splitting (HQS) algorithm has been widely used in non-blind image deconvolution [15,16]. In a conventional image restoration task, the objective is generally defined as:

$$\hat{f} = \mathop{\min}\limits_f \frac{\alpha}{2}\|g - f \ast \mathrm{PSF}\|_2^2 + \sum\limits_{i = h,v} \rho(\nabla_i f)$$
where α is a trade-off parameter and $\|\cdot\|_2^2$ denotes the squared L2 norm. $\nabla_i f$ is the image gradient and $\ast$ is the convolution operator; for i = h and i = v it denotes the horizontal and vertical gradients, respectively, which can be calculated as:
$$\left\{ {\begin{array}{c} {{\nabla_h}f = {p_h}^{\ast} f}\\ {{\nabla_v}f = {p_v}^{\ast} f} \end{array}} \right.$$
where $p_h$ and $p_v$ are the horizontal and vertical gradient operators. ρ(·) in Eq. (11) is the regularization term, also known as the image prior; here it regularizes the gradient of f. The regularization term makes the minimization computationally intractable, so the HQS method introduces an auxiliary variable z, and Eq. (11) can be rewritten as:
$$\hat{f} = \mathop{\min}\limits_f \frac{\alpha}{2}\|g - f \ast \mathrm{PSF}\|_2^2 + \sum\limits_{i = h,v} \rho(z_i) \quad \mathrm{s.t.}\quad z_i = p_i \ast f$$

To handle the constraint, a penalty term is added, and the HQS method minimizes the following cost function:

$$\mathcal{L}(f,z) = \frac{\alpha}{2}\|g - f \ast \mathrm{PSF}\|_2^2 + \beta\sum\limits_{i = h,v} \|z_i - p_i \ast f\|_2^2 + \sum\limits_{i = h,v} \rho(z_i)$$
where β is a weight. As β approaches infinity, the solution of Eq. (14) becomes equivalent to that of Eq. (11). Since two variables must be solved for, Eq. (14) can be split into two sub-minimization problems: when solving for f, the terms not involving f are omitted, and z is treated analogously. The two subproblems are solved via the following iterative scheme:
$$f_{k+1} = \mathop{\min}\limits_f \frac{\alpha}{2}\|g - f \ast \mathrm{PSF}\|_2^2 + \beta\sum\limits_{i = h,v} \|z_{ik} - p_i \ast f\|_2^2$$
and
$$z_{i(k+1)} = \mathop{\min}\limits_{z_i} \beta\sum\limits_{i = h,v} \|z_i - p_i \ast f_{k+1}\|_2^2 + \sum\limits_{i = h,v} \rho(z_i)$$
where k is the iteration index. The first subproblem is a quadratically regularized least-squares problem, which admits fast solutions for various degradation matrices. An efficient solution can be computed by the fast Fourier transform (FFT) as:
$$f_{k+1} = F^{-1}\left( \frac{\alpha \overline{F(\mathrm{PSF})} F(g) + 2\beta \sum\nolimits_{i = h,v} \overline{F(p_i)} F(z_{ik})}{\alpha \overline{F(\mathrm{PSF})} F(\mathrm{PSF}) + 2\beta \sum\nolimits_{i = h,v} \overline{F(p_i)} F(p_i)} \right)$$
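
The sketch below is a direct NumPy transcription of Eq. (17); the boundary handling and the simple two-tap gradient kernels are our assumptions, since the exact operators are not printed in the paper.

```python
import numpy as np

def f_update(g, psf, z_h, z_v, alpha, beta):
    """One f-step of Eq. (17): closed-form FFT deconvolution."""
    shape = g.shape
    otf = lambda k: np.fft.fft2(k, s=shape)     # zero-padded transfer functions
    H = otf(psf)
    Ph = otf(np.array([[-1.0, 1.0]]))           # horizontal gradient operator p_h
    Pv = otf(np.array([[-1.0], [1.0]]))         # vertical gradient operator p_v
    num = (alpha * np.conj(H) * np.fft.fft2(g)
           + 2 * beta * (np.conj(Ph) * np.fft.fft2(z_h)
                         + np.conj(Pv) * np.fft.fft2(z_v)))
    den = (alpha * np.abs(H) ** 2
           + 2 * beta * (np.abs(Ph) ** 2 + np.abs(Pv) ** 2))
    return np.real(np.fft.ifft2(num / den))
```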

The regularization term can be reformulated as:

$$z_{i(k+1)} = \mathop{\min}\limits_{z_i} \frac{1}{2\left(\sqrt{1/2\beta}\right)^2}\sum\limits_{i = h,v} \|z_i - p_i \ast f_{k+1}\|_2^2 + \sum\limits_{i = h,v} \rho(z_i)$$

From a Bayesian perspective, Eq. (18) corresponds to a Gaussian denoiser with noise level $\sqrt{1/(2\beta)}$ [12], so the second subproblem is actually a denoising problem. Several algorithms could solve it, but conventional denoising algorithms usually need to know the noise level in the image, whereas the noise level of the output image at each iteration is unknown. It is therefore difficult to apply these conventional denoising methods within the half-quadratic splitting loop. A convolutional neural network, by contrast, can automatically learn the information in the image and avoids the difficulty of designing the objective function.

The overall flow of the algorithm is shown in Fig. 5. The degraded images and the PSF of the system are taken as inputs. Equation (17) is used to deconvolve the images for the first time, with the initial $z_i$ set to a zero matrix. The images after this initial deconvolution contain substantial noise. It has been shown that the image gradient models the details and structures of images well, and results denoised in the gradient domain contain little noise and few artifacts [12]. We compute the vertical and horizontal gradients of the processed images from Eq. (12) and then denoise them with the CNN, which is detailed in the next section. To avoid training extra parameters, the vertical gradients are transposed so that both directions can share the same CNN denoiser. The denoised image gradients are then treated as the new $z_i$ in Eq. (17) for the next deconvolution, and Eqs. (16) and (17) are iterated.
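
A high-level sketch of this loop, under our reading of Fig. 5, is shown below; `cnn_denoise` stands in for the trained gradient-domain denoiser and `f_update` is the FFT solver of Eq. (17) sketched above. The values of alpha, beta and the iteration count are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def restore(g, psf, cnn_denoise, alpha=1e3, beta=1.0, iterations=3):
    """HQS restoration: alternate FFT deconvolution and CNN gradient denoising."""
    p_h = np.array([[-1.0, 1.0]])     # gradient kernels of Eq. (12)
    p_v = np.array([[-1.0], [1.0]])
    z_h = np.zeros_like(g)            # initial z is a zero matrix
    z_v = np.zeros_like(g)
    f = g
    for _ in range(iterations):
        f = f_update(g, psf, z_h, z_v, alpha, beta)   # f-step, Eq. (17)
        grad_h = convolve2d(f, p_h, mode="same")      # noisy gradients
        grad_v = convolve2d(f, p_v, mode="same")
        z_h = cnn_denoise(grad_h)                     # z-step, Eq. (16), via the CNN
        z_v = cnn_denoise(grad_v.T).T                 # transposed to share one network
    return f
```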

Fig. 5. Flow chart of the algorithm. The gradients of the degraded images are denoised by the CNN model and taken as the inputs of the next iteration.

2.2.2 Architecture of CNN

The convolutional neural network removes the noise of the optical synthetic aperture system in the gradient domain, and its architecture is carefully designed. It has been pointed out that the effectiveness of a denoising neural network correlates with its receptive field [13]. To decrease the computational burden and reduce computation time, it is appropriate to use small filters with a large depth [16]. The proposed CNN therefore consists of 7 layers with 3×3 convolution filters, giving a receptive field of (2×7 + 1)×(2×7 + 1) = 15×15. To keep the output image the same size as the input, zeros are padded before each convolution, with stride and padding both set to 1. The detailed parameters of the CNN are shown in Table 2, where “CR” denotes a convolutional layer followed by a ReLU nonlinearity, “CBR” denotes a convolutional layer with Batch Normalization (BN) followed by a ReLU nonlinearity [17], and “C” denotes a plain convolutional layer. To improve the performance of our model, residual learning and Batch Normalization are introduced into the network. A residual mapping is much easier to learn than the original unreferenced mapping, and batch normalization offers fast training, better performance, and low sensitivity to initialization; the two strategies benefit from each other for Gaussian denoising [16]. In summary, we use network modules of the form “convolution-BatchNorm-ReLU” together with residual learning, as shown in Fig. 6.
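
A PyTorch sketch consistent with this description (7 layers, 3×3 kernels, stride and padding 1, one “CR” block, five “CBR” blocks, one “C” block, residual output) is given below. The channel width of 64 is our assumption, since Table 2 is not reproduced here.

```python
import torch.nn as nn

class GradientDenoiser(nn.Module):
    """DnCNN-style 7-layer denoiser for single-channel gradient maps."""
    def __init__(self, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1),       # layer 1: CR
                  nn.ReLU(inplace=True)]
        for _ in range(5):                                    # layers 2-6: CBR
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]      # layer 7: C
        self.body = nn.Sequential(*layers)

    def forward(self, y):
        # Residual learning: the network predicts the noise, which is
        # subtracted from the noisy input to give the denoised gradient.
        return y - self.body(y)
```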

Fig. 6. Architecture of the CNN. The network modules are of the form “convolution-BatchNorm-ReLU” with residual learning.

Table 2. The parameters of the CNN.

The training objective is to minimize the difference between the actual residual images and the residuals that the CNN predicts from the noisy input. The L2 norm loss function is not robust to outliers and usually produces results containing noise and ringing artifacts; the L1 norm loss function outperforms it [12]. The L1 norm loss is therefore adopted, which can be expressed as:

$$l(\theta ) = \frac{1}{{2N}}\sum\limits_{i = 1}^N {{{||{R({y_i};\theta ) - ({y_i} - {x_i})} ||}_1}}$$
where y = x + n is the noisy input gradient, θ collects the network parameters to be trained, R(·) denotes the residual mapping with R(yᵢ; θ) ≈ n, and N is the number of training image patches.

The proposed CNN is trained using MatConvNet, a MATLAB toolbox, on an NVIDIA Titan X GPU. The network parameters are optimized iteratively: with the parameters from the previous iteration fixed, those of the next iteration are trained. We use Xavier initialization for the weights of each layer. The patch size is set to 60×60. Stochastic gradient descent with momentum is used to train the network, with a learning rate of 0.01, a weight decay of 0.0001, and a momentum of 0.9.
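
The stated recipe (L1 loss, SGD with momentum 0.9, learning rate 0.01, weight decay 0.0001) maps onto the following sketch. The original training used MatConvNet; this PyTorch rendering is our own, reusing the hypothetical `GradientDenoiser` above.

```python
import torch

model = GradientDenoiser()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
criterion = torch.nn.L1Loss()  # Eq. (19), up to the 1/(2N) scaling

def train_step(noisy, clean):
    """One SGD step on a batch of 60x60 noisy/clean gradient patches."""
    optimizer.zero_grad()
    # The residual model outputs the denoised gradient, so penalizing its
    # L1 distance to the clean gradient is equivalent to penalizing the
    # residual error in Eq. (19).
    loss = criterion(model(noisy), clean)
    loss.backward()
    optimizer.step()
    return loss.item()
```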

3. Simulation results

In this section, we show the performance of the proposed method. The effect of introducing residual learning and Batch Normalization is analyzed, and the image restoration performance of the proposed method is compared with that of other algorithms under different SNRs.

To evaluate the proposed method objectively, we calculate the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), which are widely used to analyze image restoration quality [18]. The denoising performance of the convolutional neural network in the gradient domain is evaluated for the horizontal and vertical gradients separately; the results are shown in Fig. 7. For display purposes, the gradients in Fig. 7 are rendered with MATLAB's colormap function instead of the original grayscale. Table 3 details the average PSNR and SSIM of the horizontal and vertical gradients over the entire test set. Both the PSNR and SSIM of the denoised gradient images are improved.
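
For reference, both indices can be computed as in the sketch below (our illustration using scikit-image; the authors' exact evaluation settings are not specified).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, restored):
    """Return (PSNR in dB, SSIM) of a restored image against its reference."""
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, restored, data_range=data_range)
    ssim = structural_similarity(reference, restored, data_range=data_range)
    return psnr, ssim
```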

Fig. 7. Denoising effect of the convolutional neural network in the gradient domain.

Table 3. The average PSNR and SSIM of the horizontal and vertical gradients.

The proposed method is compared with several conventional algorithms that are popular for image restoration of synthetic aperture systems: the Wiener filter, Hyper-Laplacian (HL) priors, and the Lucy-Richardson (LR) algorithm [19,20]. The test dataset includes images that are widely used for image restoration validation [16], some of which are shown in Fig. 8. Table 4 details the recovery results: under different SNRs, the proposed method achieves the best results on these widely used images.

Fig. 8. Some of the widely used test images in image restoration.

Table 4. PSNR and SSIM of the proposed method.

Fig. 9. Results of the final restored images when the SNR is 40 dB. (a) The original clear images. (b) The degraded images and their corresponding synthetic aperture array configurations. (c) The results obtained by the proposed method. (d) The results obtained by the Lucy-Richardson algorithm. (e) The results obtained by the Hyper-Laplacian priors. (f) The results obtained by the Wiener algorithm.

Optical synthetic aperture imaging systems are applied to ground-based or space telescopes, which photograph celestial objects, and such images differ from everyday scenes. To demonstrate the generalization capability of the method in realistic situations, 20 images from the Hubble Space Telescope are included in the test set. The restoration results are shown in Fig. 9 and Fig. 10. The stars in Fig. 9(b) and Saturn in Fig. 10(b) are clearly blurred. Both the proposed method and the conventional methods make the images visually clearer and higher in contrast than the degraded images. The performance of all methods is affected by the SNR. The conventional methods must balance sharpness against noise by adjusting parameters: if the restored image contains little noise, it remains blurry. For example, in Fig. 9, the Wiener results contain more noise in exchange for sharper restorations, whereas the Hyper-Laplacian results are less noisy but also less sharp, such as the stars in the lower part of the image. To make the comparison fair, the parameters of the other methods are tuned over multiple experiments to maximize the PSNR of the recovered results.

Fig. 10. Results of the final restored images when the SNR is 30 dB.

The proposed method achieves the highest PSNR and SSIM over the entire test set, as shown in Fig. 11.

Fig. 11. The average PSNR and SSIM of the proposed method under different SNRs.

The proposed architecture is also compared with the same CNN without residual learning and batch normalization, as shown in Fig. 12. The PSNR and SSIM of the proposed CNN are superior after the first iteration, and the performance of both networks continues to improve with each iteration. After three iterations, the results of the proposed method surpass those of the other algorithms.

Fig. 12. Comparison of the PSNR and SSIM of the proposed method.

Apart from the widely used PSNR and SSIM indices, several other evaluation metrics are also adopted: the information fidelity criterion (IFC), visual information fidelity (VIF), weighted peak signal-to-noise ratio (WPSNR), and multi-scale structural similarity index (MS-SSIM) [21]. The final results are shown in Table 5. The proposed method yields the highest scores on all evaluation metrics under different noise levels. These quantitative evaluations demonstrate that the proposed method is effective.

Table 5. Comprehensive assessment of the proposed method.

4. Experimental results

In this section, we experimentally validate the proposed method using the platform shown in Fig. 13. A resolution board (USAF 1951), produced according to the United States MIL-STD-150A standard, is illuminated by an LED source; on the board, each group contains elements whose frequency is given in line pairs per millimeter. The beams from the LED source pass through a collimator, which produces parallel beams. The parallel beams are then received by a Golay-3 system using a pupil mask to simulate a telescope with three sub-apertures; each sub-aperture is 6 mm in diameter and the circumscribed circle is 15 mm. Finally, the beams are focused by a 34 mm imaging lens and collected by a CCD camera (Point Grey GS2-GE-20S4M) with a pixel size of 4.4 µm × 4.4 µm.
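
For reference (our addition; the paper states only that frequency is measured in line pairs per millimeter), the spatial frequency of element E in group G of a USAF 1951 target follows the standard relation:

$$\nu = 2^{\,G + (E - 1)/6}\ \textrm{lp/mm}$$

so, for example, element 1 of group 4 corresponds to $2^4 = 16$ lp/mm.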

Fig. 13. Optical setup of the experiment.

The actual degraded images of the optical synthetic aperture system are obtained with this setup. It has been shown that the calculated PSF is a better match for image restoration of an optical synthetic aperture system [9], so the calculated PSF is supplied to the proposed method for image recovery. Figure 14(a) shows the collected image of the resolution target for the Golay-3 configuration. As in the simulations, the collected image is blurred and has low contrast. Figure 14(b) shows the restoration result of the proposed method; the restored image is visually clearer and has higher contrast than the collected image. To quantify the sharpness and contrast enhancement, Figs. 14(c1)-14(c3) show the line traces of the horizontal bars of groups 4, 5, and 6, respectively. The differences between the peaks and valleys become larger, which confirms the deblurring and contrast enhancement.
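
As a rough way to quantify the peak-valley differences in Fig. 14(c), the sketch below extracts a line trace across the horizontal bars and computes the Michelson contrast before and after restoration; the function names and row-based indexing are our own illustration, not the paper's procedure.

```python
import numpy as np

def michelson_contrast(trace):
    """(I_max - I_min) / (I_max + I_min) of a 1-D intensity line trace."""
    i_max, i_min = trace.max(), trace.min()
    return (i_max - i_min) / (i_max + i_min)

def contrast_gain(collected, restored, row):
    """Compare the contrast of one image row before and after restoration."""
    before = michelson_contrast(collected[row, :])
    after = michelson_contrast(restored[row, :])
    return before, after  # larger peak-valley differences -> higher contrast
```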

Fig. 14. The collected image (a) and the recovered image (b); (c1)-(c3) the line traces of the horizontal bars of groups 4, 5, and 6 of (a) and (b), respectively.

5. Conclusion

Although deep learning is popular for image restoration, its application to optical synthetic aperture systems deserves further study. In this paper, we proposed and trained a CNN to restore the images of synthetic aperture systems. We first simulated three kinds of synthetic aperture systems and established a dataset of images degraded by these systems. To restore the degraded images, the CNN is used within the half-quadratic splitting algorithm: it is treated as a denoiser and applied to learn the image prior. Residual learning and Batch Normalization are added to improve the denoising performance. The CNN effectively removes noise in the gradient domain, and the learned gradients then guide the image deconvolution step. When the signal-to-noise ratio is 40 dB, the average peak signal-to-noise ratio is raised from 23.7 dB for the degraded images to 30.8 dB for the restored images, and the structural similarity index is increased from 0.78 to 0.93. We also analyzed the performance of the proposed method and its robustness to image noise using several evaluation metrics; every index score of the proposed method is higher than those of several conventional methods for image restoration of synthetic aperture systems. Both quantitative and qualitative evaluations demonstrate that the proposed method performs well, and it extends to image restoration for other synthetic aperture array configurations.

Funding

National Natural Science Foundation of China (61475018).

Disclosures

The authors declare no conflicts of interest.

References

1. A. B. Meinel, “Aperture Synthesis Using Independent Telescopes,” Appl. Opt. 9(11), 2501 (1970). [CrossRef]  

2. J. S. Fender, “Synthetic apertures: An Overview,” Proc. SPIE 0440, 2–7 (1984). [CrossRef]  

3. C. Zhao and Z. Wang, “Mid-frequency MTF compensation of optical sparse aperture system,” Opt. Express 26(6), 7117 (2018). [CrossRef]

4. Z. L. Xie, H. T. Ma, B. Qi, G. Ren, X. J. He, L. Dong, and Y. F. Tan, “Active sparse aperture imaging using independent transmitter modulation with improved incoherent Fourier ptychographic algorithm,” Opt. Express 25(17), 20541–20556 (2017). [CrossRef]  

5. A. J. Stokes, B. D. Duncan, and M. P. Dierking, “Improving mid-frequency contrast in sparse aperture optical imaging systems based upon the Golay-9 array,” Opt. Express 18(5), 4417–4427 (2010). [CrossRef]  

6. J. R. Fienup and J. J. Miller, “Comparison of reconstruction algorithms for images from sparse-aperture systems,” Proc. SPIE 4792, 1–8 (2002). [CrossRef]  

7. H. Chen, Z. Cen, C. Wang, L. Shun, and L. XiaoTong, “Image Restoration via Improved Wiener Filter Applied to Optical Sparse Aperture Systems,” Optik 147, 350–359 (2017). [CrossRef]  

8. Z. Zhou, D. Wang, and Y. Wang, “Effect of noise on the performance of image restoration in an optical sparse aperture system,” J. Opt. 13(7), 075502 (2011). [CrossRef]  

9. L. Xu, X. Tao, and J. Jia, “Inverse Kernels for Fast Spatial Deconvolution,” European Conference on Computer Vision, Springer International Publishing (2014).

10. D. Wang and S. Tao, “Experimental study on imaging and image restoration of optical sparse aperture systems,” Opt. Eng. 46(10), 103201 (2007). [CrossRef]  

11. D. Guerra-Ramos, L. Díaz-García, J. Trujillo-Sevilla, and J. M. Rodríguez-Ramos, “Piston alignment of segmented optical mirrors via convolutional neural networks,” Opt. Lett. 43(17), 4264–4267 (2018). [CrossRef]  

12. J. Zhang, J. Pan, W. S. Lai, R. Lau, and M. H. Yang, “Learning fully convolutional networks for iterative non-blind deconvolution,” IEEE Conference on Computer Vision and Pattern Recognition, (2016).

13. K. Zhang, W. Zuo, S. Gu, and L. Zhang. “Learning Deep CNN Denoiser Prior for Image Restoration,” IEEE Conference on Computer Vision and Pattern Recognition, (2017).

14. L. Li, Y. Jiang, and C. Wang, “Noise Analysis and Image Restoration for Optical Sparse Aperture Systems,” International Workshop on Geoscience and Remote Sensing. (2008)

15. L. Li, J. Pan, W. S. Lai, C. X. Gao, N. Sang, and M. H. Yang, “Blind Image Deblurring via Deep Discriminative Priors,” International Journal of Computer Vision, (2019).

16. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Processing, (2017).

17. S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” International Conference on International Conference on Machine Learning.JMLR.org, (2015).

18. W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]

19. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62(1), 55–59 (1972). [CrossRef]  

20. D. Krishnan and R. Fergus, “Fast Image Deconvolution using Hyper-Laplacian Priors,” Annual Conference on Neural Information Processing Systems, (2009)

21. C. Y. Yang, C. Ma, and M. H. Yang, “Single-image super-resolution: A benchmark,” in Proc. Eur. Conf. Comput. Vis. (2014).
