
Rotated rectangular aperture imaging through multi-frame blind deconvolution with Hyper-Laplacian priors

Open Access

Abstract

Rotated rectangular aperture imaging has many advantages in large aperture telephoto systems due to its lower cost and lower complexity, and it makes building super-large-aperture telescopes feasible. In this paper, we combine the ideas of deblurring with rotated rectangular aperture imaging and propose an image synthesis algorithm based on multi-frame deconvolution. In the reconstruction, Hyper-Laplacian priors and sparse priors are used, and an efficient solver is developed. Simulations and real-shot experiments show that our algorithm performs well in both visual quality and objective evaluation, and that the synthetic images are significantly sharper than the results of existing methods.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Traditional imaging systems usually use circular optical lenses, which have a circularly symmetric point spread function (PSF) with the same degradation in all directions. In this way, every direction of the object is equally well imaged. When high-resolution imaging is required, huge lenses must be designed and assembled [1]. Their manufacturing is challenging, and their cost becomes extremely high according to the telescope size-cost scaling law [2]. To alleviate this contradiction, rotated synthetic aperture (RSA) imaging was proposed [3–5]. Under the RSA architecture, a rectangular aperture is used instead of a circular one [6]. In fact, the eyes of many animals in nature have elongated pupil shapes, a result of evolution that helps them hunt prey or escape predation [7]. The principle and pipeline of rotated rectangular aperture imaging are shown in Fig. 1. We can obtain images through the rectangular aperture at different angles by rotation, and then use algorithms to fuse these results into the final image. On the one hand, a rotated rectangular aperture has higher resolution along one axis for the same aperture area [8]; on the other hand, it has smaller volume and mass for the same aperture diameter, which is favorable for space launch. RSA-based telescopes have been developed for many years and are available for deployment at present [6,9]. Compared with a traditional circular telescope, an RSA-based telescope has low complexity and low cost [9]. Besides, it has been shown that rotated rectangular pupil telescopes offer higher contrast at small separations between a bright star and a faint companion [10]. It can be predicted that rotated rectangular aperture imaging will have a bright future in space telescope systems.

Fig. 1. Overview of rotated rectangular aperture imaging. (a) Principle and pipeline of rotated rectangular pupil imaging, which mainly comprises rotating multiple shots and image synthesis. (b) Imaging results of the same object under different apertures; the corresponding aperture is shown in the upper right corner. The left image is the result of the circular aperture. The middle three images are the results of the rectangular aperture at different angles: the $0^\circ$ frame shows obvious degradation in the vertical direction, while the $90^\circ$ frame shows serious degradation in the horizontal direction. The right image is the synthetic result of the multiple rectangular-aperture images.

Although rotational aperture imaging has many advantages and is highly cost-effective, it requires a reliable image fusion algorithm to output a clear image. The open-source fusion methods for RSA are mainly based on matched filters [11–13], which usually achieve a good signal-to-noise ratio. However, this non-blind restoration approach requires the point spread function at each angle as input, and the synthetic result of multiple shots is not particularly satisfactory. In this paper, we synthesize the images from the perspective of deblurring, because the main cause of image degradation is the blur induced by the rectangular aperture. Image deblurring is an enduring topic. Various deblurring algorithms have been proposed in the past few decades, including non-blind deconvolution methods [14–16] and blind deconvolution methods [17–20]. In general, these methods are maximum a posteriori estimations of the images under the framework of Bayesian probability. The prior-based regularization term is therefore crucial, because blind deblurring is an ill-posed problem. Researchers have designed various regularization terms according to different priors, such as Tikhonov regularization [21], the sparse prior [22], the dark channel prior [23], and gradient-related priors. Tikhonov regularization is popular because it is easy to compute, but its results tend to be over-smooth. The gradient-related priors mainly include the $L0$ norm of the gradient [24,25], Total Variation (TV) regularization [26], and Hyper-Laplacian priors [27]. The well-known TV regularization preserves edges better but is more computationally demanding; fortunately, Wang et al. proposed a fast TV algorithm using a half-quadratic model [28]. The Hyper-Laplacian priors represent the heavy-tailed distribution of real image gradients well. However, the Hyper-Laplacian optimization problem is non-convex. To solve it, Krishnan et al. proposed a look-up table method under the half-quadratic model [27], Candes et al. proposed the iteratively reweighted $L1$ minimization (IRL1) method [29], Chartrand et al. proposed the iteratively reweighted least squares (IRLS) method [30], and Zuo et al. proposed a generalized iterated shrinkage algorithm (GISA) [31]. These algorithms have made the $Lp$ problem tractable.

In addition to the prior-based regularization methods mentioned above, some innovative approaches have been proposed for deblurring, such as the Radon transform of the kernel [32], directional filters to reduce noise [33], and inertial sensors to assist kernel estimation [34]. Unlike single-frame deblurring, multi-frame deblurring has multiple inputs of the same target, so more information is available. Piana and Bertero analyzed in which cases multiple blurred frames are better than one [35]. In multi-frame deblurring, the same priors apply. Wen et al. [36] proposed an efficient splitting method that generalizes some fast single-image TV minimization methods to the multi-frame case. Dong et al. proposed a multi-frame deblurring algorithm based on sparse priors [37]. There are also several optical applications based on multi-frame deconvolution [38–40].

Inspired by the success of the Hyper-Laplacian priors in the field of deblurring, we propose a multi-frame blind deconvolution method for synthesizing rotated rectangular aperture images under the framework of maximum a posteriori (MAP) estimation. Our multi-frame blind deconvolution algorithm mainly consists of two steps: kernel estimation and image reconstruction. We adopt an alternating minimization strategy and a half-quadratic splitting scheme to solve the resulting optimization problems. Specifically, we impose $L1$-norm regularization on the kernels during kernel estimation, and the Hyper-Laplacian priors are used to constrain the latent image. In the solution process, we adopt the IRLS algorithm for its good trade-off between accuracy and speed [41]. More details are given in Sec. 3. To the best of our knowledge, this is the first time multi-frame deconvolution has been applied to image synthesis for rotated rectangular aperture imaging. The close match between the gradient distribution of natural images and the Hyper-Laplacian priors provides a guarantee for high-quality image restoration. In summary, we combine the ideas of deblurring with RSA imaging and propose a multi-frame deconvolution algorithm based on Hyper-Laplacian priors, which broadens the range of RSA image synthesis methods. In addition, an efficient solution is developed by generalizing some image deconvolution methods to the multi-frame case. The effectiveness of our method is verified both in simulations and in real experiments with a built prototype. The corresponding results demonstrate that our algorithm performs excellently in both visual effect and objective evaluation, and the synthetic images show that our method significantly improves optical resolution.

2. Rotated rectangular aperture imaging

Rotating aperture imaging technology reduces the manufacturing difficulty and cost of large aperture telescopes [9] and has important value in astronomy and aerospace. Its imaging principle is the same as that of a circular lens, as shown in Fig. 2.

Fig. 2. Schematic of rotated rectangular aperture imaging.

Suppose a monochromatic plane wave with amplitude A is normally incident on the rectangular lens; the amplitude distribution just behind the lens is then:

$$U({x,y} )= A \cdot P({x,y} )\cdot \exp \left( { - j\frac{\pi }{{\lambda f}}({{x^2} + {y^2}} )} \right)$$

Here, f is the focal length of the lens, $({x,y})$ represents the spatial coordinates, and $P({x,y})$ represents the pupil function. Unlike the conventional round pupil, $P({x,y})$ here is rectangular, as shown below:

$$P(x,y) = \begin{cases} 1, & \text{in aperture} \\ 0, & \text{else} \end{cases}$$

Thus, the complex amplitude distribution of the sensor surface is

$${U_f}({{x_f},{y_f}} )= \frac{1}{{j\lambda f}}\exp \left( {\frac{{j\pi }}{{\lambda f}}({{x_f}^2 + {y_f}^2} )} \right) \cdot {\cal F}\left\{ {U({x,y} )\cdot \exp \left( {\frac{{j\pi }}{{\lambda f}}({{x^2} + {y^2}} )} \right)} \right\}$$
where $({x_f},{y_f})$ represents focal plane coordinates, ${\cal F}$ represents the Fourier transform, and the spatial frequencies are ${f_x} = {x_f}/(\lambda f)$, ${f_y} = {y_f}/(\lambda f)$.

According to the above principle, we can simulate the point spread function under different apertures, as shown in Fig. 3.

Fig. 3. Different apertures and the corresponding PSF. (a) Circular aperture. (b) The simulated PSF of the circular aperture. (c) Rectangular aperture with aspect ratio of 5:1. (d) The simulated PSF of the rectangular aperture.
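To make this forward model concrete, the sketch below simulates the incoherent PSF as the squared modulus of the Fourier transform of the pupil, as implied by Eqs. (1)–(3). The grid size, rectangle dimensions, and sampling here are illustrative choices of ours, not the paper's actual simulation parameters.

```python
import numpy as np

def rectangular_pupil(n=512, width=200, height=40, angle_deg=0.0):
    """Binary pupil mask: a centered rectangle, rotated counterclockwise."""
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    t = np.deg2rad(angle_deg)
    # Rotate coordinates so the rectangle test stays axis-aligned.
    xr = xx * np.cos(t) + yy * np.sin(t)
    yr = -xx * np.sin(t) + yy * np.cos(t)
    return ((np.abs(xr) <= width / 2) & (np.abs(yr) <= height / 2)).astype(float)

def incoherent_psf(pupil):
    """Fraunhofer model: PSF proportional to |F{P}|^2, normalized to unit sum."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

# PSFs of the rectangular aperture at 18 rotation angles (10-degree steps, cf. Fig. 4).
psfs = [incoherent_psf(rectangular_pupil(angle_deg=a)) for a in range(0, 180, 10)]
```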

As Fig. 3 shows, the point spread function of the circular aperture has almost the same degradation in the horizontal and vertical directions. In contrast, the PSF of the horizontal rectangular aperture degrades significantly more in the vertical direction than in the horizontal one. This is not difficult to understand: because of the limited aperture extent in the vertical direction, diffraction there is significantly stronger than in the horizontal direction. We rotate the aperture to obtain images at all angles, and the PSF rotates accordingly, as shown in Fig. 4.

Fig. 4. The PSFs obtained by rotating the rectangular aperture in Fig. 3 counterclockwise by a certain angle. All angles in this paper are recorded in this way.

From the above analysis, the PSF of the rectangular aperture clearly degenerates in one direction, which leads to significant blur of the image in that direction (see Fig. 1(b)).

3. Image synthesis based on multi-frame blind deconvolution

From the preceding sections, we know that the images shot at multiple angles are blurred, mainly because of the degraded PSFs determined by the aperture shape. Therefore, the imaging process of the rotated rectangular aperture can be modeled as the convolution of the PSFs with a clear image:

$${y_i} = x \ast {k_i} + n$$

where ${k_i}$ represents the PSF at a certain angle, x and ${y_i}$ represent the latent image and the corresponding blurred image respectively, n stands for additive noise, and ${\ast}$ is the two-dimensional convolution operator.
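As a minimal illustration of this degradation model, the sketch below convolves a latent image with a rotated PSF and adds Gaussian noise. Here `psfs` is assumed from the previous sketch, `x` is an assumed grayscale image in [0, 255], and the noise level is illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_frame(x, k, noise_sigma=2.55, seed=0):
    """y_i = x * k_i + n: 'same'-size convolution plus i.i.d. Gaussian noise."""
    rng = np.random.default_rng(seed)
    y = fftconvolve(x, k, mode='same')
    return y + rng.normal(0.0, noise_sigma, x.shape)

# In practice each PSF would first be cropped to a small support (e.g., ks x ks):
# ys = [blur_frame(x, k) for k in psfs]
```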

Under the framework of MAP estimation, the image reconstruction can be written as:

$$({x,{k_1},{k_2}, \cdots ,{k_m}} )= \mathop {\arg \max }\limits_{x,{k_1}, \cdots ,{k_m}} \prod\limits_{i = 1}^m {p_{y|x,k}}({{y_i}|x,{k_i}} )\cdot {p_x}(x )\cdot {p_k}({{k_i}} )$$

Here, p represents a probability density. Generally, we write this in log-likelihood form:

$$({x,{k_1},{k_2}, \cdots ,{k_m}} )= \mathop {\arg \min }\limits_{x,{k_1}, \cdots ,{k_m}} \left( { - \sum\limits_{i = 1}^m {\log {p_{y|x,k}}({{y_i}|x,{k_i}} )} - m \cdot \log {p_x}(x )- \sum\limits_{i = 1}^m {\log {p_k}({{k_i}} )} } \right)$$

In this paper, we assume the noise is independent and identically distributed Gaussian noise, so the first term in the above formula is a Gaussian likelihood. For the latent image, we adopt the Hyper-Laplacian priors, which have been shown to conform to the distribution of natural image gradients [27]. As for the convolution kernels, sparsity is an obvious feature, so the $L1$ norm is used to constrain them. Accordingly, using the probability densities of these priors, we can rewrite Eq. (6) as follows:

$$({x,{k_1},{k_2}, \cdots ,{k_m}} )= \mathop {\arg \min }\limits_{x,{k_1}, \cdots ,{k_m}} \left( {\frac{\lambda }{2}\sum\limits_{i = 1}^m {||{{k_i} \ast x - {y_i}} ||_2^2} + \sum\limits_{j = 1}^2 {||{{f_j} \ast x} ||_p^p} + \frac{\gamma }{2}\sum\limits_{i = 1}^m {{{||{{k_i}} ||}_1}} } \right)$$

Here ${f_j}$ are the first-order derivative filters, e.g., ${f_1} = [1, -1]$ and ${f_2} = {[1, -1]^T}$, and $\lambda, \gamma$ are regularization weights. Since $0 < p < 1$, ${||\cdot||_p}$ is a quasi-norm; the norm notation is used only for convenience. The solution of (7) can be obtained by alternately solving

$$x = \mathop {\arg \min }\limits_x \frac{\lambda }{2}\sum\limits_{i = 1}^m {||{{k_i} \ast x - {y_i}} ||_2^2} + \sum\limits_{j = 1}^2 {||{{f_j} \ast x} ||_p^p}$$
and, for each $i \in \{ 1, \cdots ,m\}$,
$${k_i} = \mathop {\arg \min }\limits_{{k_i}} ||{{k_i} \ast x - {y_i}} ||_2^2 + \mu {||{{k_i}} ||_1}$$
where $\mu = {\gamma / \lambda }$. The following subsections solve these two optimization problems in detail.

3.1 Reconstruction of a latent image

In this section, we focus on solving problem (8). Under the framework of half-quadratic splitting, two auxiliary variables ${w_1}$ and ${w_2}$ are introduced:

$$x = \mathop {\arg \min }\limits_x \frac{\lambda }{2}\sum\limits_{i = 1}^m {||{{k_i} \ast x - {y_i}} ||_2^2} + \sum\limits_{j = 1}^2 {||{{w_j}} ||_p^p} + \frac{\beta }{2}\sum\limits_{j = 1}^2 {||{{f_j} \ast x - {w_j}} ||_2^2}$$

Here $\beta$ is a penalty coefficient that increases gradually during the iterations. As $\beta \to +\infty$, the solution of Eq. (10) converges to that of Eq. (8). The alternating minimization strategy is used again: for a fixed $\beta$, Eq. (10) can be written as the following two sub-problems:

$$x = \mathop {\arg \min }\limits_x \frac{\lambda }{2}\sum\limits_{i = 1}^m {||{{k_i} \ast x - {y_i}} ||_2^2} + \frac{\beta }{2}\sum\limits_{j = 1}^2 {||{{f_j} \ast x - {w_j}} ||_2^2}$$
and, for each $j \in \{ 1,2\}$,
$${w_j} = \mathop {\arg \min }\limits_{{w_j}} \frac{\beta }{2}||{{f_j} \ast x - {w_j}} ||_2^2 + ||{{w_j}} ||_p^p$$

The former is a convex quadratic problem and the latter is an $Lp$ problem. By using the two-dimensional fast Fourier transform, we can easily obtain the closed-form solution of the $x$ sub-problem in the frequency domain:

$$x = {{\cal F}^{ - 1}}\left( {\frac{{\lambda \sum\limits_{i = 1}^m {\overline {{\cal F}({{k_i}} )} {\cal F}({{y_i}} )+ \beta \sum\limits_{j = 1}^2 {\overline {{\cal F}({{f_j}} )} {\cal F}({{w_j}} )} } }}{{\lambda \sum\limits_{i = 1}^m {\overline {{\cal F}({{k_i}} )} {\cal F}({{k_i}} )+ \beta \sum\limits_{j = 1}^2 {\overline {{\cal F}({{f_j}} )} {\cal F}({{f_j}} )} } }}} \right)$$
where ${\cal F}({\cdot} )$ and ${{\cal F}^{ - 1}}({\cdot} )$ represent the Fast Fourier Transform (FFT) and inverse FFT, respectively; $\bar{c}$ stands for the conjugate of c.
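A direct transcription of Eq. (13) might look as follows. This is a sketch under assumptions of ours: kernels and derivative filters are zero-padded to the image size, periodic boundary conditions are implied by the FFT, and kernel centering is ignored for brevity.

```python
import numpy as np

def update_x(ys, ks, ws, fs, lam, beta):
    """Closed-form x update of Eq. (13) in the frequency domain."""
    shape = ys[0].shape
    num = np.zeros(shape, dtype=complex)   # numerator of Eq. (13)
    den = np.zeros(shape, dtype=complex)   # denominator of Eq. (13)
    for y, k in zip(ys, ks):               # data terms, one per frame
        K = np.fft.fft2(k, s=shape)
        num += lam * np.conj(K) * np.fft.fft2(y)
        den += lam * np.conj(K) * K
    for w, f in zip(ws, fs):               # regularization terms, one per filter
        F = np.fft.fft2(f, s=shape)
        num += beta * np.conj(F) * np.fft.fft2(w)
        den += beta * np.conj(F) * F
    return np.real(np.fft.ifft2(num / den))
```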

Given a fixed $x$, the $w$ sub-problems are two independent one-dimensional problems, which we solve with the IRLS algorithm [41]. Eq. (12) can be approximated in the following form:

$${w_j} = \mathop {\arg \min }\limits_{{w_j}} \frac{\beta }{2}||{{f_j} \ast x - {w_j}} ||_2^2 + \sum\limits_{t = 1}^n {{{({{w_{jt}}^2 + \varepsilon } )}^{\frac{p}{2}}}}$$

Here $t$ denotes the pixel index and $\varepsilon$ is an extremely small number that prevents division by zero. The solutions for ${w_j}$ are then obtained by iterating:

$$w_{jt}^l = \frac{{\beta {{({{f_j} \ast x} )}_t}}}{{\beta + {p / {{{({{{({w_{jt}^{l - 1}} )}^2} + \varepsilon } )}^{1 - \frac{p}{2}}}}}}}$$
where $l$ denotes the number of iterations. The main process of solving Eq. (8) is summarized in Algorithm 1.
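Eq. (15) is a pointwise fixed-point iteration; a minimal sketch, with an assumed small number of inner iterations, is:

```python
def update_w(fx, beta, p=0.8, eps=1e-6, iters=3):
    """IRLS update of Eq. (15); fx = f_j * x is held fixed."""
    w = fx.copy()                                     # initialize w^0 with f_j * x
    for _ in range(iters):
        weight = p / (w ** 2 + eps) ** (1.0 - p / 2.0)
        w = beta * fx / (beta + weight)               # Eq. (15)
    return w
```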

Algorithm 1. Reconstruction of the latent image.
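Putting the two updates together, a sketch of Algorithm 1 under the $\beta$-continuation schedule reported in Sec. 4.1 ($\beta_0 = 1$, rate 2, max 2e20) could read as below; it reuses `update_x` and `update_w` from the previous sketches and uses a crude initialization of our choosing.

```python
from scipy.signal import fftconvolve

def reconstruct_x(ys, ks, fs, lam, p=0.8, beta0=1.0, beta_rate=2.0, beta_max=2e20):
    """Algorithm 1 sketch: half-quadratic splitting with increasing beta."""
    x = ys[0].copy()                                  # crude initialization
    beta = beta0
    while beta < beta_max:
        fxs = [fftconvolve(x, f, mode='same') for f in fs]
        ws = [update_w(fx, beta, p=p) for fx in fxs]  # Eq. (15)
        x = update_x(ys, ks, ws, fs, lam, beta)       # Eq. (13)
        beta *= beta_rate
    return x
```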

3.2 Estimating kernel

With the intermediate image $x$, we solve for each ${k_i}$ in terms of Eq. (9). Previous deblurring works [22,31] show that estimating the kernel in the gradient domain is more efficient and accurate. Thus, we estimate the blur kernels by:

$${k_i} = \mathop {\arg \min }\limits_{{k_i}} ||{\nabla x \ast {k_i} - \nabla {y_i}} ||_2^2 + \mu {||{{k_i}} ||_1}$$
where $\nabla$ represents the gradient operator. The above problem can again be solved with the IRLS algorithm, as follows:
$${k_i}^{lk} = \mathop {\arg \min }\limits_{{k_i}} ||{\nabla x \ast {k_i} - \nabla {y_i}} ||_2^2 + \mu \cdot k_i^T{({K_i^{lk}} )^{ - 1}}{k_i}$$
where ${K_i}^{lk} = \textrm{diag}({|{{k_i}^{lk - 1}}|})$ and ${({K_i^{lk}})^{ - 1}}$ is its inverse. The resulting linear system is symmetric positive definite, so the conjugate gradient (CG) method can solve Eq. (17) quickly. We iterate five times in the spatial domain to obtain the kernels. After estimating each kernel, we set values below the threshold $tk$ to zero, and normalize the estimated ${k_i}$ for energy conservation. Combining the previous content, the entire RSA image synthesis algorithm based on multi-frame deconvolution is summarized in Algorithm 2.
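One possible realization of this reweighted kernel step is sketched below, assuming SciPy is available. The adjoint of the convolution operator is implemented here as a correlation followed by a crop to the kernel support, and the threshold and normalization follow the description above; the default parameter values echo Sec. 4.1 but are otherwise illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.sparse.linalg import LinearOperator, cg

def estimate_kernel(grads_x, grads_y, ks=25, mu=0.04, outer=5, tk=0.08):
    """IRLS sketch for Eqs. (16)-(17). grads_x / grads_y hold the gradient
    images of the latent image and of one blurred frame (ks must be odd)."""
    H, W = grads_x[0].shape
    c = ks // 2

    def A(k2d, g):                        # k -> (grad x) * k, 'same' size
        return fftconvolve(g, k2d, mode='same')

    def AT(r, g):                         # adjoint: correlate, crop to kernel support
        full = fftconvolve(r, g[::-1, ::-1], mode='full')
        return full[H - 1 - c:H - 1 - c + ks, W - 1 - c:W - 1 - c + ks]

    b = sum(AT(gy, gx) for gx, gy in zip(grads_x, grads_y)).ravel()
    k = np.full(ks * ks, 1.0 / (ks * ks))            # uniform initialization
    for _ in range(outer):                           # five reweighting passes
        wgt = mu / (np.abs(k) + 1e-6)                # IRLS weights for the L1 term
        def matvec(v):
            k2d = v.reshape(ks, ks)
            out = sum(AT(A(k2d, g), g) for g in grads_x).ravel()
            return out + wgt * v                     # (A^T A + mu K^-1) k
        op = LinearOperator((ks * ks, ks * ks), matvec=matvec)
        k, _ = cg(op, b, x0=k, maxiter=30)
    k[k < tk * k.max()] = 0.0                        # zero out small values (tk)
    return (k / k.sum()).reshape(ks, ks)             # normalize (energy conservation)
```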

Algorithm 2. RSA image synthesis based on multi-frame blind deconvolution.
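At the top level, Algorithm 2 alternates the two stages and finishes with a non-blind pass using the larger weight $\lambda_{rf}$. A sketch reusing the functions defined above, with parameter values echoing Sec. 4.1 but otherwise illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

def algorithm2(ys, ks_size=25, L=10, mu=0.04, lam=1000.0, lam_rf=3000.0):
    """Algorithm 2 sketch: alternate image and kernel updates, then refine."""
    fs = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]  # derivative filters
    kernels = [np.full((ks_size, ks_size), 1.0 / ks_size ** 2) for _ in ys]
    for _ in range(L):
        x = reconstruct_x(ys, kernels, fs, lam)                # Algorithm 1
        gx = [fftconvolve(x, f, mode='same') for f in fs]
        kernels = [estimate_kernel(gx,
                                   [fftconvolve(y, f, mode='same') for f in fs],
                                   ks=ks_size, mu=mu)
                   for y in ys]                                # Eq. (17) per frame
    return reconstruct_x(ys, kernels, fs, lam_rf), kernels     # final non-blind pass
```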

4. Implementation and results

In this section, we describe our implementation and the corresponding results. First, we carried out simulation experiments to verify the correctness of the principle. After that, we built a prototype and restored real-shot images. The details are as follows.

4.1 Simulation experiment

In this subsection, we carry out an imaging simulation according to the principle of Sec. 2. The apertures and the corresponding PSFs are shown in Fig. 3; the aspect ratio of the rectangular aperture is 5:1. In the multiple-shot simulation, we rotate by 10 degrees each time, and the corresponding PSFs at the 18 angles are shown in Fig. 4. The corresponding shots (see Fig. 5) are simulated with Eq. (4), where the noise is Gaussian with variance 2.55. From the figure below, we can see that the images from the rectangular aperture are obviously more blurred than that from the circular aperture.

Fig. 5. Simulated blurred images of rotated rectangular aperture imaging. The image on the left is the ground truth; on the right are three simulated images: circular aperture, 30-degree rectangular aperture, and 140-degree rectangular aperture. The top left corner of each shows the corresponding PSF.

After the 18 blurred inputs are obtained, Algorithm 2 is used for image synthesis. To use Algorithm 2, we should first estimate the size of the degradation kernel. In this simulation experiment, $ks$ is set to 25, and the other parameters are as follows: $\mu = 0.04$, $tk = 0.08$, $L = 10$, $\lambda = 1000$, $p = 0.8$, ${\beta _0} = 1,{\beta _{\textrm{rate}}} = 2,{\beta _{\max }} = 2e20$, ${L_{im}} = 5$, and ${\lambda _{rf}} = 3000$. The estimated blur kernels are shown in Fig. 6.

Fig. 6. The kernels estimated by Algorithm 2.

The estimated blur kernels are almost the same as the real PSFs in Fig. 4, which proves the effectiveness of our algorithm. To better evaluate the similarity of the estimated PSFs, we performed some quantitative analysis. First, we calculated the PSNR and SSIM values between the real and estimated PSFs. Second, we used both sets of PSFs to convolve the original image and calculated the PSNR and SSIM values between the resulting blurred images. The corresponding objective evaluation values are shown in Fig. 7: the PSNR values are all greater than 40 and the SSIM values are all greater than 0.99, intuitively showing the high accuracy of our estimated PSFs. These accurate PSF estimates lay a solid foundation for the subsequent high-quality image reconstruction.

Fig. 7. Quantitative evaluation of the estimated kernels. (a) PSNR values between the real and estimated PSFs, and PSNR values between the blurred images obtained by convolving the original image with these two sets of PSFs. (b) Corresponding SSIM values.
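For reference, PSNR/SSIM numbers of this kind can be computed with scikit-image; the sketch below uses the span of the true PSF as the data range, which is an assumption of ours rather than the paper's stated protocol.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def kernel_similarity(k_true, k_est):
    """PSNR and SSIM between a real PSF and its estimate."""
    span = k_true.max() - k_true.min()
    psnr = peak_signal_noise_ratio(k_true, k_est, data_range=span)
    ssim = structural_similarity(k_true, k_est, data_range=span)
    return psnr, ssim
```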

After the PSFs are estimated, we use them for non-blind reconstruction. The synthesis results and comparisons with other methods are shown in the figure below. Ofek's method is a non-blind restoration method commonly used in RSA imaging [10,12,13]. As Fig. 8 shows, our synthetic image is much clearer and has richer texture details than the result of Ofek's method, and its objective evaluation values are also significantly higher.

Fig. 8. Synthesis results and visual comparisons. On the left is the real image. The middle two are the results of Ofek's method, which is a non-blind reconstruction: GK means using the real kernels (see Fig. 4) as input, and EK means using the estimated kernels (see Fig. 6) as input. On the right is the result of our method. The quantitative evaluation is at the bottom of each image.

4.2 Comparison with other priors

To illustrate the effectiveness of the Hyper-Laplacian prior in RSA image reconstruction, we performed restoration experiments with two other commonly used priors: $L0$-norm gradient regularization [24,25] and the dark channel prior [23]. We take the true simulated blur kernels as inputs and perform non-blind restorations, which allows a fairer comparison of the priors.

Under the $L0$ norm gradient regularization, the optimization problem is converted to:

$$x = \mathop {\arg \min }\limits_x \frac{\lambda }{2}\sum\limits_{i = 1}^m {||{{k_i} \ast x - {y_i}} ||_2^2} + {||w ||_0} + \frac{\beta }{2}||{\nabla x - w} ||_2^2$$

Similarly, the above problem can be solved with the alternating minimization strategy. The $x$ sub-problem is the same as that of the Hyper-Laplacian prior. Following [24], the solution to the $w$ sub-problem is as follows:

$$w = \begin{cases} \nabla x, & |\nabla x|^2 \ge \frac{2}{\beta} \\ 0, & \text{otherwise} \end{cases}$$
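This solution is a simple hard threshold on the gradient; as a sketch:

```python
import numpy as np

def update_w_l0(grad_x, beta):
    """Hard-threshold solution of the L0 sub-problem, Eq. (19)."""
    w = grad_x.copy()
    w[np.abs(grad_x) ** 2 < 2.0 / beta] = 0.0   # keep only strong gradients
    return w
```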

With the dark channel prior, the problem is converted to:

$$x = \mathop {\arg \min }\limits_x \frac{\lambda }{2}\sum\limits_{i = 1}^m {||{{k_i} \ast x - {y_i}} ||_2^2} + {||w ||_0} + \frac{\beta }{2}||{Mx - w} ||_2^2$$
where $M$ represents the matrix used to extract the dark channel of the image. Following Pan's method [23], the $x$ sub-problem can be solved by FFT:
$$x = {{\cal F}^{ - 1}}\left( {\frac{{\lambda \sum\limits_{i = 1}^m {\overline {{\cal F}({{k_i}} )} {\cal F}({{y_i}} )+ \beta {\cal F}({{M^T}w} )} }}{{\lambda \sum\limits_{i = 1}^m {\overline {{\cal F}({{k_i}} )} {\cal F}({{k_i}} )+ \beta } }}} \right)$$

And the solution of the $w$ sub-problem is as follows:

$$w = \begin{cases} Mx, & |Mx|^2 \ge \frac{2}{\beta} \\ 0, & \text{otherwise} \end{cases}$$
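For completeness, a sketch of the dark channel extraction and the corresponding hard-threshold update; the patch size is illustrative, and Pan et al. [23] should be consulted for the exact form of the operator $M$.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Pixel-wise channel minimum followed by a local minimum filter (operator M)."""
    mono = img.min(axis=2) if img.ndim == 3 else img
    return minimum_filter(mono, size=patch)

def update_w_dark(dc, beta):
    """Hard-threshold solution of the dark channel sub-problem, Eq. (22)."""
    w = dc.copy()
    w[np.abs(dc) ** 2 < 2.0 / beta] = 0.0
    return w
```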

We also take the 18 blurred images from the simulation experiment as inputs to test the above two methods. The reconstructed results are shown in Fig. 9.

Fig. 9. Reconstructed results under different priors. All methods are non-blind restorations with the real kernels as input. The top four images are the ground truth and the results of the three methods. The four images below are comparisons of the red-box area.

The image reconstructed with the $L0$ norm is over-smoothed, which loses detail. The image reconstructed with the dark channel prior shows black dot noise, owing to the characteristics of the dark channel. It is evident that the result of the Hyper-Laplacian prior is superior to those of the $L0$-norm regularization and the dark channel prior. This is mainly due to the nature of the prior itself: the Hyper-Laplacian prior represents the heavy-tailed distribution of real image gradients well, which guarantees high-quality restoration (see Fig. 1 in [27]). We also compared the running times of the different methods, summarized in Table 1. All methods were run on a computer with an i5 CPU and 16 GB of memory, under Windows 10 and Matlab R2018b.


Table 1. The running time of the different methods.

4.3 Real shot experiment

After verifying the principles of forward imaging and backward synthesis, we built a prototype for real shooting experiments. Our prototype consists of a 102 mm diameter lens with a Sony IMX253 CMOS sensor behind it. A picture of the prototype is shown in Fig. 10.

Fig. 10. Our prototype for real shooting experiments. The area in the red box in the upper right corner is the rotating device, with the rectangular aperture inside it. The bottom left image shows the rectangular aperture.

In front of the lens, we add a blocking device that transmits light only through a central rectangular area, and mount this device on the rotating stage. The aspect ratio of the rectangular area is 3:1. An ISO 12233 test chart was placed at a distance of about 10 meters as the object. We shoot every 22.5 degrees of rotation, so a total of 8 blurred images are obtained. For convenience of processing, we crop only a 1K × 1K region from each full image, as shown in Fig. 11.

Fig. 11. The real-shot images. The numbers at the bottom show the shooting angles.

To synthesize a clear image from the above 8 blurred inputs, the parameters of Algorithm 2 are set as follows: $tk = 0.05$, $\lambda = 1500$, ${\lambda _{rf}} = 4000$; the other parameters are the same as in the simulation. Figure 12 shows the final synthetic image, and the estimated kernels are shown in Fig. 13.

Fig. 12. Comparisons on a real-shot image. The top three images are the results of three methods: on the left, circular aperture imaging; in the middle, Ofek's method; on the right, our method. The three sets of images below are close-ups of the corresponding color boxes.

Fig. 13. The estimated kernels of the real-shot experiment.

To show the merit of our result, we removed the blocking device and photographed the scene with the full circular aperture, labeled Circular in Fig. 12. As the figures show, our result is obviously better than those of the other two methods. The details in the red and blue boxes in Fig. 12 show that our method significantly improves the optical resolution of the image. In addition, the contents of the green boxes indicate that our synthesized image is sharp in all directions. Furthermore, the rotation angles of the estimated kernels in Fig. 13 further verify the effectiveness of our method.

5. Conclusion

In this paper, we present a new image synthesis method for rotated rectangular aperture imaging that combines multi-frame deconvolution with RSA image synthesis. Under the framework of maximum a posteriori estimation, a Hyper-Laplacian prior and a sparse prior are used to constrain the image and the kernels, respectively, and an efficient solution is developed by generalizing some image deconvolution methods to the multi-frame case. To prove the effectiveness of our algorithm, we first designed a simulation experiment and then conducted a real-shot experiment with a built prototype. Both sets of results show that our algorithm greatly improves image resolution and performs well in both visual effect and objective evaluation.

Funding

National Natural Science Foundation of China (61975175); Science and Technology Support Program of Jiangsu Province (BE2019063).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Kodak24 dataset http://r0k.us/graphics/kodak/kodim15.html.

References

1. M. Born and E. Wolf, Principles of optics: electromagnetic theory of propagation, interference and diffraction of light, 7th ed. (Elsevier, 2013).

2. L. M. Stepp, L. G. Daggert, and P. E. Gillett, “Future giant telescopes,” Proc. SPIE 4840, 309–321 (2003). [CrossRef]  

3. G. L. Rafanelli and M. J. Rehfield, “Full aperture image synthesis using rotating strip aperture image measurements,” U.S. patent application 5,234,351 (7 Sep. 1993).

4. B. R. Hunt, G. L. Rafanelli, P. J. Sementilli, A. M. Bisbee, J. F. Montgomery, and S. K. Johnson, “Super-resolved full aperture scene synthesis using rotating strip aperture image measurements,” U.S. patent application 6,155,704 (5 Dec. 2000).

5. G. L. Rafanelli, S. B. Mount, S. K. Johnson, M. A. Sperka, E. B. Jensen, and M. J. Rehfield, “Moving object and transient event detection using rotation strip aperture image measurements,” U.S. patent application 6,192,322 (20 Feb. 2001).

6. G. L. Rafanelli, C. M. Cosner, S. B. Spencer, D. Wolfe, A. Newman, R. Polidan, and S. Chakrabarti, “Revolutionary astrophysics using an incoherent synthetic optical aperture,” Proc. SPIE 10398, 103980P (2017). [CrossRef]  

7. M. S. Banks, W. W. Sprague, J. Schmoll, A. Q. Parnell, and G. D. Love, “Why do animal eyes have pupils of different shapes?” Sci. Adv. 1(7), e1500391 (2015). [CrossRef]  

8. G. Nir, B. Zackay, and E. O. Ofek, “Can telescopes with elongated pupils achieve higher contrast and resolution?” Proc. SPIE 10701, 107010Q (2018). [CrossRef]  

9. B. Monreal, C. Rodriguez, A. Carney, R. Halliday, and M. Wang, “Wide aperture exoplanet telescope: a low-cost flat configuration for a 100+ meter ground based telescope,” J. Astron. Telesc. Instrum. Syst. 4(2), 024001 (2018).

10. G. Nir, B. Zackay, and E. O. Ofek, “A possible advantage of telescopes with a non-circular pupil,” The Astronomical J. 158(2), 70 (2019). [CrossRef]  

11. B. Zackay, E. O. Ofek, and A. Gal-Yam, “Proper image subtraction—optimal transient detection, photometry, and hypothesis testing,” Astrophys. J. 830(1), 27 (2016). [CrossRef]

12. B. Zackay and E. O. Ofek, “How to COAAD images. I. optimal source detection and photometry of point sources using ensembles of images,” Astrophys. J. 836(2), 187 (2017). [CrossRef]

13. B. Zackay and E. O. Ofek, “How to COAAD images. II. a coaddition image that is optimal for any purpose in the background-dominated noise limit,” Astrophys. J. 836(2), 188 (2017). [CrossRef]

14. A. Danielyan, V. Katkovnik, and K. Egiazarian, “BM3D frames and variational image deblurring,” IEEE Trans. on Image Process. 21(4), 1715–1728 (2012). [CrossRef]  

15. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” The Astronomical J. 79(6), 745–754 (1974). [CrossRef]  

16. W. Dong, H. Feng, Z. Xu, and Q. Li, “A piecewise local regularized Richardson–Lucy algorithm for remote sensing image deconvolution,” Opt. Laser Technol. 43(5), 926–933 (2011). [CrossRef]  

17. R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” in ACM SIGGRAPH (2006), pp. 787–794.

18. Z. Xu and E. Y. Lam, “Maximum a posteriori blind image deconvolution with Huber–Markov random-field regularization,” Opt. Lett. 34(9), 1453–1455 (2009). [CrossRef]  

19. Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” ACM Trans. Graph. 27(3), 1–10 (2008). [CrossRef]  

20. A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 1964–1971.

21. A. N. Tikhonov, “On the stability of inverse problems,” Dokl. Akad. Nauk. SSSR 39(5), 195–198 (1943).

22. M. Yin, J. Gao, D. Tien, and S. Cai, “Blind image deblurring via coupled sparse representation,” J. Vis. Commun. Image Representation 25(5), 814–821 (2014). [CrossRef]  

23. J. Pan, D. Sun, H. Pfister, and M. H. Yang, “Blind image deblurring using dark channel prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1628–1636.

24. J. Pan, Z. Hu, Z. Su, and M. H. Yang, “Deblurring text images via L0-regularized intensity and gradient prior,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 2901–2908.

25. W. Dong, S. Tao, G. Xu, and Y. Chen, “Blind deconvolution for poissonian blurred image with total variation and L0-norm gradient regularizations,” IEEE Trans. on Image Process. 30, 1030–1043 (2021). [CrossRef]  

26. T. F. Chen and C. K. Wong, “Total variation blind deconvolution,” IEEE Trans. on Image Process. 7(3), 370–375 (1998). [CrossRef]  

27. D. Krishnan and R. Fergus, “Fast image deconvolution using Hyper-Laplacian priors,” Adv. Neural Inf. Process. Systems 22, 1033–1041 (2009).

28. Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM J. Imaging Sci. 1(3), 248–272 (2008). [CrossRef]  

29. E. J. Candes, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted L1 minimization,” J. Fourier Anal. Appl. 14(5-6), 877–905 (2008). [CrossRef]  

30. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in IEEE International Conference on Acoustics, Speech and Signal Processing (2008), pp. 3869–3872.

31. W. Zuo, D. Meng, L. Zhang, X. Feng, and D. Zhang, “A generalized iterated shrinkage algorithm for non-convex sparse coding,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2013), pp. 217–224.

32. T. S. Cho, S. Paris, B. P. Horn, and W. T. Freeman, “Blur kernel estimation using the radon transform,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 241–248.

33. L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang, “Handling noise in single image deblurring using directional filters,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 612–619.

34. C. Yang, H. Feng, Z. Xu, Y. Chen, and Q. Li, “Image deblurring utilizing inertial sensors and a short-long-short exposure strategy,” IEEE Trans. on Image Process. 29, 4614–4626 (2020). [CrossRef]  

35. M. Piana and M. Bertero, “Regularized deconvolution of multiple images of the same object,” J. Opt. Soc. Am. A 13(7), 1516–1523 (1996). [CrossRef]  

36. Y. W. Wen, C. Liu, and A. M. Yip, “Fast splitting algorithm for multiframe total variation blind video deconvolution,” Appl. Opt. 49(15), 2761–2768 (2010). [CrossRef]  

37. W. Dong, H. Feng, Z. Xu, and Q. Li, “Multi-frame blind deconvolution using sparse priors,” Opt. Commun. 285(9), 2276–2288 (2012). [CrossRef]  

38. T. J. Schulz, B. E. Stribling, and J. J. Miller, “Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope,” Opt. Express 1(11), 355–362 (1997). [CrossRef]  

39. D. A. Hope and S. M. Jefferies, “Compact multiframe blind deconvolution,” Opt. Lett. 36(6), 867–869 (2011). [CrossRef]  

40. Y. V. Zhulina, “Multiframe blind deconvolution of heavily blurred astronomical images,” Appl. Opt. 45(28), 7342–7352 (2006). [CrossRef]  

41. Q. Lyu, Z. Lin, Y. She, and C. Zhang, “A comparison of typical Lp minimization algorithms,” Neurocomputing 119, 413–424 (2013). [CrossRef]  
