Controlled angular and radial scanning for super resolution concentric circular imaging

Open Access

Abstract

Poor motion estimation and subsequent registration are detrimental to super-resolution (SR). In this paper, we present a camera sampling method for achieving SR in concentric circular trajectory sampling (CCTS). Using this method, we can precisely control regular radial and angular shifts in CCTS, and SR techniques can then be applied ring by ring in the radial and angular dimensions. Not only does the proposed camera sampling method eliminate the transient behavior and increase the sampling speed of CCTS, it also preserves the SR accuracy. Our experimental results demonstrate that our approach can accurately recover SR pixels from blurry images.

© 2016 Optical Society of America

1. Introduction

Multiple-image super-resolution (SR) techniques use sub-pixel overlapping of low-resolution (LR) images to reconstruct a high-resolution (HR) image. Typically, motion estimation of LR images is essential for SR techniques [1], as poor motion estimation and subsequent registration are detrimental to SR [2]. For example, a low signal-to-noise ratio (SNR) may cause large registration errors that lead to edge jaggedness in the SR image, thereby hampering the reconstruction of fine details [3]. Image registration can be performed either in the frequency domain or in the spatial domain [4,5]. Most frequency domain methods rely on the fact that two shifted images differ only by a phase variation, which can be estimated using a phase correlation method [4,6,7]. Because of their intrinsic Fourier representation of signals, these methods are robust to noise and can separate rotational from translational components, though they are limited to global motion models. Spatial domain methods, in contrast, use either the whole image or extracted feature vectors to estimate the relative motion [8,9]. Although image registration algorithms are highly developed, their widespread use is limited by computational cost, difficulty in validating the results, and sensitivity to imaging conditions.

Instead of using complex image registration algorithms, several attempts have been made to use controlled or known motion between LR images for SR. These approaches build on the fact that the interpolation for SR can be performed using a set of spatially regularly shifted LR images [10–12] and that the SR problem can be modeled using the generalized sampling theorem (GST) [13,14]. Using GST, regular shifts of LR images are formulated in a forward image formation matrix, and aliasing is formulated as a combination of frequency sub-bands that carry different weights in each LR image. A relatively large determinant of the resultant matrix reduces noise amplification [12]. Regular sub-pixel shifts of the LR images maximize this determinant, enabling weakly regularized reconstructions.

Most shift-based SR methods concentrate on sampling by regular motion of entire LR images in lateral directions, wherein a set of spatially sub-pixel-shifted LR images is merged into a finer grid by up- and down-sampling algorithms [10,11]. A subsequent deblurring filter then deconvolves the combined image to obtain the SR image. Prasad provides a theoretical analysis of the shift-based SR problem from the viewpoint of GST [12]. He generalizes the determinant conditions that guarantee mutual independence between the collected LR images. Based on this, he concludes that the magnitude of the determinant exhibits a maximum and drops sharply as the sub-pixel shifts deviate from the regular spacing by even a small fraction of one HR pixel. Moreover, he notes that N × N regularly sub-pixel-shifted LR images cannot generate an N-rank SR reconstruction matrix. In practice, this issue is addressed by introducing regularization functions into the SR algorithm. While regularization functions are effective in enforcing SR smoothness in the derivative direction, they cannot improve HR restoration.

While a few SR algorithms have been preliminarily developed to include rotational sampling, these methods use the estimate of a single rotation angle for each LR image registration [1,7–9]. Active control of camera rotation has the potential to restore resolution from LR images up to its upper limit, given a rotational sampling and control solution [15]. We have demonstrated that a concentric circular trajectory sampling (CCTS) method can reduce motion vibration and image mapping errors compared to conventional raster scanning [16,17]. CCTS can cover large fields of view at higher speeds, while achieving SR images without increasing hardware cost. In essence, before applying the developed SR techniques, we transform rotationally sampled LR pixels from Polar coordinates to Cartesian coordinates and acquire LR images by interpolation. We have demonstrated that we can control the relative rotation between LR images for registration and SR. However, motion in the radial dimension has remained unexplored for SR because the sampling points of CCTS are irregularly distributed along the radial direction, which limits the SR image quality. One-pixel imaging also limits the implementation of CCTS in practical applications. In this paper, we regularize the concentric circle sampling algorithm and incorporate radial motion into SR image reconstruction. To improve the sampling speed and reduce vibration, we propose a camera-array-based CCTS method for SR. The SR results of both sampling methods are evaluated. We organize the paper as follows: Section 2 describes the proposed imaging and SR methods, Section 3 demonstrates the experimental results, and we draw conclusions in Section 4.

2. Method

The proposed SR scheme consists of three steps. First, a target image is divided into a number of rings associated with their respective radii given by the sampling algorithm. Second, the LR image of every ring is fed into an SR algorithm. The computation of this step occurs in the local Polar coordinates of each ring separately. Third, the SR image of every ring is transformed into Cartesian coordinates and the SR images of all rings are stitched into a complete SR image.

2.1 Ring-based CCTS for SR

In [16], we designed an optimized CCTS algorithm that keeps the rotational sampling in the Cartesian composite image as uniform as possible. We used a set of strict constraints on angular and radial motion to maintain sufficient and non-redundant neighboring sampling points for all Cartesian coordinates. However, this algorithm results in irregular sampling coordinates along the radial and angular directions (see Fig. 1(a)). An ideal solution to this sampling is to divide the sampling area by the Voronoi algorithm, which results in irregularly shaped and sized LR pixels along the radial and angular directions (see Fig. 1(a)). Such LR pixels cannot be regularly shifted in the HR Polar coordinates for SR. Regular-shift sampling for SR requires a regularly spaced array so that the basis unit in each dimension can describe one integer shift in HR coordinates. Furthermore, it is difficult to describe irregularly shaped and sized LR pixels in a regular array in SR algorithms. To solve this problem, we uniformly divide each circular ring into sub-rings in the radial direction, each with the width of one LR pixel (see Fig. 1(b)). We regularly shift the primary sampling points along the radial and angular directions in each sub-ring to generate overlapping areas between LR pixels, which results in HR sector bases (see Fig. 1(c)). This local regular-shift sampling method allows conventional SR techniques to be applied ring by ring.

Fig. 1 CCTS sampling positions. (a) Bottom right quadrant of the optimized CCTS with sampling positions (blue dots), pixel coverage (cyan rectangles), and ideal pixel shapes (blue polygons). (b) Ring-based CCTS. (c) HR sector basis of ring-based CCTS for SR.

In the following, we elaborate upon the ring-based CCTS in Fig. 1(b). We first regularize the constraints on angular motion. As discussed in [16], the ideal tangential sampling density (TSD) for the maximum angular speed approximately equals 2 as the ring radius increases. In other words, the ideal number of sampling points along any ring l is N_l = 2πP_l. To simplify the computation, we apply the integer operation [·] to this result and have

$\hat{N}_l = [\,2\pi P_l\,]$,  (1)
where P_l is the radius of ring l. All sub-rings in this ring have the same number of sampling points N̂_l. Using this sampling frequency N̂_l, we obtain the angular increment in ring l,

$\Delta\Phi_l = 2\pi / \hat{N}_l$.  (2)

Second, we solve for P_l by regularizing the constraints on radial motion. Equation (16) and Fig. 3 in [16] offer an optimized solution to radial motion that balances the missing and overlapping areas in circular sampling by forcing circle l, with radius P_l, to be nearest to the intersections of the rectangular pixels on the circle. In this paper, we relax these constraints and simplify the radial motion between rings by simply setting the radius of the innermost sub-ring of ring l to

$P_l = l \cdot w_l \cdot dX$,  (3)
where w_l is the number of sub-rings in ring l, and dX is the size of each LR pixel. Here we let the width of each sub-ring equal the size of an LR pixel.
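As an illustration (not part of the original algorithm description), the following Python sketch enumerates the sampling positions of ring l from Eqs. (1)–(3); the function name and the choice of expressing radii in LR-pixel units are our own assumptions.

```python
import numpy as np

def ring_sampling_positions(l, w_l, dX=1.0):
    """Polar sampling positions of ring l in the ring-based CCTS.

    Eq. (3): P_l = l * w_l * dX is the innermost sub-ring radius.
    Eq. (1): N_l = [2 * pi * P_l] sampling points per sub-ring.
    Eq. (2): angular increment 2 * pi / N_l, shared by all sub-rings.
    """
    P_l = l * w_l * dX                      # Eq. (3)
    N_l = int(round(2.0 * np.pi * P_l))     # Eq. (1), [.] as integer rounding
    d_phi = 2.0 * np.pi / N_l               # Eq. (2)
    radii = P_l + dX * np.arange(w_l)       # each sub-ring is one LR pixel wide
    angles = d_phi * np.arange(N_l)
    return radii, angles

# Ring 2 with w_l = 4 sub-rings: four radii sharing one set of angles.
radii, angles = ring_sampling_positions(l=2, w_l=4)
print(radii, len(angles))
```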

2.2 Formulation of image formation process

The image formation process can be simplified into a continuous-to-discrete transformation from the scene to an acquired image, ring by ring. We assume three coordinate systems in our SR process: continuous Polar coordinates (CPC) ρ-θ, discrete LR Polar coordinates (LPC) R-ϕ, and discrete HR Polar coordinates (HPC) ξ-η. HPC is an intermediate coordinate system assumed for the SR image before its transformation to Cartesian coordinates. Given an image I(ρ, θ) in the CPC system, LR pixels are acquired on equidistantly spaced grids in the radial and angular dimensions. The projection of the CPC scene onto the LPC pixel at (R, ϕ) is formulated as

$g(R,\phi)=\int_{R-\Delta R/2}^{R+\Delta R/2}\int_{\phi-\Delta\phi/2}^{\phi+\Delta\phi/2} b(R-\rho,\,\phi-\theta)\cdot I(\rho,\theta)\,d\theta\,d\rho+n(R,\phi)$,  (4)
where b(·) denotes the continuous blurring function centered at the coordinates (·), I(·) is the CPC intensity value at (·), and n(·) is the imaging noise centered at (·). Assume an HPC image H(ξ,η) is related to the CPC image I(ρ,θ) by a coordinate transformation (ρ,θ) = s(ξ,η), and each LPC pixel at (R, ϕ) is a sector area covering m × n HPC pixels, where m and n are the partition numbers of HPC pixels in the radial and angular dimensions, respectively. Each HPC pixel has the same size as the unit shifts: radial resolution Δρ and angular resolution Δθ. The HPC image can represent the LPC image by
$g(R,\phi)=\sum_{\xi=R-m\cdot\Delta\rho/2}^{R+m\cdot\Delta\rho/2}\;\sum_{\eta=\phi-n\cdot\Delta\theta/2}^{\phi+n\cdot\Delta\theta/2} B\big((R,\phi)-(\xi,\eta)\big)\cdot H(\xi,\eta)+n(R,\phi)$,  (5)
where B((R,ϕ)−(ξ,η)) denotes the discrete blurring function that assigns weights to the HR pixels acquired within the corresponding LR pixel at the coordinates (R,ϕ). Without loss of generality, Eq. (5) is valid for any LPC image acquisition using any in-plane motion, assuming the continuous image I(ρ,θ) has a single value in each HR pixel area of size (Δρ, Δθ).
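To make Eq. (5) concrete, here is a minimal sketch of the forward model for one LR polar image, assuming a uniform m × n blur kernel for B(·) (the 4 × 4 average blur later used in Section 3.2) and periodic boundary handling via np.roll; both simplifications are ours.

```python
import numpy as np

def forward_lr(H, m, n, shift_r=0, shift_a=0, noise_sigma=0.0, rng=None):
    """Sketch of Eq. (5): shift, blur, and down-sample an HR polar image H.

    Rows of H index radius (xi) and columns index angle (eta).  B(.) is a
    uniform m x n kernel, shift_r/shift_a are the controlled regular shifts
    in HR pixels, and n(R, phi) is zero-mean Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    Hs = np.roll(H, (-shift_r, -shift_a), axis=(0, 1))  # regular HR-pixel shift
    R, A = Hs.shape[0] // m, Hs.shape[1] // n
    # average each m x n block of HR pixels: blur + decimation in one step
    g = Hs[:R * m, :A * n].reshape(R, m, A, n).mean(axis=(1, 3))
    return g + noise_sigma * rng.standard_normal(g.shape)

# 16 LR images from 4 radial x 4 angular regular shifts, as in Section 3.2
H = np.random.rand(64, 256)
lows = [forward_lr(H, 4, 4, dr, da) for dr in range(4) for da in range(4)]
```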

2.3 HR image reconstruction and stitching

As shown in Eq. (5), we construct each LR image by applying down-sampling and blurring operations to the original HR pixels. Extracting SR pixels from LR images is equivalent to solving the inverse image formation problem. In this paper, we consider the following optimization objective for SR image reconstruction,

$\hat{H}(R,\phi)=\arg\min\Big(\sum_l \big\|\,g_l(R,\phi)-\hat{g}_l(R,\phi)\,\big\|\Big)$,  (6)
where ‖·‖ denotes the Euclidean norm. We solve the above optimization problem using the Iterative Back-Projection (IBP) algorithm of [18,19], Projection Onto Convex Sets (POCS), and its special case, the Papoulis-Gerchberg algorithm [6].
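Of these solvers, IBP is the simplest to sketch. The loop below is a generic, minimal IBP under the uniform-blur forward model sketched in Section 2.2, not the authors' exact implementation; the residual is back-projected by plain pixel replication.

```python
import numpy as np

def ibp(lows, shifts, m, n, iters=100, tol=1e-5):
    """Minimal Iterative Back-Projection for the objective of Eq. (6).

    lows   : LR polar images g_l, all of the same shape.
    shifts : (radial, angular) HR-pixel shifts used to acquire each g_l.
    """
    R, A = lows[0].shape
    H = np.kron(lows[0], np.ones((m, n)))        # initial guess: replicated LR
    for _ in range(iters):
        total = 0.0
        for g, (dr, da) in zip(lows, shifts):
            Hs = np.roll(H, (-dr, -da), axis=(0, 1))
            sim = Hs.reshape(R, m, A, n).mean(axis=(1, 3))  # forward model, Eq. (5)
            resid = g - sim
            total += np.linalg.norm(resid)
            # back-project the residual onto the HR grid and undo the shift
            H += np.roll(np.kron(resid, np.ones((m, n))), (dr, da),
                         axis=(0, 1)) / len(lows)
        if total < tol:      # cf. the 1e-5 stop tolerance used in Section 3.2
            break
    return H
```

Applied ring by ring to the 16 regularly shifted LR images of a ring, this yields one SR polar image per ring.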

The above workflow yields SR images of all rings in the HPC system. Using a Polar-Cartesian transform, we project the SR images into continuous Cartesian coordinates. We then interpolate the SR-pixel values in the continuous Cartesian coordinates to obtain a digital image on an integer grid.
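A sketch of this projection and stitching step, assuming SciPy's scattered-data interpolation; the ring-geometry arguments are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

def stitch_rings(ring_imgs, inner_radii, d_rho, d_theta, out_size):
    """Project per-ring SR polar images to Cartesian coordinates and stitch.

    ring_imgs   : list of SR images; rows index radius, columns index angle.
    inner_radii : innermost radius of each ring, in HR-pixel units.
    """
    pts, vals = [], []
    for img, r0 in zip(ring_imgs, inner_radii):
        nr, na = img.shape
        rho = r0 + d_rho * np.arange(nr)
        theta = d_theta * np.arange(na)
        RR, TT = np.meshgrid(rho, theta, indexing="ij")
        pts.append(np.column_stack([(RR * np.cos(TT)).ravel(),
                                    (RR * np.sin(TT)).ravel()]))
        vals.append(img.ravel())
    pts, vals = np.vstack(pts), np.concatenate(vals)
    c = out_size / 2.0
    X, Y = np.meshgrid(np.arange(out_size) - c, np.arange(out_size) - c)
    # interpolate the scattered SR pixels onto the integer Cartesian grid
    return griddata(pts, vals, (X, Y), method="linear")
```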

2.4 Camera sampling for SR

In this section, we extend the ring-based CCTS algorithm to sampling with a camera. When a camera moves in plane, the relative positions between camera pixels remain rigid. Hence, neighboring camera pixels in the same CCTS can generate a sequence of relatively shifted LR images in Polar coordinates. For example, the camera pixels of the same angular direction in each ring compose a one-dimensional array that rotationally samples the area in the ring and acquires the same image as that of ring-based CCTS. However, its sampling time is only 1/w_l of the ring-based CCTS sampling time (refer to Eq. (3) and Fig. 1(b)). Furthermore, shifts between rings can be avoided, which in turn reduces vibration. In the following, we discuss the two-dimensional array of pixels for CCTS sampling.

Figure 2(a) shows a camera array with a size of (M, N) pixels and a pixel resolution of dX × dX. The camera has a local Cartesian coordinate system X'-Y'. In Figs. 2(b) and 2(c), we place the camera-array origin at its bottom left corner. We use the image center, denoted by O, as the rotation center of the camera CCTS. We assign integer indices i and j, i = 1,…,M and j = 1,…,N, to each camera-array pixel to denote its row and column in the camera-array's local Cartesian coordinate system X'-Y'. The center of each pixel {Q_{i,j}} is highlighted by a solid triangle and has Polar coordinates registered by the coordinates X'-Y' and the motion of the camera. Let $\overline{OQ_{i,j}} = r_{i,j}$. The camera array is initially located by pixel (1, 1). We can assign Polar coordinates to all camera-array pixels by

$r_{i,j}=\sqrt{\big(r_{1,1}+(i-1)\,dX\big)^2+\big((j-1)\,dX\big)^2}$,  (7)
and

Fig. 2 Registration of camera pixels in the rotational system. (a) Camera-array resolution and camera-array pixels in the rotational sampling system. (b-c) Matching of camera sampling and ring sampling.

$\alpha_{i,j}=\alpha_{1,1}-\tan^{-1}\!\big(\,(j-1)\,dX\,/\,(r_{1,1}+(i-1)\,dX)\,\big)$.  (8)
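Equations (7) and (8) vectorize directly; the sketch below (our illustration) registers every camera pixel in Polar coordinates.

```python
import numpy as np

def register_camera_pixels(r11, a11, M, N, dX):
    """Polar coordinates of all M x N camera pixels via Eqs. (7) and (8).

    r11, a11 : Polar coordinates of pixel (1, 1); dX : pixel pitch.
    Returns (M, N) arrays indexed by (i - 1, j - 1).
    """
    di = dX * np.arange(M)[:, None]     # (i - 1) * dX, radial leg
    dj = dX * np.arange(N)[None, :]     # (j - 1) * dX, tangential leg
    r = np.hypot(r11 + di, dj)                     # Eq. (7)
    alpha = a11 - np.arctan2(dj, r11 + di)         # Eq. (8)
    return r, alpha

r, alpha = register_camera_pixels(r11=100.0, a11=0.0, M=8, N=4, dX=1.0)
```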

For each sub-ring, we sort the LR images by the order of Δr_{i,j} and Δα_{i,j}. To use the array of camera pixels as LR pixels in the same way as in Sections 2.1-2.3, we need to define in advance the regular shifts by the radial resolution Δr and angular resolution Δα in the Polar grids of HR images. For each ring, we select the camera-array pixels whose radial and angular coordinate shifts are close enough to the regular shifts of ring-based sampling for SR. This selection offers accurate and smooth interpolation results for the LR images used in ring-based SR; the ideal sampling trajectories have sampling points aligned with, or equidistant to, the interpolation coordinates [16]. Hence, to replace ring-based sampling by camera sampling, we need to consider their positioning difference and shift regularity in both the radial and angular directions. Assume each ring in Eq. (3) has the same width M·dX as the camera array above. As analyzed at the beginning of this section, the pixels {Q_{i,1}} of the camera array form a one-dimensional array. Rotating this one-dimensional array, called the one-column array, with the angular increment of Eq. (2) for sampling, we acquire the same sampling positions as in Section 2.1 (see Fig. 1(b)). For ring l, let r = l·M·dX. Clustering the pixels of each ring into arrays of size M × N, we can measure the positioning difference between the paired pixels of camera sampling and ring-based sampling by (refer to Appendix A)

$E_r(i,j)=\dfrac{\sqrt{[\,l\cdot M+(i-1)\,]^2+(j-1)^2}}{l\cdot M+(i-1)}-1$,  (9)
and
$E_\alpha(i,j)=\dfrac{l\cdot M\cdot\tan^{-1}\!\big[\frac{j-1}{l\cdot M+(i-1)}\big]}{j-1}-1$,  (10)
for the radial and angular directions, respectively. To incorporate the pixel array {Q_{i,j}}, i = 1,…,M, j = 1,…,N, into the camera array for sampling, the positioning differences above need to be constrained by
$E_r(i,j)<\beta_r$,  (11)
and,

$E_\alpha(i,j)<\beta_\alpha$.  (12)

Similarly, let β_Δr and β_Δα be the thresholds for the misalignments in the radial and angular shifts, respectively (refer to Appendix A). Array pixel (i, j) on ring l is allowed for sampling when

$|\,1-f_r(l,i,j)\,|<\beta_{\Delta r}$,  (13)
and,
$|\,f_\alpha(l,i,j)\,|<\beta_{\Delta\alpha}$.  (14)
Here $f_r(l,i,j)=\Big\{1+\big[\frac{j-1}{l\cdot M+(i-1)}\big]^2\Big\}^{-1/2}$ and $f_\alpha(l,i,j)=\dfrac{j-1}{[\,l\cdot M+(i-1)\,]^2+(j-1)^2}\cdot\dfrac{dr}{dX}\cdot\dfrac{1}{d\alpha_l}$.

Equations (11)–(14) offer the criteria for determining the camera-array size N, i.e., the maximum number of columns in the array. For camera sampling with a large radius, refer to Appendix B for our detailed analysis.
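A sketch of this selection criterion, using the thresholds later applied in Fig. 3 (β_r = β_α = β_Δr = 0.1, β_Δα = 0.25); taking magnitudes in all four tests, as well as the example values of dr and dα_l, are our reading, not a prescription from the paper.

```python
import numpy as np

def max_columns(l, M, dr, dX, dalpha_l, beta_r=0.1, beta_a=0.1,
                beta_dr=0.1, beta_da=0.25, cap=1000):
    """Largest column count N whose pixels satisfy Eqs. (9)-(14) on ring l."""
    N = 1
    for j in range(2, cap + 1):                 # test adding column j
        for i in range(1, M + 1):
            a = l * M + (i - 1)
            E_r = np.hypot(a, j - 1) / a - 1.0                     # Eq. (9)
            E_a = l * M * np.arctan((j - 1) / a) / (j - 1) - 1.0   # Eq. (10)
            f_r = (1.0 + ((j - 1) / a) ** 2) ** -0.5               # f_r(l, i, j)
            f_a = (j - 1) / (a ** 2 + (j - 1) ** 2) * dr / (dX * dalpha_l)
            if (abs(E_r) >= beta_r or abs(E_a) >= beta_a or
                    abs(1.0 - f_r) >= beta_dr or abs(f_a) >= beta_da):
                return N                        # column j violates a bound
        N = j                                   # all rows of column j pass
    return N

# Illustrative values: dalpha_l from Eq. (24), dr taken as one HR pixel (dX / M).
l, M, dX = 16, 4, 1.0
print(max_columns(l, M, dr=dX / M, dX=dX, dalpha_l=1.0 / (l * M**2 * dX)))
```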

3. Experimental results

We evaluate the performance of the proposed sampling and SR imaging techniques using synthetic and real imaging data. We use the RMSE (refer to [16]) to evaluate the SR results of our radial and angular shift-based sampling.

3.1 Evaluation of sampling efficiency

Figure 3 shows the maximum column numbers calculated by Eqs. (11)–(14) when β_r = 0.1, β_α = 0.1, β_Δα = 0.25, and β_Δr = 0.1. The plots in Fig. 3 clearly illustrate our analytical result in Eqs. (26)–(28) (see Appendix B). When the camera array is located far from the rotation center, the shifts of the array pixels in the radial and angular dimensions of the global Polar coordinates can be approximated by the differences of their local coordinates. The maximum column numbers increase approximately linearly with the sampling radius. However, the angular gradient constraint yields a slower growth of the maximum column number for camera sampling. The resulting maximum column number for each radius is highlighted by a line (see Fig. 3(a)) that is dominated by the angular gradient constraint (Eq. (14)). This is reasonable because our sampling method has approximately linear growth of the radii while the angular growth is much slower, to maintain the sampling uniformity in Cartesian coordinates. The curves determined by Eqs. (11) and (12) grow similarly, indicating that our camera sampling gives equal priority to the angular and radial directions.

Fig. 3 Determination of the column number of the camera array for camera sampling (e.g., the maximum column number for ring 16 is 4). (a) Maximum column numbers determined by Eqs. (11)–(14) for rings 1–16 (magnified view of the rectangle in (b)). (b) Maximum column numbers for rings 1–100.

Using Eqs. (11)–(14) for ring-based sampling and camera sampling, the respective numbers of radial motions (or sub-rings) and angular sampling positions required to acquire an LR image can be calculated (see Table 1). Camera sampling needs no radial motion when the camera size in the radial sampling direction is larger than the necessary field of view. Moreover, camera sampling has a much smaller number of angular sampling positions than ring-based sampling, which reduces the number of sampled images. As shown in Fig. 4, the transient behavior, or jerks, between rings caused by radial and angular motion in constant angular velocity (CAV) CCTS can be avoided in camera sampling. The vibration magnitude can be reduced by one order of magnitude.

Table 1. Comparison of ring-based sampling and camera sampling methods for M × M SR (M is the number of regular shifts in the radial and angular directions).

Fig. 4 Vibration of camera sampling and ring-based CCTS: row 1 – acceleration time series along the X axis; row 2 – acceleration time series along the Y axis; row 3 – spectra of acceleration along the Y axis; column 1 – camera sampling with concentric CAV circular scan; column 2 – ring-based sampling with concentric CAV circular scan.

To evaluate the reduction of positioning errors using camera sampling, we tracked a line of spots (see Fig. 5(a)) at the start of each ring for both the ring-based scan and the camera scan. The spots are dots of the fourth zone of a dot distortion target (AP-DD100-P-RM [20]) with a diameter of 0.2 mm and a dot pitch of 0.4 mm. The dot-center coordinates are identified by the Hough transform circle detection algorithm [21]. We used a 1 × lens to acquire m spot images at the start of each ring for both scans (m = 16 in the following). Assuming the rotation center is located at O, the spots regularly shift one pixel along the Y direction for each ring to simulate the transient motion between rings in the ring-based scan. As shown in Fig. 5(a), the spots are indexed i = 1,…,n, with n = 16. The dot centers for each scan can be registered in two matrices [X_{i,j}] and [Y_{i,j}], i = 1,…,n, j = 1,…,m, where X_{i,j} and Y_{i,j} are the X and Y coordinates of the ith dot at the start of the jth ring, respectively. The motion variation for each dot can be calculated by

Fig. 5 Positioning error at the start of each ring for ring-based and camera sampling. (a) The image of a line of spots acquired by the scans. (b) Positioning error along the X axis. (c) Positioning error along the Y axis. 'o' and '*' are the positioning errors at the start of each ring in ring-based and camera sampling, respectively. Here each pixel scale is 11 μm.

$\Delta X_{i,j}=X_{i,j}-\sum_j X_{i,j}/m$ and $\Delta Y_{i,j}=Y_{i,j}-\sum_j Y_{i,j}/m$. The positioning errors along the X and Y coordinates are calculated by

$E_x^{\,j}=\sqrt{\frac{1}{n-1}\sum_i\big|\Delta X_{i,j}-u_{xi}\big|^2}$,  (15)
and
$E_y^{\,j}=\sqrt{\frac{1}{n-1}\sum_i\big|\Delta Y_{i,j}-u_{yi}\big|^2}$,  (16)
where $u_{xi}=\sum_j \Delta X_{i,j}/n$ and $u_{yi}=\sum_j \Delta Y_{i,j}/n$. Figures 5(b) and 5(c) show the positioning errors calculated by Eqs. (15) and (16). The transient motion along the Y axis between rings incurs ten times larger positioning errors in ring-based sampling than in camera sampling. Positioning errors along the X axis are similar in both scans. Note that one-pixel CCTS [16,17] has transient radial motion at every sampling position, so its positioning errors are much larger than those of ring-based sampling.
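A sketch of this evaluation, given the two matrices of detected dot centers; note that, because ΔX_{i,j} is already mean-centered over the rings, u_{xi} evaluates to zero, and the sketch keeps the term only for fidelity to Eqs. (15) and (16).

```python
import numpy as np

def positioning_errors(X, Y):
    """Per-ring positioning errors of Eqs. (15) and (16).

    X, Y : (n, m) matrices of dot centers; row i is dot i and column j is
    the start of ring j.  Returns one error value per ring.
    """
    n, m = X.shape
    dX = X - X.sum(axis=1, keepdims=True) / m      # Delta X_{i,j}
    dY = Y - Y.sum(axis=1, keepdims=True) / m      # Delta Y_{i,j}
    ux = dX.sum(axis=1, keepdims=True) / n         # u_{xi} (zero by construction)
    uy = dY.sum(axis=1, keepdims=True) / n
    Ex = np.sqrt((np.abs(dX - ux) ** 2).sum(axis=0) / (n - 1))   # Eq. (15)
    Ey = np.sqrt((np.abs(dY - uy) ** 2).sum(axis=0) / (n - 1))   # Eq. (16)
    return Ex, Ey

# 16 dots tracked over 16 ring starts; coordinates in pixels (11 um/pixel).
rng = np.random.default_rng(0)
Ex, Ey = positioning_errors(100 + 0.05 * rng.standard_normal((16, 16)),
                            200 + 0.05 * rng.standard_normal((16, 16)))
```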

3.2 Evaluation of SR with synthetic images

We first generate LR images by convolving the original HR image in Polar coordinates with a 4 × 4 average blurring function, using either the ring-based sampling algorithm or camera sampling, and generate 16 sequential LR images with regular angular and radial shifts for each sampling method. Second, the blurred images are down-sampled in LR Polar coordinates. Third, the blurred images are degraded with zero-mean Gaussian white noise at a 20 dB peak signal-to-noise ratio (PSNR). Figure 6 illustrates the sampling and SR results of an ROI in ISO_12233. One of the degraded LR images is shown in row 2. Rows 3 and 4 are the POCS SR results of ring-based sampling and camera sampling images, respectively. The SR results of both samplings recover the resolution up to the fifth level in the target, and all numerals are visible. Visually, there are no significant differences between the SR results and their respective Fourier spectra. In other words, the two SR results uncover the pattern details up to a similarly high frequency (see the FFT spectrum and strength of the dominant energy). Both of them reduce the variations in LR images to close to those of the ground truth. However, as highlighted in column 4, the SR result of camera sampling shows more variation in the direction of the dominant "oriented energy" than that of ring-based sampling, because the camera sampling method approximates the regular sub-pixel radial and angular motions by interpolation and thereby introduces slight irregularities. Note: we implemented three iterative SR algorithms, POCS, IBP, and Papoulis-Gerchberg, in MATLAB® (tolerance = 10^−5 was used as the iteration stop criterion) and found that POCS is faster and more accurate than the latter two. The variation-based SR algorithms in [16] require expensive computation and cannot achieve the desired SR quality. Hence, in the following we only demonstrate and evaluate the POCS SR results.
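For reference, a 20 dB peak SNR fixes the noise level directly; the small sketch below uses the common definition PSNR = 20·log10(peak/σ), which is our assumption about the convention used.

```python
import numpy as np

def add_gaussian_noise_psnr(img, psnr_db, rng=None):
    """Add zero-mean Gaussian white noise at a prescribed peak SNR.

    From PSNR = 20 * log10(peak / sigma): sigma = peak / 10**(PSNR / 20),
    i.e. sigma = peak / 10 for the 20 dB used here.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = img.max() / 10.0 ** (psnr_db / 20.0)
    return img + sigma * rng.standard_normal(img.shape)

noisy = add_gaussian_noise_psnr(np.random.rand(100, 100), psnr_db=20.0)
```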

Fig. 6 ISO_12233 target: LR and SR results of CCTS ring-based sampling and camera sampling with evaluation of the Fourier spectrum and "oriented energy": Row 1 – ground truth (400 × 400); Row 2 – one blurred and noisy LR image; Row 3 – POCS SR result (400 × 400) of ring-based sampling; Row 4 – POCS SR result (400 × 400) of camera sampling; Column 1 – images; Column 2 – Fourier spectrum; Column 3 – strength of the dominant energy; Column 4 – direction of dominant "oriented energy". Note the frequency and energy analysis of the ground truth was performed only in the same round area as those of the SR results.

Moreover, we quantitatively evaluated the SR image quality of ring-based and camera-sampled images using four publicly available 512 × 512 synthetic HR images as references: bridge, boat, houses, and Lenna [16]. To highlight the results, Fig. 7 only shows the LR and SR results of bridge and houses. The SR results of both sampling methods uncover many high-frequency features missing in the LR images, and the differences between the SR results of the two sampling methods are barely distinguishable. For instance, the bridge frames are significantly sharper and de-aliased after SR for both samplings in the first row of Fig. 7, although the difference between the two SR images is not clear because of the mostly irregular patterns in the ground truth (see the second row). The letters and window frames are much more distinctive in the whole SR images in the third row of Fig. 7. In the highlighted areas in the fourth row, camera sampling causes more variation in the direction of the window frames in the SR results than ring-based sampling.

Fig. 7 SR results of ring-based and camera sampling of synthetic images with evaluation of "oriented energy": row 1 – images of bridge; row 2 – direction of dominant "oriented energy" of row 1; row 3 – images of houses; row 4 – direction of dominant "oriented energy" of row 3; column 1 – ground truth; column 2 – LR image; column 3 – SR for ring-based sampling; column 4 – SR for camera sampling.

To quantitatively evaluate the SR images of the two sampling methods, the RMSE and structural similarity index (SSIM) [22] are calculated between the SR images and their corresponding reference images (see Fig. 8). The RMSE and SSIM values of the optimized CCTS (OCCTS) SR results in [16] are included for comparison. Both methods degrade SR for the boat, bridge, and houses targets compared to the OCCTS sampling method. The difference in SR RMSE between the camera and ring-based sampling methods is insignificant, while camera sampling performs better in terms of SSIM. The improvement in SSIM can be explained by the bicubic interpolation that we apply to the camera-sampled pixel values to achieve regularly angularly and radially distributed pixels for SR. This result is similar to the resolution enhancement in [23], although we performed bicubic interpolation and decimation only once before down-sampling in SR instead of in each iteration. The performance of the technique in [23] for our sampling method deserves investigation in future work. Note: these RMSE values of SR images include the errors from both the sampling and SR processes. In practical implementation, transient behaviors in OCCTS can introduce further errors that degrade its SR result.
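Both metrics are standard; a minimal sketch using scikit-image's SSIM implementation of [22] (our tooling choice, not the paper's):

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_sr(sr, ref):
    """RMSE and SSIM [22] between an SR image and its reference image."""
    sr, ref = sr.astype(np.float64), ref.astype(np.float64)
    rmse = np.sqrt(np.mean((sr - ref) ** 2))
    ssim = structural_similarity(ref, sr, data_range=ref.max() - ref.min())
    return rmse, ssim
```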

Fig. 8 Performance of SR results of sampling of synthetic images: (a) RMSE; (b) SSIM.

Note that the experimental sampling and SR errors include not only motion vibration, optics variation, and interpolation, but also the errors in registering the rotation center when acquiring LR images. In practice, the rotation center needs to be found by an image registration process. In Section 3.2, all synthetic LR images are obtained with the rotation center known to be at the image center. These practical imaging noises and errors prevent real images from achieving SR quality as high as that of synthetic images.

3.3 Evaluation of real sampling and SR results

We used the rotary stage and imaging system in [17] to evaluate ring-based and camera sampling for SR. A 1 × lens is used to acquire the sampled images. The imaging target is the USAF 1951 (QA30) [20] as detailed in [17]. First, we acquired sixteen sequential LR images with four regular angular shifts and four regular radial shifts for both the ring-based and camera sampling methods. Fifteen rings were acquired for each sampling and each sequential LR image using the sampling positions designed in Polar coordinates in Section 2. Second, the POCS SR algorithm was applied to the LR images ring by ring in Polar coordinates. Third, the SR results of all rings were stacked and transformed into HR Polar coordinates. In Figs. 9(a) and 9(b), both SR results reveal all the high-resolution features in group 5 (refer to Fig. 9(c)) that are blurred and indistinct in Fig. 9(d). Both SR results exhibit smoothness, high resolution, and de-aliasing, and the differences between the SR results of the two sampling methods are imperceptible. In this experiment, sixteen LR images were sampled in 120 s for each scan with rotation speed ω = 40°. However, the number of sampled images for ring-based sampling is 3400 × 4 × 4 while that for camera sampling is 1414 × 4 (see Table 1).

Fig. 9 SR results of ring-based and camera sampling of target QA30. (a) SR for camera sampling. (b) SR for ring-based sampling. (c) 20 × image of the target. (d) LR image of CCTS sampling.

4. Conclusions

We present a camera sampling method for SR. In this method, LR pixels have a priori knowledge of registration in both the radial and angular directions for image reconstruction. This reduces computation cost and improves SR reconstruction accuracy. Camera sampling further eliminates transient behaviors in CCTS and simplifies the sampling process, while preserving regular shifts in both the angular and radial directions for SR. Our sampling method results in ring-shaped, small- or medium-sized LR sub-images that are suitable for other image processing algorithms that are limited by image size, such as truncated SVD in deblurring and denoising [24,25].

We also successfully imaged the ISO_12233 target (QA72) and a Siemens star section target (T-50) [20] on the same platform [17] to mimic industrial inspection of products with various patterns. The experimental results were as satisfactory as those of target QA30, though we do not show them here because of limited space. Moreover, our sampling method can be easily incorporated into ubiquitous non-rectangular-sampling infrastructures, including high-precision metrology, biomedical imaging, remote sensing, tracking, astronomy, photogrammetry, and computer graphics. Our sampling method is independent of sensor size, which facilitates its implementation for any field of view. In addition, every ring-shaped sub-image is sampled and reconstructed independently in SR, which allows various-level SR to be implemented for ROIs efficiently and effectively.

In this paper, we used only the standard IBP and related methods for multiple-image SR to demonstrate the advantages of the proposed camera sampling method. The SR performance can be improved by adding bicubic interpolation followed by bicubic decimation in each iteration [23]. Also, standard single-image SR algorithms employ a machine learning step, such as neural networks [26,27], to map the feature maps of an LR image to those of a corresponding HR image, and hence can predict the missing HR details of an input LR image. Single-image SR methods either exploit the self-similarity property of the same image or learn mapping functions from LR and HR exemplar pairs. Our registration method offers the a priori knowledge for the exemplar pairs that may be used for single-image SR.

We used a commonly available camera to test the proposed sampling methods. Additional sampling errors are introduced because the rectangular pixel shape is used to simulate the irregularly shaped LR basis. We have not yet explored the implementation of the sampling methods for other pixel shapes, such as star sectors. If desirably shaped-pixel sensors were implemented for the sampling, the resulting SR image quality would need further investigation.

Appendix A Matching camera sampling pixels to ring-based CCTS sampling pixels

For ring l, camera-array pixel (i, j) on this ring has coordinates defined by Eqs. (7) and (8). From Eqs. (1)–(3), we obtain the Polar coordinates of camera sampling pixel (1, 1) of ring l by

$r_l=l\cdot M\cdot dX$,  (17)
and
$\Delta\alpha_l=2\pi/\hat{N}_{l\cdot M}$,  (18)
where Δαl denotes the angular step motion of camera pixel (1,1) of ring l and αl denotes the angular position of pixel (1,1).

Then, we use gradients to measure the shift regularity of the camera pixels for SR. Letting r = r_l and α = α_l, we obtain the following from Eqs. (7) and (8),

$d(r_{i,j})=\Big\{1+\Big[\frac{(j-1)\,dX}{r+(i-1)\,dX}\Big]^2\Big\}^{-1/2}dr$,  (19)
and
$d(\alpha_{i,j})=d\alpha+\frac{j-1}{[\,r+(i-1)\,]^2+(j-1)^2}\cdot\frac{dr}{dX}$.  (20)
We have
$d(r_{i,j})=f_r(l,i,j)\,dr$,  (21)
and,
$d(\alpha_{i,j})=d\alpha_l+f_\alpha(l,i,j)\cdot d\alpha_l$.  (22)
Apparently, as l → +∞, f_r(l,i,j) → 1 and f_α(l,i,j) → 0, which respectively lead to d(r_{i,j}) → dr and d(α_{i,j}) → dα_l. Thus f_r(l,i,j) − 1 and f_α(l,i,j) can serve as metrics of the misalignments of array pixel (i, j) in ring l in the radial and angular shifts, respectively, compared with the regular shifts (Δr, Δα) of the pixels {Q_{i,1}}. For example, we can assign the sampling values of the pixels {Q_{i,2}} in the second column of the camera array to half of the ring-based sampling positions in Fig. 1(b).

In the following, we find the critical ring l at which the center of pixel (i, j) is sufficiently close to the corresponding pixel center in the ring-based CCTS. From Eq. (13), we have

$\frac{1}{M}\cdot\Big[\frac{j-1}{\sqrt{1/(1-\beta_r)^2-1}}-(i-1)\Big]<l$.  (23)
Given M and β_r, the left-hand side of Eq. (23) attains its maximum when i = 1 and j = M, which gives the critical ring number l for the pixels in the jth column of the camera array to achieve regular shifts along the radial direction. Using Eqs. (1) and (2), we can approximate dα_l by
$d\alpha_l=\frac{1}{l\cdot M^2\cdot dX}$.  (24)
Substituting Eq. (24) into Eq. (22), we obtain

$\frac{(j-1)\cdot l\cdot M^2\cdot dr}{(l\cdot M+i-1)^2+(j-1)^2}<\beta_{\Delta\alpha}$.  (25)

Appendix B Analysis of camera sampling in large radius

For one circle of the camera rotation, we have $r_{1,1}\le r_{i,j}\le r_{M,N}$. For any of the CCTS sampling points, pixel (i, j) is shifted by $(\Delta r_{i,j},\Delta\alpha_{i,j})$ from pixel (1, 1), with $\Delta r_{i,j}=r_{i,j}-r_{1,1}$ and $\Delta\alpha_{i,j}=\alpha_{i,j}-\alpha_{1,1}$. Using Taylor series expansions when $r_{1,1}\gg H, W$, where H and W denote the pixel pitches in the two array directions, we can approximate the coordinates by

$r_{i,j}=r_{1,1}+(i-1)H$,  (26)
and
$\alpha_{i,j}=\alpha_{1,1}-\big[(j-1)W\big]/\big[r_{1,1}+(i-1)H\big]$.  (27)
When $r_{1,1}\gg (i-1)H$, we have
$\alpha_{i,j}\approx\alpha_{1,1}-(j-1)W/r_{1,1}$.  (28)
Using these three equations, we can allocate each camera pixel to the local Polar grids transformed from the X'-Y' coordinates. The shifts of each camera pixel in the radial and angular dimensions of the global Polar coordinates can be approximated by the relative shifts of the camera pixel in the X'-Y' coordinates. For each ring, we rotate the camera through K circles for LR imaging. The array pixels generate M·N sub-ring-shaped sampled LR images with the width $w_k=r_{M,N}-r_{1,1}$ for the kth rotation circle, k = 1,…,K.
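A quick numerical check of these approximations against the exact Eqs. (7) and (8); the values are illustrative.

```python
import numpy as np

# Exact Polar coordinates (Eqs. (7)-(8)) vs. the large-radius
# approximations (Eqs. (26)-(28)) for one camera pixel.
r11, a11, H, W = 1000.0, 0.0, 1.0, 1.0       # r_{1,1} >> H, W
i, j = 5, 9
exact_r = np.hypot(r11 + (i - 1) * H, (j - 1) * W)
exact_a = a11 - np.arctan2((j - 1) * W, r11 + (i - 1) * H)
r_26 = r11 + (i - 1) * H                              # Eq. (26)
a_27 = a11 - (j - 1) * W / (r11 + (i - 1) * H)        # Eq. (27)
a_28 = a11 - (j - 1) * W / r11                        # Eq. (28)
print(exact_r - r_26, exact_a - a_27, exact_a - a_28)  # errors shrink as r11 grows
```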

Funding

National Science Foundation (NSF) (CMMI-1025020).

References and links

1. S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Process. Mag. 20(3), 21–36 (2003).

2. S. Borman and R. L. Stevenson, "Spatial resolution enhancement of low-resolution image sequences: a comprehensive review with directions for future research," Lab. for Image and Signal Analysis, University of Notre Dame, Tech. Rep. (1998).

3. D. Robinson and P. Milanfar, "Fundamental performance limits in image registration," IEEE Trans. Image Process. 13(9), 1185–1199 (2004).

4. P. Vandewalle, S. Süsstrunk, and M. Vetterli, "A frequency domain approach to registration of aliased images with application to super-resolution," EURASIP J. Adv. Signal Process. 2006, 1–15 (2006).

5. W. R. Crum, T. Hartkens, and D. L. G. Hill, "Non-rigid image registration: theory and practice," Br. J. Radiol. 77(2), S140–S153 (2004).

6. P. Vandewalle, S. Süsstrunk, and M. Vetterli, "Superresolution images reconstructed from aliased images," Proc. SPIE 5150, 1398–1405 (2003).

7. L. Lucchese and G. M. Cortelazzo, "A noise-robust frequency domain technique for estimating planar roto-translations," IEEE Trans. Signal Process. 48, 1769–1786 (2002).

8. D. P. Capel, Image Mosaicing and Super-resolution (Springer Science & Business Media, 2004).

9. D. Keren, S. Peleg, and R. Brada, "Image sequence enhancement using sub-pixel displacements," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1988), pp. 742–746.

10. H. Ur and D. Gross, "Improved resolution from subpixel shifted pictures," Graph. Models Image Process. 54(2), 181–186 (1992).

11. L. Poletto and P. Nicolosi, "Enhancing the spatial resolution of a two-dimensional discrete array detector," Opt. Eng. 38(10), 1748–1757 (1999).

12. S. Prasad, "Digital superresolution and the generalized sampling theorem," J. Opt. Soc. Am. A 24(2), 311–325 (2007).

13. A. Papoulis, "Generalized sampling expansion," IEEE Trans. Circuits Syst. 24(11), 652–654 (1977).

14. J. L. Brown, "Multi-channel sampling of low-pass signals," IEEE Trans. Circuits Syst. 28(2), 101–106 (1981).

15. S. Bonchev and K. Alexiev, "Improving super-resolution image reconstruction by in-plane camera rotation," in 13th Conference on Information Fusion (IEEE, 2010), pp. 1–7.

16. X. Du, N. Kojimoto, and B. W. Anthony, "Concentric circular trajectory sampling for super-resolution and image mosaicing," J. Opt. Soc. Am. A 32(2), 293–304 (2015).

17. X. Du and B. Anthony, "Concentric circle scanning system for large-area and high-precision imaging," Opt. Express 23(15), 20014–20029 (2015).

18. A. Zomet, A. Rav-Acha, and S. Peleg, "Robust super-resolution," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2001), pp. 645–650.

19. M. Irani and S. Peleg, "Improving resolution by image registration," Graph. Models Image Process. 53(3), 231–239 (1991).

20. http://www.aig-imaging.com/.

21. E. R. Davies, Machine Vision: Theory, Algorithms, Practicalities (Morgan Kaufmann, 2005).

22. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004).

23. P. Rasti, H. Demirel, and G. Anbarjafari, "Improved iterative back projection for video super-resolution," in 22nd Signal Processing and Communications Applications Conference (IEEE, 2014), pp. 552–555.

24. T. S. Huang and P. M. Narendra, "Image restoration by singular value decomposition," Appl. Opt. 14(9), 2213–2216 (1975).

25. P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering (Society for Industrial and Applied Mathematics, 2006).

26. K. Nasrollahi, S. Escalera, P. Rasti, G. Anbarjafari, X. Baro, H. J. Escalante, and T. B. Moeslund, "Deep learning based super-resolution for improved action recognition," in International Conference on Image Processing Theory, Tools and Applications (IEEE, 2015), pp. 67–72.

27. T. Peleg and M. Elad, "A statistical prediction model based on sparse representations for single image super-resolution," IEEE Trans. Image Process. 23(6), 2569–2582 (2014).
