
Super-resolution compressive imaging with anamorphic optics

Open Access

Abstract

A new imaging technique that combines compressive sensing and super-resolution is presented. Compressive sensing is accomplished by optically capturing a set of Radon projections. Super-resolution measurements are taken simply by introducing a slanted two-dimensional sensor array into the optical system. The goal of the technique is to overcome the resolution limitation that arises in imaging scenarios where dense sensors with a large number of pixels are not available or cannot be used. With the presented imaging technique, owing to the compressive sensing approach, we were able to reconstruct images with significantly more pixels than the number of measurements, and owing to the super-resolution design we were able to achieve resolution significantly beyond the limit set by the sensor's pixel size.

© 2013 Optical Society of America

1. Introduction

High quality imaging requires imagers with large space-bandwidth products. For digital imagers this implies that the sensor array needs (a) a sufficiently large number of pixels, and (b) to be dense enough that the pixel size is sufficiently small to provide high resolution. There are imaging conditions and scenarios in which these two requirements are difficult to fulfill together. This typically occurs in imaging applications outside the visible regime, where the number of pixels is cost-limited and their size cannot be arbitrarily reduced. Even in the visible spectrum, despite advanced sensor manufacturing capabilities, there are applications that require relatively large pixels, such as imaging under low illumination conditions. In this work we present an image acquisition method that is designed to overcome these practical physical limitations. The presented method and system combine compressive sensing (CS) and super-resolution (SR) techniques, where CS is used to reduce the total number of measurements and SR is used to overcome the pixel size limitation.

The recently introduced CS framework [1, 2] allows sampling and reconstruction of signals from a small number of measurements, significantly lower than that required by the Shannon/Nyquist sampling theory. Due to its appealing advantages, researchers have pursued many applications of CS for optical imaging and spectral imaging (see, for example, [3-9]). The core idea behind CS is that, for certain types of signals, a small number of linear non-adaptive measurements carries sufficient information for a good approximation of the signal. The class of signals to which CS is applicable is that of signals having sparse mathematical representations. Fortunately, most natural images are essentially sparse, or at least highly compressible, in some mathematical representation domain. In CS the acquisition process is accomplished by taking an appropriate set of linear measurements. Contrary to conventional imaging, which seeks to map each object point onto a (preferably) single image point, optical CS systems are designed to project multiple image points with different weights onto each single sensor pixel. A brief overview of CS is provided in subsection 2.1. In this work we use a compressive imaging (CI) method that is based on the one presented in [4, 6]. This method is briefly described in subsection 2.3.

Super-resolution [10-13] refers to a class of techniques that attempt to reconstruct a single high resolution (HR) scene image from a set of observed images at lower resolution (LR). The most common SR approach considers the case in which the LR images are obtained by downsampling subpixel-shifted HR images. A brief introduction to SR is given in subsection 2.2.

In this work we generalize the CI method published in [4, 6] to perform geometrical SR, that is, to obtain resolution below the sensor limit in addition to the optical compression. Following the approach in [4], the acquired measurements are a set of Radon projections (RP), which can be obtained optically using an anamorphic optical setup. Such a system is schematically described in Fig. 1(a). Each pixel in the line array of sensors S integrates the intensities over a line in the object plane, so that the entire sensor captures the RP of the scene. Compressive imaging is implemented in [4] by rotating the cylindrical lens together with the line array of sensors around its optical axis. During the rotational scanning, multiple projections are taken and the image is reconstructed with an appropriate non-linear reconstruction algorithm. In [6] we demonstrated megapixel-size reconstruction with a compression ratio of up to ×20 using this approach. Here we extend this method to perform SR by utilizing the fact that the point spread function (PSF) of the anamorphic setup is a line [6, 14], as shown in Fig. 1(a). A straightforward way to extend the technique to perform geometrical SR is to replace the single line array of sensors in Fig. 1(a) with multiple staggered line arrays of sensors (Fig. 1(b)). This way, sub-pixel shifted data can be captured, which can then be exploited with conventional SR techniques designed for multichannel measurements with subpixel shifts [10]. The same type of measurements can be obtained more practically by placing a regular two-dimensional (2D) array that is slightly rotated around the optical axis with respect to the cylindrical lens coordinates, as shown in Fig. 1(c). As is evident from Fig. 1(c), the PSF crosses the columns of the 2D sensor array at sub-pixel shifts; therefore the columns of the slanted 2D array capture similar information to the staggered sensors. Hence, numerical SR reconstruction algorithms can be applied to the set of columns of the slanted sensor array in order to reconstruct a high resolution RP.

Fig. 1 Capturing subpixel-resolution Radon projections. (a) Compressive imaging method using a line array of sensors as in [6]. (b) Subpixel information measurement using two staggered line arrays of sensors. (c) Subpixel information measurement using a rotated 2D array instead of the staggered line arrays of sensors.

One of the main motivations for our SR compressive imaging (SRCI) system is the simultaneous utilization of the advantages of SR systems and of CS systems. The CS part of the system allows image restoration from only a small total number of samples. The super-resolution part of the system allows resolution improvement beyond the limit set by the detector size. Beyond the CS and SR benefits, the implementation is relatively easy, since a complicated 2D scanning mechanism is avoided. In Sec. 5 we present a detailed list of the advantages of this SRCI system.

The paper is organized as follows. In section 2, we give a brief background on CS and SR, and we provide a description of the optical RP acquisition system. In section 3, we describe the SRCI method. In section 4, we show results obtained with simulated and experimental imaging systems. In section 5, we conclude and list the advantages of the technique.

2. Background

2.1 Compressive sensing

Compressive sensing is a relatively new sampling approach that allows reconstruction of signals from only a few measurements [1, 2]. Compressive sensing relies on the assumption that the acquired signal has some mathematical representation in which it is sparse. In order to avoid an obvious loss of information, the acquisition process must occur in some encoded space. The CS process can be described by:

$$ g = \Phi f + \varepsilon = \Phi \Psi \alpha + \varepsilon = \Omega \alpha + \varepsilon, \tag{1} $$
where $f \in \mathbb{R}^N$ is the signal that we want to measure, $g \in \mathbb{R}^M$ is the captured signal, $\Phi \in \mathbb{R}^{M \times N}$ represents the sampling operator, $\alpha \in \mathbb{R}^N$ is a sparse representation of the signal obtained by $\alpha = \Psi^{-1} f$, where $\Psi \in \mathbb{R}^{N \times N}$ is the inverse of a sparsifying transform operator, and $\varepsilon \in \mathbb{R}^M$ is additive noise. In the CS framework the number of samples $M$ is smaller than $N$; therefore the system sampling matrix $\Phi$ is ill-conditioned and not invertible. Hence, in order to estimate the signal $f$ we must apply appropriate inverse-problem techniques, which take into account the sensing model and the sparsity assumption.
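To make the model of Eq. (1) concrete, the following minimal sketch instantiates it with assumed, illustrative choices: a random Gaussian sampling operator Φ and an orthonormal inverse-DCT synthesis operator Ψ (any of the sparsifying transforms mentioned below could be used instead).

```python
import numpy as np
from scipy.fft import idct

# Minimal sketch of the CS measurement model g = Phi f + eps = Omega alpha + eps.
# Phi (random Gaussian) and Psi (inverse DCT) are illustrative assumptions.
N, M = 256, 64                                   # signal length and number of measurements
rng = np.random.default_rng(0)

alpha = np.zeros(N)                              # sparse representation alpha
alpha[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)

Psi = idct(np.eye(N), norm='ortho', axis=0)      # synthesis operator, f = Psi @ alpha
f = Psi @ alpha                                  # signal in the physical domain

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # sampling operator, M < N
Omega = Phi @ Psi                                # effective sensing matrix
g = Omega @ alpha + 0.01 * rng.standard_normal(M)  # noisy compressive measurements
```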

Compressive imaging (CI) is a natural application of CS in the field of optical imaging. CI has been applied to the acquisition of 2D images, videos, holograms [7] and hyperspectral images [8]. Here we focus on CI of 2D images. In this case f is an image of size $n \times n$ which, in order to comply with Eq. (1), is lexicographically reordered as a column vector of size $N \times 1$, where $N = n^2$. Most natural images have a sparse representation that may be achieved by some sparsifying transform $\Psi^{-1}$ (such as the discrete Fourier transform, discrete cosine transform, Hadamard transform, or discrete wavelet transform).

The reconstruction step is based on searching for the sparsest coefficient vector α. A common way to do this is by solving the following minimization problem:

$$ \alpha_{\mathrm{est}} = \arg\min_{\alpha} \left\{ \tfrac{1}{2} \left\| \Omega \alpha - g \right\|_2^2 + \lambda \left\| \alpha \right\|_1 \right\}, \tag{2} $$
where λ is a regularization parameter and $\|\cdot\|_1$ is the $\ell_1$ norm. Equation (2) solves a least-squares problem with a regularization functional $\|\alpha\|_1$, thus compromising between data mismatch and sparsity.
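A simple way to approach Eq. (2) numerically is iterative soft thresholding. The sketch below is a plain ISTA stand-in (not the TwIST solver [20] actually used in Sec. 4), written for the Ω and g of the sketch above.

```python
import numpy as np

def ista(Omega, g, lam, n_iter=500):
    # Minimal ISTA sketch for Eq. (2): min_a 0.5*||Omega a - g||_2^2 + lam*||a||_1
    L = np.linalg.norm(Omega, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(Omega.shape[1])
    for _ in range(n_iter):
        grad = Omega.T @ (Omega @ a - g)         # gradient of the data-fidelity term
        v = a - grad / L                         # gradient step
        a = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return a

# e.g., alpha_est = ista(Omega, g, lam=0.05), with Omega and g from the sketch above
```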

2.2 Super-resolution

Super-resolution refers to a class of techniques that allow reconstruction of a HR signal from a set of LR signals, typically representing different views of the same scene [9-13, 15-17]. The relation between the HR signal and the LR signals can be described through the following multichannel model:

$$ z_j = T_j g + \varepsilon_j, \qquad j = 1, \ldots, b, \tag{3} $$
where $g \in \mathbb{R}^M$ is the HR signal and $T_j \in \mathbb{R}^{S_j \times M}$ is a matrix that describes the relation between the HR signal and the $j$th LR signal, in general representing geometrical transformations (such as displacements, rotations, warping), downsampling and blurring. The vector $z_j \in \mathbb{R}^{S_j}$ is the measured LR signal, and $b$ is the number of different LR signals. We denote by $S$ the total number of measurements, that is $S = \sum_{j=1}^{b} S_j$, where $S_j$ is the number of measurements in the $j$th channel.

In order to comply with the notation in Eq. (1), we rewrite Eq. (3) in a matrix-vector form:

$$ z = \begin{bmatrix} z_1 \\ \vdots \\ z_b \end{bmatrix} = \begin{bmatrix} T_1 \\ \vdots \\ T_b \end{bmatrix} g + \varepsilon = T g + \varepsilon, \tag{4} $$
where $z \in \mathbb{R}^S$ is the concatenation of all LR images and $T \in \mathbb{R}^{S \times M}$ is the respective total SR system matrix.
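To make Eqs. (3)-(4) concrete, the sketch below stacks b LR channels into a single system z = Tg; the channel operators T_j used here (averaging b consecutive HR samples after a sub-sample offset j, cyclic at the boundary) are an illustrative assumption, not the operators derived later for the SRCI system.

```python
import numpy as np

def shift_downsample_operator(M, b, shift):
    # One LR channel T_j: average b consecutive HR samples starting at offset 'shift'
    # (cyclic at the boundary, for simplicity of the sketch).
    S = M // b
    Tj = np.zeros((S, M))
    for i in range(S):
        Tj[i, (i * b + shift + np.arange(b)) % M] = 1.0 / b
    return Tj

M, b = 24, 4
rng = np.random.default_rng(1)
g_hr = rng.standard_normal(M)                                  # HR signal of Eq. (3)
T = np.vstack([shift_downsample_operator(M, b, j) for j in range(b)])  # stacked T of Eq. (4)
z = T @ g_hr                                                   # concatenated LR measurements
```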

Most common SR acquisition schemes are based on subpixel translation, that is, the set of LR images $\{z_j\}_{j=1}^{b}$ is captured by generating subpixel shifts between the image captured by the camera and the object. This involves inconvenient 2D mechanical scanning. In our work we achieve the same effect by using rotational scanning. Our acquisition system is described in section 3.

2.3 The anamorphic optical setup

Figure 2 details the optical setup shown in Fig. 1. A similar optical setup was used in [4, 6, 14]. The core of the system is a cylindrical lens which, in conjunction with the spherical lens, has optical power in the y' direction. Here we added the sensor coordinate system (x'', y''), which, unlike in [4, 6, 14], is rotated relative to the cylindrical lens coordinates (x', y'). In Fig. 2 the measurement plane is placed at the imaging plane of the spherical lens, and the cylindrical lens can be placed at some other plane, so as to ensure homogeneity of the data in the spreading direction.

Fig. 2 Optical Radon projection system.

In order to collect a sufficient amount of data for imaging we need to acquire multiple RP. For this we must simultaneously rotate the sensor together with the cylindrical lens around the optical axis by some angle $\theta_i$ (see Fig. 1). An alternative way to obtain multiple RP is to rotate only the object. A third option is to place an image-rotating component between the cylindrical lens and the object (see [6], for example). In our experiments (Sec. 4) we used the third option.

If we denote by $N_p$ the number of projections, the measured signal g in Eq. (1) is written as $g = [g_{\theta_1}^T \cdots g_{\theta_i}^T \cdots g_{\theta_{N_p}}^T]^T$, where $g_{\theta_i}$ represents the projection at angle $\theta_i$. The overall system matrix is $\Phi = [\Phi_{\theta_1}^T\; \Phi_{\theta_2}^T \cdots \Phi_{\theta_{N_p}}^T]^T$, where $\Phi_{\theta_i}$ is the matrix describing the $i$th projection.

Every row of the matrix Φ represents a vectorized image that can be treated as a map indicating the locations and weights of the image pixels that contribute to a given detector. More details about the structure of the matrix Φ can be found in [18, 19].
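As an illustration of this structure, the sketch below builds one projection matrix $\Phi_{\theta_i}$ for an n×n image using an assumed nearest-bin discretization (each row sums the pixels whose centres project into the same detector bin); the exact weighting scheme used in [18, 19] may differ.

```python
import numpy as np

def radon_projection_matrix(n, theta_deg, n_bins):
    # Sketch: row k of Phi_theta sums the pixels of an n x n image whose centres
    # fall into detector bin k along the projection direction theta.
    theta = np.deg2rad(theta_deg)
    c = np.arange(n) - (n - 1) / 2.0                 # pixel-centre coordinates
    X, Y = np.meshgrid(c, c)
    t = X * np.cos(theta) + Y * np.sin(theta)        # projection coordinate of each pixel
    bins = np.round((t - t.min()) / (t.max() - t.min()) * (n_bins - 1)).astype(int)
    Phi_theta = np.zeros((n_bins, n * n))
    Phi_theta[bins.ravel(), np.arange(n * n)] = 1.0  # each pixel feeds one detector
    return Phi_theta

# Overall sensing matrix: stack Np projection matrices, Phi = [Phi_theta1; ...; Phi_thetaNp]
n, Np = 32, 10
angles = np.linspace(0.0, 180.0, Np, endpoint=False)
Phi = np.vstack([radon_projection_matrix(n, a, n) for a in angles])
```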

3. Super-resolution system based on compressive imaging with optical Radon projections

3.1 Optical system description

As explained in Sec. 1, the main difference between the optical implementation here and that of the previous works [4, 6] is the replacement of the line array of sensors S with a 2D array of sensors (e.g., CCD/CMOS). Replacing the 1D sensor of Fig. 1(a) with the 2D sensor of Fig. 1(c) makes it possible to take advantage of SR techniques, by virtue of the fact that the PSF is captured with subpixel shifts (see Fig. 1(c)).

The relation between the 2D LR grid representing the data measured by the CCD and the 1D HR grid of the reconstructed projection $g_{\theta_i}$ is described in Fig. 3. We shall use the following definitions to describe the system and its model. The parameter $n$ denotes the number of pixels in a single column of the CCD, $b$ is the number of CCD columns that we want to use (it can be all of the CCD columns or a specific subset of them), and $\beta$ is the rotation angle between the CCD coordinates (x'', y'') and the cylindrical lens coordinates (x', y'). We denote by $z_{j,\theta_i}$ the $j$th column of the CCD taken at projection angle $\theta_i$.

Fig. 3 Relation between the HR projection grid and the LR CCD grid. (a) The LR (CCD) grid is aligned with the HR (virtual) grid ($\beta = 0°$); note that the LR projections captured by the CCD columns contain the same information. (b) After slanting the CCD by $\beta = 14°$, its columns $\{z_{j,\theta_i}\}_{j=1}^{b}$ contain subpixel-shifted data.

Our aim is to reconstruct a HR 1D projection $g_{\theta_i}$ (measuring the RP at angle $\theta_i$) from b columns of a LR projection. Let us start with Fig. 3(a), where we show the PSF at a particular RP angle, as it is captured by the camera's LR grid. On this image we superimpose a virtual HR 1D grid that defines the reconstructed signal's resolution. In Fig. 3, for example, the HR grid density is 4 times larger than that of the LR sensor array. In the case that β = 0 (Fig. 3(a)), the PSF lies along the cylindrical lens axis x', which coincides with the direction of the CCD horizontal axis x''. In this case each of the detector columns $\{z_{j,\theta_i}\}_{j=1}^{b}$ contains the same data as its neighbouring column, as shown in the right-hand part of Fig. 3(a). In Fig. 3(b) we illustrate the case where the cylindrical lens axis (x') is rotated counterclockwise with respect to the axis x'' by an angle β. Since the PSF (in the direction of the axis x') is no longer aligned with the sensor's x'' axis, each column captures slightly shifted data, as seen in the columns $[z_{1,\theta_i}\; z_{2,\theta_i}\; \cdots\; z_{b,\theta_i}]$ depicted in the right-hand part of Fig. 3(b).

The optimal alignment between the HR and LR grids is the one in which the HR information is uniformly spread along the LR columns. This is achieved if the angle β is chosen such that the PSF undergoes exactly one vertical LR pixel shift while crossing the CCD sensor horizontally, as shown in Fig. 3(b). For square LR pixels, the angle β fulfilling this requirement is:

$$ \beta = \tan^{-1}\!\left(\frac{1}{b}\right). \tag{5} $$
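As a worked example with the paper's own settings (b = 4 columns used for SR), Eq. (5) gives

$$ \beta = \tan^{-1}\!\left(\tfrac{1}{4}\right) \approx 14^{\circ}, $$

which is the slant angle used in Fig. 3(b) and in the experiment of Sec. 4.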

3.2 Mathematical model

The overall acquisition with our SRCI system is obtained by combining the geometrical transformation and downsampling process, described by the operator T in Eq. (4), with the CS acquisition model of Eq. (1):

$$ z = T g = T\left( \Omega \alpha + \varepsilon \right) = H \alpha + \tilde{\varepsilon}, \tag{6} $$
where $z \in \mathbb{R}^S$ is the concatenation of all LR images, $H = T\Omega$, and $\tilde{\varepsilon} = T\varepsilon$. We assume that our goal is resolution improvement by a factor b; therefore β is chosen according to Eq. (5).

The matrix T in Eq. (6) is given by:

$$ T = I_{\mathrm{beams}} \otimes T_{\theta}, \tag{7} $$
where $I_{\mathrm{beams}}$ denotes the identity matrix of size $N_p \times N_p$, the sign $\otimes$ denotes the Kronecker product (see Appendix A for the explicit expression), and $T_\theta$ is the operator which relates one HR projection $g_{\theta_i}$ to $[z_{1,\theta_i}^T \cdots z_{b,\theta_i}^T]^T$ (see Fig. 4). The structures of $T$ and $T_\theta$ are depicted in Figs. 5(a) and 5(b), respectively. The matrix $T_\theta$ is given by $T_\theta = [\bar{T}_1^T \cdots \bar{T}_j^T \cdots \bar{T}_b^T]^T$, where $\bar{T}_j$ is the operator which relates the high resolution projection $g_{\theta_i}$ to the $j$th column of the CCD pixels $z_{j,\theta_i}$. The explicit expression for the matrix $\bar{T}_j$ is derived in Appendix A.
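The block-diagonal structure of Eq. (7) can be sketched directly with a Kronecker product; in the snippet below $T_\theta$ is a random placeholder of the correct size (its actual entries are constructed in Appendix A).

```python
import numpy as np

Np = 5                       # number of projection angles (as in Fig. 5)
n, b = 6, 4                  # LR pixels per column and number of columns used
n_tilde = b * (n + 1)        # HR samples per projection (Appendix A)

T_theta = np.random.default_rng(2).random((b * n, n_tilde))  # placeholder for T_theta
T = np.kron(np.eye(Np), T_theta)   # Eq. (7): block diagonal, one T_theta per angle
print(T.shape)                     # (120, 140) = (Np*b*n, Np*n_tilde)
```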

Fig. 4 Operator $T_\theta$ relating the 1D HR grid to the set of columns $\{z_{j,\theta_i}\}_{j=1}^{b}$, illustrated for the case where the number of columns is b = 4 and the number of LR sensors in each column is n = 6. The relative angle β is in accordance with Eq. (5).

Fig. 5 The SR transformation matrices in the case where b = 4, n = 6 and β is in accordance with Eq. (5). (a) Overall SR system matrix $T$. (b) One-projection dimension reduction matrix $T_\theta$. (c) Operator $\bar{T}_j$ relating the high resolution projection $g_{\theta_i}$ with the $j$th column of CCD pixels $z_{j,\theta_i}$, where j = 1.

Figure 4 depicts, in more detail than Fig. 3(b), the relation between a HR projection $g_{\theta_i}$ and $[z_{1,\theta_i}^T \cdots z_{b,\theta_i}^T]^T$ for any projection angle $\theta_i$ and $\beta = 14°$ according to Eq. (5). We will find this figure particularly useful for the derivation of T in Appendix A.

In Figs. 5(a)-5(c) we show the structure of the overall SR matrix $T$, the entries of the one-projection dimension reduction matrix $T_\theta$, and the entries of the matrix $\bar{T}_1$ relating $g_{\theta_i}$ with $z_{1,\theta_i}$, in the case where b = 4, n = 6 and $N_p$ = 5. We can see that all these matrices are sparse.

4. Results

4.1 Simulation results

In order to evaluate our system quantitatively, we first performed a simulated experiment. In the simulation we compare the SRCI system presented here with the CI system presented in [4]. We assume that the SRCI has a pixel array of size 120×4. For the system in [4] we consider two cases: in the first case we assume that a line array of 120×1 sensors is used, and in the second case a line array of 480×1 pixels is used. In the first case it is assumed that the pixel size is the same as with the SRCI, whereas in the second case it is assumed that the pixel size can be reduced by a factor of 4 (in each dimension) to obtain a denser array having the same number of pixels as the SRCI. With both systems we assume that the same number $N_p$ of exposures is taken.

The test image of size 480×480 is shown in Fig. 6(a). It consists of a sequence of gradually increasing squares, where the smallest square has dimensions of 1×1 pixels and the largest square is of size 24×24 pixels. The gap between every two adjacent squares is 8 pixels. The forward process that transforms the HR image into the LR projections consists of the following steps. The first step is the lexicographic vectorization of the image matrix. The second step is the multiplication of f by the Radon matrix $\Phi_{\theta_i}$ to obtain the column vector $g_{\theta_i}$ (Eq. (1)). In the third step we duplicate $g_{\theta_i}$; the resulting matrix is equivalent to the image that would be captured by the two-dimensional array of detectors shown in Fig. 3(a). The fourth step is the counterclockwise rotation of the obtained matrix around its center by the angle β, as in Fig. 3(b). The fifth step is downsampling each dimension by a factor b = 4. The sixth step is the repetition of the previous steps for all projection angles $\theta_i$, $i = 1, \ldots, N_p$. All these steps implement Eq. (6). With this process we simulated LR projections $\{z_{j,\theta_i}\}_{j=1}^{b}$ with b = 4 for $N_p = 90$ RP angles. In our simulation, each LR projection has a size n = 120.
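A minimal sketch of this six-step forward process is given below; the implementation details (bilinear rotation, block-mean downsampling, and keeping the b central LR columns) are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_srci(f_img, Phi_theta_list, b=4, beta_deg=14.0):
    # Sketch of the simulated forward process of Sec. 4.1:
    # vectorize -> Radon project -> duplicate -> rotate by beta -> downsample by b.
    f = f_img.ravel()                                   # step 1: lexicographic vectorization
    lr_projections = []
    for Phi_theta in Phi_theta_list:                    # step 6: repeat for all angles
        g_theta = Phi_theta @ f                         # step 2: one HR Radon projection
        N = g_theta.size                                # assumed divisible by b
        img2d = np.tile(g_theta[:, None], (1, N))       # step 3: duplicate the projection
        img2d = rotate(img2d, beta_deg, reshape=False, order=1)     # step 4: rotate by beta
        lr = img2d.reshape(N // b, b, N // b, b).mean(axis=(1, 3))  # step 5: downsample by b
        mid = lr.shape[1] // 2
        lr_projections.append(lr[:, mid - b // 2: mid - b // 2 + b])  # keep b central columns
    return lr_projections
```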

Fig. 6 (a) Original synthetic image of size 480×480. (b) Restored synthetic image of size 480×480 from 90 projections of size 480×1. (c) Restored synthetic image of size 120×120 from 90 projections of size 120×1. (d) Synthetic image of size 480×480 restored from 90 projections of size 120×4 by SR.

For the image reconstruction we used the Two-Step Iterative Shrinkage/Thresholding (TwIST) algorithm [20], which solves:

$$ \tilde{x} = \arg\min_{x} \left\{ 0.5 \left\| y - K x \right\|_2^2 + \lambda\, c(x) \right\}, \tag{8} $$
where $\tilde{x}$ denotes the coefficients of the signal that we are interested in reconstructing, $y$ is the signal captured by the sensors, $K$ is a sampling operator that describes the system, and $c(x)$ is the regularization function. In our case, applying TwIST with $\tilde{x}$, $K$, $y$ and $c(x)$ set to $\alpha_{\mathrm{est}}$, $H$, $z$ and $\|\alpha\|_1$, respectively, yields the restored SR images. First, we reconstruct the sparse representation of the signal, $\alpha_{\mathrm{est}}$, in the wavelet domain, and then we reconstruct f by applying the inverse wavelet transform. We achieved the best results for λ = 0.1.
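In terms of the sketches given earlier, this mapping amounts to the following few lines (illustrative names only, reusing ista, T, Ω, Ψ and z from the previous sketches; the paper itself uses TwIST [20] with a wavelet sparsifying transform).

```python
# Variable mapping for Eq. (8), assuming the objects defined in the earlier sketches.
H = T @ Omega                        # overall SRCI sensing matrix of Eq. (6)
alpha_est = ista(H, z, lam=0.1)      # sparse (transform-domain) coefficients
f_est = Psi @ alpha_est              # inverse sparsifying transform gives the image
```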

Figure 6(b) shows the image reconstructed from $N_p = 90$ HR projections of size 480×1. This image represents the best result to be expected from SRCI, since in this case we assumed that HR pixels are directly captured. In Fig. 6(c) we show the image restored in the case that LR projections of size 120×1 are captured. This image represents the result to be expected without SR. Note that squares of sizes 1×1 to 3×3 HR pixels are unresolved. This is expected, since the captured LR pixels are 4 times larger than the HR pixels in the original image (Fig. 6(a)). Figure 6(d) represents the image restored by implementing the SRCI method from 90 Radon projections captured with four columns, each having 120 LR pixels.

Comparing Figs. 6(c) and 6(d), we can see a significant improvement in the resolution, clarity and visual quality of the image. Note that, owing to the SR, the squares at the top of the image are clearly resolved despite being smaller (1×1 and 3×3 HR pixels) than the sensor LR pixel (of size 4×4 HR pixels). From a system point of view, the simulation results demonstrate the ability to cover a field of view of 480×480 pixels with a sensor having only 120×4 pixels (i.e., about 500 times fewer sensor pixels than pixels in the image). The reconstructed image, having 230,400 (= 480×480) pixels, was obtained from 43,200 samples (= 120×4×90); hence a compression ratio of ×5.33 is demonstrated. This implies that with the SRCI system about ×5.33 less data needs to be stored and transmitted, and a shorter acquisition time is required, compared to any conventional scanning system using the same sensor array.

4.2 Real experiment results

In the real experiment we used the system described in [6] with a CCD sensor slanted by β = 14°. We chose a toy as the object. For the image reconstruction, as in the simulated experiments, we used the TwIST algorithm. In Fig. 7(a) we show a LR image of size 120×120 pixels restored from 90 projections of size 120×1 pixels, by implementation of CS only (as in Fig. 6(c)). In Fig. 7(b) we show an image of size 480×480 pixels restored from 90 SR projections of size 120×4. Comparing Figs. 7(a) and 7(b), we can see a significant improvement of the edges; the forelock, eyes and teeth of the toy are now visible in detail.

Fig. 7 Restoration of images from 90 projections. (a) Image of size 120×120 restored using CI as in [6] from projections of size 120×1. (b) HR image of size 480×480 restored using SRCI projections of size 120×4.

In Fig. 8 we show an image of size 480×480 restored by implementing SRCI with only half of the projections used for the reconstruction of Fig. 7(b). Obviously the reconstruction is poorer than that in Fig. 7(b), but despite the fact that only half the number of measurements were taken, the quality of the restored image is higher than that obtained without SR (Fig. 7(a)).

Fig. 8 Restored image of size 480×480 using only 45 projections of size 120×4.

5. Conclusion

In this work we have presented a new imaging technique that combines CS with SR techniques; CS is used to reduce the total number of measurements and SR is used to overcome the pixel size limitation. With this technique, high resolution images can be reconstructed from a set of LR Radon projections. The presented technique was demonstrated by numerical simulation and by a real experiment implementing the proposed optical system. The numerical results have demonstrated a resolution improvement by a factor of 16 (i.e., a factor of four in each dimension). Hence, owing to the SR approach we were able to reconstruct details four times finer than the sensor's pixel dimension. We have designed an acquisition system that uses approximately 500 times fewer sensor pixels than the number of pixels in the image, and uses a scanning process that requires only about 18.7% of the acquisition time of a conventional push-broom system using the same number of sensors.

The advantages of the method are summarized in the following. (a) Geometrical SR is obtained, that is, the reconstructed image has a resolution beyond the one dictated by the sensor pixel size. (b) SR is obtained with rotational scanning rather than with the 2D raster scanning typically used in conventional SR techniques. Rotational motion is preferable in terms of mechanical design as it is smooth, periodic and one-directional (angular rather than horizontal and vertical). (c) Owing to the CI approach, fewer samples are taken. (d) Compared to conventional imaging systems performing linear scanning, the acquisition time is shorter by the same factor as the compression factor. The SR improvement does not come at the expense of scanning time, compared to the previous CI technique in [6]. (e) Compared to the previous CS technique in [6], the only additional hardware requirement is the replacement of the line array of sensors with a conventional 2D sensor array. Since the number of columns of a commercial sensor array is typically much larger than the desired SR factor, the additional columns can be used for improving the SNR. This can be achieved, for example, by binning multiple rows together, thus generating virtual macropixels with larger area. (f) The method can be readily applied with the golden-angle sampling scheme proposed in [6] to achieve progressive CI, in which the reconstruction quality of the image gradually improves.

We wish to emphasize that, although we have demonstrated our technique with an experiment in the visible spectral regime, its strength is mainly for imaging outside the visible spectrum (UV, IR, THz, etc.). In addition, our technique can also be useful in the visible spectral regime when the sensor pixel size needs to be kept large, e.g., for imaging under photon-starved conditions.

6. Appendix A

Let us write Eq. (7) explicitly:

$$ T = I_{\mathrm{beams}} \otimes T_{\theta} = \begin{bmatrix} T_{\theta} & 0 & \cdots & 0 \\ 0 & T_{\theta} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & T_{\theta} \end{bmatrix}, $$
where $I_{\mathrm{beams}}$ denotes the identity matrix of size $N_p \times N_p$, and $T_\theta$ is the operator which relates one HR projection $g_{\theta_i}$ to the vector $[z_{1,\theta_i}^T \cdots z_{b,\theta_i}^T]^T$.

As mentioned in Sec. 3.2, $T_\theta = [\bar{T}_1^T \cdots \bar{T}_j^T \cdots \bar{T}_b^T]^T$, where $\bar{T}_j$ is a matrix of size $n \times \tilde{n}$, with $\tilde{n} = b(n+1)$, relating a LR projection $z_{j,\theta_i}$ to a HR projection $g_{\theta_i}$. The construction of the matrices $T_\theta$ and $\bar{T}_j$ is better understood with the help of Fig. 4, where we show the downsampling process. As can be seen, the contribution of each HR pixel to one LR sensor is in accordance with their relative overlapping area. It can be seen from Fig. 4 that b−1 HR pixels contribute completely to each LR pixel (the overlapping area has the shape of a parallelogram), while 2 HR pixels contribute only partially. We first define the general dimension reduction matrix $\bar{T} \in \mathbb{R}^{n \times bn}$, which (see Fig. 4) has entries $\bar{t}_{i,k}$ given by:

$$ \bar{t}_{i,k} = \begin{cases} \frac{1}{2} & \text{if } k = b(i-1)+1 \ \text{or} \ k = bi+1 \\ 1 & \text{if } b(i-1)+1 < k < bi+1 \\ 0 & \text{otherwise} \end{cases}, \qquad i = 1, 2, \ldots, n, \quad k = 1, 2, \ldots, bn. $$
The elements $\bar{t}^{\,j}_{i,k}$ of the matrix $\bar{T}_j$ constructing $T_\theta$ are:

$$ \bar{t}^{\,j}_{i,k} = \begin{cases} \bar{t}_{i,\,k-j+1} & \text{if } j-1 < k \le j-1+bn \\ 0 & \text{otherwise} \end{cases}, \qquad i = 1, \ldots, n, \quad k = 1, \ldots, b(n+1), \quad j = 1, \ldots, b. $$
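A direct numerical transcription of the two entry formulas above (assuming NumPy, and keeping the paper's 1-based indices inside the loops) is sketched below; for n = 6 and b = 4 it produces the $T_\theta$ of size 24×28 depicted in Fig. 5(b).

```python
import numpy as np

def tbar(n, b):
    # General dimension-reduction matrix (n x bn): each LR pixel sums b-1 whole
    # HR pixels plus two half HR pixels at its borders.
    T = np.zeros((n, b * n))
    for i in range(1, n + 1):                 # 1-based indices, as in the text
        for k in range(1, b * n + 1):
            if k == b * (i - 1) + 1 or k == b * i + 1:
                T[i - 1, k - 1] = 0.5
            elif b * (i - 1) + 1 < k < b * i + 1:
                T[i - 1, k - 1] = 1.0
    return T

def tbar_j(n, b, j):
    # T_bar_j (n x b(n+1)): the general matrix shifted right by j-1 columns.
    Tj = np.zeros((n, b * (n + 1)))
    Tj[:, j - 1: j - 1 + b * n] = tbar(n, b)
    return Tj

n, b = 6, 4
T_theta = np.vstack([tbar_j(n, b, j) for j in range(1, b + 1)])  # [T_bar_1; ...; T_bar_b]
print(T_theta.shape)   # (24, 28)
```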

Acknowledgment

The authors wish to thank the Israel Science Foundation (grant No. 1039/09) and the Harbour Foundation for supporting this research.

References and links

1. E. J. Candès, “Compressive sampling,” Proc. Int. Congress of Mathematicians 3, 1433–1452, Madrid, Spain (2006).

2. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

3. Y. Rivenson and A. Stern, “An efficient method for multi-dimensional compressive imaging,” in Computational Optical Sensing and Imaging (Optical Society of America, 2009).

4. A. Stern, “Compressed imaging system with linear sensors,” Opt. Lett. 32(21), 3077–3079 (2007).

5. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” in Electronic Imaging 2006 (International Society for Optics and Photonics, 2006).

6. S. Evladov, O. Levi, and A. Stern, “Progressive compressive imaging from Radon projections,” Opt. Express 20(4), 4260–4271 (2012).

7. Y. Rivenson, A. Stern, and B. Javidi, “Overview of compressive sensing techniques applied in holography [Invited],” Appl. Opt. 52(1), A423–A432 (2013).

8. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52(10), D46–D54 (2013).

9. Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution compressive imaging by double phase encoding,” Opt. Express 18(14), 15094–15103 (2010).

10. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003).

11. D. Capel and A. Zisserman, “Computer vision applied to super resolution,” IEEE Signal Process. Mag. 20(3), 75–86 (2003).

12. Z. Zalevsky and D. Mendlovic, Optical Superresolution (Springer, 2004).

13. H. Greenspan, “Super-resolution in medical imaging,” Comput. J. 52(1), 43–63 (2008).

14. Y. Kashter, O. Levi, and A. Stern, “Optical compressive change and motion detection,” Appl. Opt. 51(13), 2491–2496 (2012).

15. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004).

16. P. Vandewalle, S. Süsstrunk, and M. Vetterli, “A frequency domain approach to registration of aliased images with application to super-resolution,” EURASIP J. Adv. Signal Process. 2006, 1–15 (2006).

17. A. Stern, Y. Porat, A. Ben-Dor, and N. S. Kopeika, “Enhanced-resolution image restoration from a sequence of low-frequency vibrated images by use of convex projections,” Appl. Opt. 40(26), 4706–4715 (2001).

18. J. H. Jørgensen, “Knowledge-based tomography algorithms,” Doctoral dissertation (Technical University of Denmark, DTU, DK-2800 Kgs. Lyngby, Denmark, 2009).

19. V. Farber, E. Eduard, Y. Rivenson, and A. Stern, “A study of the coherence parameter of the progressive compressive imager based on Radon transform,” in SPIE Defense, Security, and Sensing (International Society for Optics and Photonics, 2013).

20. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16(12), 2992–3004 (2007).
