Feasibility study for compressive multi-dimensional integral imaging


Abstract

This paper describes a generalized framework for single-exposure acquisition of multi-dimensional scene information using an integral imaging system based on compressive sensing. In the proposed system, a multi-dimensional scene containing multiple types of information, such as 3D coordinates and spectral and polarimetric data, is captured by integral imaging optics. The image sensor uses randomly arranged pixel-wise filtering elements. The original multi-dimensional object is reconstructed using an algorithm with a sparsity constraint. The proposed system is demonstrated with simulations and with proof-of-concept optical experiments based on synthetic aperture integral imaging using multi-dimensional objects containing 3D coordinates and spectral and polarimetric information.

© 2013 Optical Society of America

1. Introduction

Multi-dimensional imaging is challenging because conventional imaging optics project an object onto a two-dimensional detector array. A cooperative design of optics and signal processing, called computational imaging, has been used for multi-dimensional imaging. For example, stereo cameras and integral imaging have been applied to 3D imaging, including depth acquisition [1–8]. These systems use multiple cameras or lenses to observe the object from different perspectives and computationally estimate depth from the parallax between the captured elemental images.

Such imaging systems have been extended to the acquisition of multi-dimensional objects including spectral and polarization information [9–12]. In those systems, the lateral pixel count of the multi-dimensional object is the same as that of the image sensor, as shown in Fig. 1(a). This means that the number of elements in the original multi-dimensional object is larger than the number of pixels on the image sensor. In other words, typical single-exposure multi-dimensional imaging systems are ill-posed. When the computational process is based on a simple rearrangement of the captured pixels, the full-size object cannot be reconstructed. Thus, conventional single-exposure acquisition of multi-dimensional information compromises the space-bandwidth product of the image sensor.

Fig. 1 Multi-dimensional imaging systems. (a) A conventional sensing approach and (b) a CS approach.

A framework called compressive sensing (CS) can solve such ill-posed problems with randomized sampling and reconstruction algorithms employing a sparsity constraint, as shown in Fig. 1(b) [13–15]. Some generalized multi-dimensional imaging systems with CS have been proposed [16–19]. In this paper, we propose a new modality of multi-dimensional integral imaging that alleviates some drawbacks of the previous works.

2. Proposed multi-dimensional imaging system

The proposed multi-dimensional imaging system consists of integral imaging optics and an image sensor with randomly arranged pixel-wise filtering elements, as shown in Fig. 2, where x and y denote the lateral axes and z denotes the longitudinal axis. It is inspired by compressive Fresnel holography [19], where pixel-wise multimodal filtering elements on the image sensor are used for randomized sparse sampling in holographic imaging [20]. The holographic imaging process is replaced by integral imaging to alleviate some stringent requirements of holographic imaging, such as the need for active (coherent) illumination, speckle noise degradation, the difficulty of capturing outdoor scenes with coherent light, and the need for multiple coherent sources for multispectral illumination. Furthermore, speckle noise and the random phase of general objects are difficult to treat with CS in single-exposure imaging because they are not compressible [21]. An advantage of the proposed system over previous multi-dimensional imaging systems is its potential for a compact optical sensor. The previous systems require additional optical elements to be used with the imaging optics [16–18], whereas the filtering elements of the proposed system can be integrated onto the image sensor. In the proposed system, the integral imaging optics, composed of multiple lenses or cameras, projects the multi-dimensional object onto the image sensor with randomly arranged pixel-wise filtering elements. A single lens or camera is called elemental optics in this paper. The object is reconstructed with a CS algorithm employing a sparsity constraint.

Fig. 2 Compressive multi-dimensional integral imaging.

First, the imaging process of a conventional single aperture imaging system without any coding or filtering optics is described. In this paper, lowercase letters denote continuous variables, capital letters denote integer variables, and calligraphic capital letters denote functions. The y-axis is omitted, and ideal pinhole optics and ideal detector sampling are assumed for simplicity. The imaging process is written as

$$\mathcal{G}(u) = \sum_{C} \int \mathcal{F}_C(u, z)\,\mathrm{d}z, \tag{1}$$
where 𝒢 is the captured image, u is the lateral axis on the image sensor, and ℱC is the object in the C-th channel, respectively. The impulse response in this system does not depend on the depth z and the channel C. It is difficult to distinguish between different depths and channels in this case.

In the proposed scheme, integral imaging and pixel-wise filters realize a depth- and channel-variant impulse response, respectively. The parallax between the captured elemental images and the transmittance of the pixel-wise filters depend on the depth and the channel, respectively. The imaging process in Eq. (1) can be modified for the proposed integral imaging system as

$$\mathcal{G}'_K(u) = \sum_{C} \mathcal{Q}_{C,K}(u) \iint \mathcal{D}(u - x)\, \mathcal{F}_C\!\left(x - \mathcal{T}_K(z), z\right) \mathrm{d}x\,\mathrm{d}z, \tag{2}$$
where 𝒢′K is the image captured by the K-th elemental optics, 𝒬C,K is the response of the pixel-wise filters for the C-th channel in the K-th elemental optics, 𝒟 is a low-pass filter or downsampling function caused by the fill factor of the detectors, and 𝒯K is the translation by the parallax in the K-th elemental optics, respectively. The equation describes the following process: first, the multi-dimensional object ℱC is projected with the parallax translation 𝒯K by each of the elemental optics; second, the projected signals are convolved with the detector response 𝒟 of the image sensor; third, the convolved signals are multiplied by the filter responses 𝒬C,K; finally, the resultant signals are integrated on the image sensor as 𝒢′K. The model can be extended to higher-dimensional objects with small modifications. The proposed scheme can be adapted to various types of optical information acquisition. Examples of the applications and their implementations are summarized in Table 1.
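To make the process concrete, the following is a minimal NumPy sketch of the forward model in Eq. (2) for a single elemental optics, under simplifying assumptions we introduce here: a 1D lateral axis, integer (circular) parallax shifts, a box detector response with 100 % fill factor, and function and variable names of our own.

```python
import numpy as np

def forward_elemental(f, shifts_k, q_k, s):
    """Simulate Eq. (2) for one elemental optics (1D lateral axis).

    f        : object, shape (N_C, N_Z, N_X)
    shifts_k : integer parallax shift T_K(z) for each depth, shape (N_Z,)
    q_k      : pixel-wise filter response Q_{C,K}, shape (N_C, M_X)
    s        : downsampling factor (N_X = s * M_X)
    """
    n_c, n_z, n_x = f.shape
    m_x = n_x // s
    g_k = np.zeros(m_x)
    for c in range(n_c):
        for z in range(n_z):
            # project the z-th plane with its parallax translation (circular shift for simplicity)
            plane = np.roll(f[c, z], shifts_k[z])
            # detector response D: box integration over s object pixels (100 % fill factor)
            detected = plane.reshape(m_x, s).sum(axis=1)
            # pixel-wise filtering Q_{C,K} and integration on the sensor
            g_k += q_k[c] * detected
    return g_k
```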

Table 1. Applications of the proposed scheme.

The imaging process of the proposed system can be expressed linearly, as shown in Eq. (2). The process in the K-th elemental optics can also be rewritten with matrix operators. Here, bold lowercase letters denote column vectors and bold capital letters denote matrices. The process is written as

$$\mathbf{g}_K = \mathbf{H}_K \mathbf{f} \tag{3}$$
$$\mathbf{g}_K = \mathbf{Q}_K \mathbf{D} \mathbf{T}_K \mathbf{f}, \tag{4}$$
where gK ∈ ℝ^(MX×1) is the vector of the data captured by the K-th elemental optics, HK ∈ ℝ^(MX×(NX×NZ×NC)) is the system matrix of the K-th elemental optics, and f ∈ ℝ^((NX×NZ×NC)×1) is the vector of the object data, respectively. MX is the number of detectors in a single elemental optics, NX and NZ are the numbers of elements of the object along the lateral and longitudinal axes, and NC is the number of channels, as shown in Fig. 2. ℝ^(a×b) denotes an a × b matrix with real entries. The matrix HK can be decomposed as in Eq. (4), based on Eq. (2). Here, QK ∈ ℝ^(MX×(MX×NC)) is a matrix indicating the filter response in the K-th elemental optics. The matrix QK can be written as
$$\mathbf{Q}_K = \begin{bmatrix} \mathbf{Q}_{1,K} & \mathbf{Q}_{2,K} & \cdots & \mathbf{Q}_{N_C,K} \end{bmatrix}, \tag{5}$$
where QC,K ∈ ℝ^(MX×MX) is a diagonal matrix indicating the filter response of the C-th channel in the K-th elemental optics. D ∈ ℝ^((MX×NC)×(NX×NC)) is a downsampling matrix, whose fill factor is assumed to be 100 % for simplicity, expressed as
$$\mathbf{D} = \begin{bmatrix} \mathbf{1}^{t} & \mathbf{0}^{t} & \cdots & \mathbf{0}^{t} \\ \mathbf{0}^{t} & \mathbf{1}^{t} & \cdots & \mathbf{0}^{t} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}^{t} & \mathbf{0}^{t} & \cdots & \mathbf{1}^{t} \end{bmatrix}, \tag{6}$$
where 1 ∈ ℝ^(S×1) is a vector whose elements are all 1, 0 ∈ ℝ^(S×1) is a vector whose elements are all 0, and the superscript t denotes the transpose, respectively. Here, S is the downsampling factor. TK ∈ ℝ^((NX×NC)×(NX×NZ×NC)) is a matrix indicating the projection with the parallax translation in the K-th elemental optics. This matrix can be written as
$$\mathbf{T}_K = \begin{bmatrix} \mathbf{T}'_K & \mathbf{O} & \cdots & \mathbf{O} \\ \mathbf{O} & \mathbf{T}'_K & \cdots & \mathbf{O} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{O} & \mathbf{O} & \cdots & \mathbf{T}'_K \end{bmatrix}, \tag{7}$$
$$\mathbf{T}'_K = \begin{bmatrix} \mathbf{T}_{1,K} & \mathbf{T}_{2,K} & \cdots & \mathbf{T}_{N_Z,K} \end{bmatrix}, \tag{8}$$
where T′K is the per-channel translation block that concatenates the translations over depth, TZ,K ∈ ℝ^(NX×NX) is a shifted identity matrix indicating the parallax translation at the Z-th depth in the K-th elemental optics, and O ∈ ℝ^(NX×(NX×NZ)) is a zero matrix, respectively. Finally, the imaging process of the entire optics is written as
$$\mathbf{g} = \begin{bmatrix} \mathbf{g}_1 \\ \mathbf{g}_2 \\ \vdots \\ \mathbf{g}_{L_X} \end{bmatrix} = \begin{bmatrix} \mathbf{H}_1 \\ \mathbf{H}_2 \\ \vdots \\ \mathbf{H}_{L_X} \end{bmatrix} \mathbf{f} \tag{9}$$
$$\mathbf{g} = \mathbf{H}\mathbf{f}, \tag{10}$$
where g ∈ ℝ^((MX×LX)×1) is the vector of the data captured by the entire optics and H ∈ ℝ^((MX×LX)×(NX×NZ×NC)) is the system matrix of the entire optics, respectively. Here, LX is the number of elemental optics, as shown in Fig. 2. As mentioned above, (MX × LX) ≪ (NX × NZ × NC) is assumed in this paper, which means that the system is ill-posed. Equation (9) indicates that a column of the entire system matrix H has multiple nonzero elements owing to the multiple elemental imaging processes with the filtering in Eq. (5) and the downsampling in Eq. (6). Randomness can be introduced through the filtering in Eq. (5) and the translation in Eq. (8). As a result, the system matrix H approximately satisfies the properties required by CS theory [22].
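As an illustration of Eqs. (3)–(10), the following sketch assembles the sparse system matrix H for a small 1D geometry with SciPy. The integer parallax shifts, the binary filter masks, the channel-major ordering, and all names are our own assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, eye, hstack, identity, kron, vstack

def system_matrix(n_x, n_z, n_c, s, shifts, masks):
    """Assemble H = [H_1; ...; H_L] with H_K = Q_K D T_K (Eqs. (3)-(10)).

    shifts : shifts[K][Z] is the integer parallax shift of depth Z in elemental optics K
    masks  : masks[K][C] is the binary filter pattern (length M_X) of channel C,
             i.e. the diagonal of Q_{C,K}
    """
    m_x = n_x // s
    # D: box downsampling by a factor s, applied independently to each channel (Eq. (6))
    d = kron(identity(n_c), kron(identity(m_x), np.ones((1, s))))
    h_blocks = []
    for shifts_k, masks_k in zip(shifts, masks):
        # T'_K: concatenation of shifted identities over depth (Eq. (8))
        t_prime = hstack([eye(n_x, n_x, k=int(shifts_k[z])) for z in range(n_z)])
        # T_K: one copy of T'_K per channel on the block diagonal (Eq. (7))
        t_k = kron(identity(n_c), t_prime)
        # Q_K: a row of diagonal filter blocks, one per channel (Eq. (5))
        q_k = hstack([diags(np.asarray(masks_k[c], dtype=float)) for c in range(n_c)])
        h_blocks.append(csr_matrix(q_k @ d @ t_k))   # H_K = Q_K D T_K (Eq. (4))
    return vstack(h_blocks)                          # stack over elemental optics (Eq. (9))
```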

To invert Eq. (10), a CS algorithm called two-step iterative shrinkage/thresholding (TwIST) [23] is used for the multi-dimensional object reconstruction. TwIST solves the following minimization problem:

$$\hat{\mathbf{f}} = \operatorname*{arg\,min}_{\mathbf{f}}\; \|\mathbf{g} - \mathbf{H}\mathbf{f}\|_2^2 + \tau\,\mathcal{R}(\mathbf{f}), \tag{11}$$
where ‖·‖2 is the ℓ2 norm, τ is a regularization parameter, and ℛ is a regularizer. In this paper, the two-dimensional total variation [24], computed on each Z-th plane of each C-th channel, is chosen as the regularizer:
$$\mathcal{R}(\mathbf{f}) = \sum_{X}\sum_{Y}\sum_{Z}\sum_{C} \sqrt{\bigl(f(X{+}1,Y,Z,C) - f(X,Y,Z,C)\bigr)^2 + \bigl(f(X,Y{+}1,Z,C) - f(X,Y,Z,C)\bigr)^2}. \tag{12}$$
In Eq. (12), the elements of the original vector f are rearranged in lexicographic order to express the multiple dimensions, and f(X, Y, Z, C) denotes the (X, Y, Z, C)-th rearranged element of the vector f.
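The paper uses the TwIST algorithm [23] to solve Eq. (11). Purely to illustrate the structure of the problem, the sketch below substitutes a simpler proximal-gradient (ISTA-style) iteration with the same data term and a plane-wise 2D TV proximal step, approximated with scikit-image's denoise_tv_chambolle; the function name, step-size rule, and channel-major layout are our assumptions.

```python
import numpy as np
from scipy.sparse.linalg import svds
from skimage.restoration import denoise_tv_chambolle

def tv_reconstruct(h, g, shape, tau=1e-2, n_iter=200):
    """Approximately minimize ||g - H f||_2^2 + tau * TV(f), cf. Eq. (11).

    h     : sparse system matrix, g : measured vector
    shape : (N_C, N_Z, N_Y, N_X); 2D TV is applied to every (C, Z) plane as in Eq. (12)
    """
    h = h.astype(np.float64)
    smax = svds(h, k=1, return_singular_vectors=False)[0]   # largest singular value of H
    step = 1.0 / (2.0 * smax ** 2)                          # 1 / Lipschitz constant of the data-term gradient
    f = np.zeros(h.shape[1])
    for _ in range(n_iter):
        # gradient step on the data-fidelity term ||g - H f||^2
        f = f + 2.0 * step * (h.T @ (g - h @ f))
        # proximal step: 2D total-variation denoising of each lateral plane
        vol = f.reshape(shape)
        for c in range(shape[0]):
            for z in range(shape[1]):
                vol[c, z] = denoise_tv_chambolle(vol[c, z], weight=tau * step)
        f = vol.reshape(-1)
    return f.reshape(shape)
```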

3. Simulations of sparse samplings

In the proposed system, the signals of each channel are sparsely sampled with an integral imaging sensor, as shown in Fig. 2 and Eq. (2). In this section, regular and irregular sparse samplings are compared by simulations for conventional single aperture imaging and for integral imaging, which has multiple apertures. Figures 3(a)–3(c) show an object image, which is the Shepp-Logan phantom, and the regular and irregular sparse sampling patterns, which correspond to the filter response in Eq. (5), respectively. The object and the two patterns are 150 × 150 pixels each. That is, the size of the object is 150 × 150 × 1 × 1 (= NX × NY × NZ × NC) pixels and the size of the captured image is 150 × 150 (= (MX × LX) × (MY × LY)) pixels, respectively. The regular sparse sampling pattern is divided into blocks of 3 × 3 pixels; the same combination of three pixels in each block is set to 1 (white) and the other pixels are set to 0 (black), as shown in Fig. 3(b). In this case, 33.3 % of the whole image is 1. This ratio is called the sampling ratio in this paper. The irregular sparse sampling pattern is a randomized binary distribution, where 33.3 % of the pixels are 1 and the others are 0, as shown in Fig. 3(c). That is, in both sampling patterns, 33.3 % of the diagonal elements of the filtering sub-matrix QC,K in Eq. (5) are 1 and the others are 0.
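For reference, a minimal NumPy sketch of the two kinds of sampling masks follows. The paper does not specify which three of the nine positions form the regular pattern, so the sketch simply keeps the first three; the function name and random seed are our own choices.

```python
import numpy as np

def sampling_patterns(n=150, block=3, keep=3, seed=0):
    """Binary masks like Figs. 3(b) and 3(c); keep/block**2 (~33.3 %) of the pixels are 1."""
    # regular pattern: the same `keep` positions are set to 1 inside every block x block cell
    cell = np.zeros((block, block), dtype=bool)
    cell.flat[:keep] = True                 # which 3 of the 9 positions is not specified in the paper
    regular = np.tile(cell, (n // block, n // block))
    # irregular pattern: the same number of pixels set to 1, but at random positions
    rng = np.random.default_rng(seed)
    flat = np.zeros(n * n, dtype=bool)
    flat[rng.choice(n * n, size=keep * (n // block) ** 2, replace=False)] = True
    irregular = flat.reshape(n, n)
    return regular, irregular
```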

Fig. 3 Simulations of sparse samplings. (a) An object image, (b) a regular sparse sampling pattern, and (c) an irregular sparse sampling pattern.

The object was projected onto the image sensor plane by either single aperture imaging optics or integral imaging optics. In the single aperture case, there was a single elemental image, that is, 1 × 1 (= LX × LY), with 150 × 150 (= MX × MY) pixels. In the integral imaging case, there were 5 × 5 (= LX × LY) elemental images with 30 × 30 (= MX × MY) pixels each. The signals projected by the two optics were multiplied by the sampling patterns in Figs. 3(b) and 3(c). In the integral imaging case, the sampling patterns were divided into 5 × 5 regions for the multiplication. The resultant signals, which are the captured images with the two sampling patterns in the two cases, are shown in Figs. 4(a)–4(d), respectively. The signal-to-noise ratio (SNR) of the measurements was 40 dB. The reconstruction results with the TwIST algorithm are shown in Fig. 5. Comparing Figs. 5(a)–5(d), it is evident that integral imaging with irregular sparse sampling, whose reconstruction is shown in Fig. 5(d), achieves higher reconstruction fidelity than the other cases; the other cases lose many pixels and/or show degradations in their reconstructions. Integral imaging with irregular sparse sampling thus demonstrates the robustness of the proposed imaging system to under-sampling. The randomness introduced into the sparse sampling, that is, into the filtering in Eq. (5), improves the reconstruction performance of integral imaging. The peak SNR (PSNR) was measured for comparison. The PSNR is calculated as

$$\mathrm{PSNR} = 20 \log_{10} \frac{\mathrm{MAX}}{\sqrt{\mathrm{MSE}}}, \tag{13}$$
where MAX is the maximum pixel intensity of the original data and MSE is the mean squared error between the original and reconstructed data. The PSNRs between the original phantom in Fig. 3(a) and the reconstructions in Figs. 5(a)–5(d) were 20.6 dB, 19.4 dB, 21.7 dB, and 30.4 dB, respectively. These PSNRs also indicate the advantage of integral imaging with irregular sparse sampling.
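A direct NumPy implementation of Eq. (13) might look as follows; the square root on the MSE corresponds to the standard PSNR definition, and the function name is ours.

```python
import numpy as np

def psnr(original, reconstruction):
    """Peak signal-to-noise ratio in dB, following Eq. (13)."""
    original = np.asarray(original, dtype=float)
    reconstruction = np.asarray(reconstruction, dtype=float)
    mse = np.mean((original - reconstruction) ** 2)   # mean squared error
    return 20.0 * np.log10(original.max() / np.sqrt(mse))
```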

Fig. 4 Captured images in the simulations. The total number of captured pixels is the same in all cases. Images by single aperture imaging with (a) regular sparse sampling in Fig. 3(b) and (b) irregular sparse sampling in Fig. 3(c). Images by integral imaging with (c) regular sparse sampling in Fig. 3(b) and (d) irregular sparse sampling in Fig. 3(c).

Fig. 5 Reconstructions in the simulations. Reconstructions of single aperture imaging with (a) regular sparse sampling in Fig. 3(b) and (b) irregular sparse sampling in Fig. 3(c). Reconstructions of integral imaging with (c) regular sparse sampling in Fig. 3(b) and (d) irregular sparse sampling in Fig. 3(c).

The relationships between the sampling ratio and the reconstruction PSNR for single aperture imaging and integral imaging with regular and irregular sparse samplings are plotted in Fig. 6. Integral imaging with irregular sparse sampling achieves higher PSNRs than the other configurations when the projected signals are sampled at low sampling ratios. The lower bound of the sampling ratio for integral imaging with irregular sparse sampling appears to be around 30 %. As shown by the simulations in this section, the proposed integral imaging can reduce the number of sampling points or detectors on the image sensor. This is also useful for multi-channel data acquisition because each channel is then under-sampled, as shown in Fig. 1.

Fig. 6 Relationships between sampling ratios and PSNRs, where SAI is single aperture imaging, II is integral imaging, RSS is regular sparse sampling, and ISS is irregular sparse sampling, respectively.

4. Experiments

In this section, the concept of the proposed system is verified by optical experiments based on synthetic aperture integral imaging; the concept can be applied directly to other types of integral imaging. In the experiments, a color camera was used to capture the elemental images. The focal length of the camera lens is 50 mm, the sensor size is 36 mm × 24 mm, and the pixel count is 1248 × 832, respectively. The camera was scanned along the horizontal and vertical directions by a two-axis translation stage. The number of elemental images captured in the scan was 6 × 6 (= LX × LY).

4.1. Spectral integral imaging

In the first experiment, compressive spectral integral imaging with the proposed system is performed. A sign and a car object were located at 270 mm and 330 mm from the sensor, respectively. The elemental images of the objects were captured by the translated camera with a translation pitch of 5 mm × 5 mm. The entire set of captured elemental images is shown in Fig. 7 and a sample elemental image is shown in Fig. 8, respectively. The captured elemental images were reduced in size to 175 × 79 (= MX × MY) pixels. The reduced elemental images were multiplied by the filter responses 𝒬C,K in Eq. (2) to emulate randomly arranged pixel-wise color filters. Three single band-pass filters (red, green, and blue) were assumed, and one of the three was randomly selected at each detector. In this case, 33.3 % of the diagonal elements of each filtering sub-matrix QC,K in Eq. (5) are 1 and the others are 0. The diagonal elements of the three sub-matrices QC,K in the K-th elemental optics were complementary across the channels. The emulated sampled version of the elemental image in Fig. 8 is shown in Fig. 9.
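The filter emulation step can be sketched as follows: one randomly chosen color filter survives at each detector, so the three channel masks are complementary. The function name and random-number generator are our own assumptions.

```python
import numpy as np

def random_rgb_mosaic(rgb_image, seed=0):
    """Emulate randomly arranged pixel-wise R/G/B filters on one elemental image.

    rgb_image : (H, W, 3) array. Exactly one colour channel survives at each pixel,
    so the three channel masks are complementary and each keeps ~33.3 % of the pixels.
    """
    h, w, _ = rgb_image.shape
    rng = np.random.default_rng(seed)
    choice = rng.integers(0, 3, size=(h, w))                     # which filter sits on each detector
    masks = np.stack([choice == c for c in range(3)], axis=-1)   # diagonals of Q_{C,K}
    return rgb_image * masks, masks
```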

Fig. 7 Entire captured elemental images for spectral integral imaging.

Fig. 8 A sample captured elemental image for spectral integral imaging.

Fig. 9 The sampled elemental image in Fig. 8 for compressive spectral integral imaging.

The reconstructions obtained with a back-projection algorithm [25] using bicubic interpolation and with the TwIST algorithm are shown in Figs. 10(a) and 10(b), respectively. No sparsity constraint was used in the back-projection reconstruction. The size of the reconstructed object was 1044 × 480 × 2 × 3 (= NX × NY × NZ × NC) pixels; thus, the compression ratio was 6.0. The reconstruction planes were set at 270 mm and 330 mm from the sensor, respectively. TwIST removed the defocused object present in the back-projection reconstruction and enhanced the contrast and lateral resolution of the reconstructed object.
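For orientation, a shift-and-average sketch of computational back-projection [25] is given below for a single channel and a single depth plane. The actual pixel shift depends on the camera pitch, focal length, pixel pitch, and chosen depth; here all of that is folded into a single pitch_px parameter and circular shifts are used for brevity, so this is an assumption-laden sketch rather than the authors' implementation (which also uses bicubic interpolation).

```python
import numpy as np

def back_project(elemental, pitch_px):
    """Shift-and-average back-projection of elemental images at one depth plane.

    elemental : (L_Y, L_X, H, W) array of single-channel elemental images
    pitch_px  : parallax (in pixels) between neighbouring elemental images at the
                chosen depth; depends on camera pitch, focal length, and depth.
    """
    l_y, l_x, h, w = elemental.shape
    accum = np.zeros((h, w))
    for ky in range(l_y):
        for kx in range(l_x):
            # shift each elemental image back by its parallax relative to the array centre
            dy = int(round((ky - (l_y - 1) / 2) * pitch_px))
            dx = int(round((kx - (l_x - 1) / 2) * pitch_px))
            accum += np.roll(elemental[ky, kx], shift=(dy, dx), axis=(0, 1))
    return accum / (l_y * l_x)
```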

Fig. 10 Four-dimensional image reconstructions from the compressive spectral integral imaging data. (a) Reconstructed images using back-projection algorithm and (b) reconstructed images using TwIST algorithm. The first plane focuses on the sign and the second plane focuses on the car.

4.2. Spectral and polarimetric integral imaging

In the second experiment, compressive spectral and polarimetric integral imaging is demonstrated. Polarimetric imaging has been used in medical imaging, remote sensing, industrial inspection, etc. [26–28]. The scene includes two plants at 430 mm, a truck at 550 mm, and a sign at 810 mm from the sensor. The scene was captured by the translated camera without and with a polarizer at a single polarization angle [29], with a translation pitch of 10 mm × 10 mm. The entire sets of captured intensity and linearly polarized elemental images are shown in Figs. 11(a) and 11(b), respectively, and sample elemental images of both cases are shown in Figs. 12(a) and 12(b). The elemental images were resized to 247 × 151 (= MX × MY) pixels. The resized images were multiplied by the filter responses 𝒬C,K and integrated to emulate randomly arranged pixel-wise color filters (red, green, and blue) and pixel-wise polarizers. One of the three color filters was randomly selected at each detector, and a polarizer was randomly placed on some of them. A detector with the polarizer captures linearly polarized spectral information, and a detector without it captures intensity spectral information. In this case, 16.7 % of the diagonal elements of each filtering sub-matrix QC,K are 1 and the others are 0. The diagonal elements of the six (three spectral bands times two polarization states) sub-matrices QC,K in the K-th elemental optics were complementary across the channels. The sampled emulated elemental image is shown in Fig. 13.
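Extending the earlier sketch, the six-channel mosaic (three colors × with/without polarizer) can be emulated as below; each detector keeps exactly one of the six channels, so the masks are complementary at roughly 16.7 % per channel. The names and the assumption that the polarized and unpolarized captures are pixel-registered are ours.

```python
import numpy as np

def random_rgb_pol_mosaic(intensity_rgb, polarized_rgb, seed=0):
    """Emulate pixel-wise colour filters combined with randomly placed polarizers.

    intensity_rgb, polarized_rgb : (H, W, 3) images of the same elemental view
    captured without and with the polarizer. Each detector keeps one of six
    channels (3 colours x {unpolarized, polarized}), i.e. ~16.7 % per channel.
    """
    h, w, _ = intensity_rgb.shape
    rng = np.random.default_rng(seed)
    channel = rng.integers(0, 6, size=(h, w))        # which of the six filters sits on each detector
    stack = np.concatenate([intensity_rgb, polarized_rgb], axis=-1)      # (H, W, 6)
    masks = np.stack([channel == c for c in range(6)], axis=-1)          # diagonals of Q_{C,K}
    return (stack * masks).sum(axis=-1), masks       # one measured value per detector
```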

Fig. 11 Entire captured elemental images for spectral and polarimetric integral imaging. (a) Intensity elemental images and (b) linearly polarized elemental images.

Fig. 12 Sample captured elemental images for spectral and polarimetric integral imaging. (a) Intensity image and (b) linearly polarized image.

Fig. 13 The sampled elemental image in Fig. 12 for compressive spectral and polarimetric integral imaging.

The reconstructions with the back-projection and TwIST algorithms are shown in Figs. 14(a) and 14(b), respectively. The size of the reconstructed object was 1000 × 610 × 3 × (3 × 2) (= NX × NY × NZ × NC) pixels; thus, the compression ratio was 8.2. In each figure, the reconstruction planes are 430 mm, 550 mm, and 810 mm from the sensor. The first and second rows in Figs. 14(a) and 14(b) show the intensity images and the linearly polarized images, respectively. The polarimetric imaging was verified by the reflection on the body of the truck on the second plane in both reconstructions. The TwIST algorithm suppressed the defocused objects that appear in the back-projection reconstruction and enhanced the contrast and lateral resolution of the reconstructed object. In this experiment, the scene has a small occlusion of the sign by the plants, as shown in Figs. 11 and 12. This occlusion does not noticeably affect the reconstructions and is negligible in this case; however, the impact may be significant when the occlusion is larger. It can be alleviated with a method proposed for occlusions in compressive Fresnel holography [30].

Fig. 14 Five-dimensional image reconstructions from the compressive spectral and polarimetric integral imaging data. (a) Reconstructed images using back-projection algorithm and (b) reconstructed images using TwIST algorithm. The first plane focuses on the two plants, the second plane focuses on the truck, and the third plane focuses on the sign. The first row in (a) or (b) is intensity images and the second row in (a) or (b) is linearly polarized images.

5. Conclusion

In this paper, we have proposed and demonstrated a multi-dimensional integral imaging system based on compressive sensing. We have described a generalized framework for single-exposure acquisition of multi-dimensional scene information using compressive integral imaging. The system is capable of handling multi-dimensional scenes containing multiple types of information, such as 3D coordinates, spectral and polarimetric data, dynamic range, and high-speed imaging. In the proposed system, a multi-dimensional object is captured with integral imaging optics; the signals projected by the imaging optics are filtered by pixel-wise optical elements on the image sensor, and the resultant signals are integrated. The original object was reconstructed with the TwIST algorithm using total variation as the regularizer. Using both simulations and proof-of-concept optical experiments based on synthetic aperture integral imaging, we have demonstrated acquisition and reconstruction of multi-dimensional scenes including 3D coordinates and spectral and polarimetric information. The demonstrated concept can be applied directly to other integral imaging systems.

The proposed multi-dimensional integral imaging shown in Fig. 2 and Table 1 is realizable with currently available or near-future technologies. For example, pixel-wise color filters and polarizers [31] are commercially produced, and CMOS image sensors use a line-wise shutter known as a rolling shutter. These components could be integrated directly into the system, although some randomness would need to be added to them. Spatial light modulators (SLMs) are also useful for implementing pixel-wise polarizers, neutral density filters, and shutters.

Future work should address the design and construction of the proposed system. The system models introduced in Section 2 assume an ideal imaging process, and the difference between the ideal models and the real physical phenomena degrades the reconstruction performance. These models should be improved by considering a more realistic imaging process that includes defocus, aberrations, etc. Furthermore, bounds on the reconstruction performance and a principled system design (e.g., of the filter pattern) based on such bounds should be studied. As mentioned in Section 2, the proposed system was inspired by compressive Fresnel holography [19, 20]. Conditions on the reconstruction performance of compressive Fresnel holography have been investigated recently [32]; this approach may be applicable to these open issues for the multi-dimensional integral imaging system proposed in this paper.

Acknowledgment

The authors wish to thank the anonymous reviewers for their comments and suggestions.

References and links

1. M. Okutomi and T. Kanade, "A multiple-baseline stereo," IEEE Trans. Pattern Anal. Mach. Intell. 15, 353–363 (1993).
2. G. M. Lippmann, "La photographie integrale," Comptes-Rendus Academie des Sciences 146, 446–451 (1908).
3. C. B. Burckhardt, "Optimum parameters and resolution limitation of integral photography," J. Opt. Soc. Am. 58, 71–74 (1968).
4. L. Yang, M. McCormick, and N. Davies, "Discussion of the optics of a new 3-D imaging systems," Appl. Opt. 27, 4529–4534 (1988).
5. F. Okano, J. Arai, K. Mitani, and M. Okui, "Real-time integral imaging based on extremely high resolution video system," Proc. IEEE 94, 490–501 (2006).
6. M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, "Three-dimensional optical sensing and visualization using integral imaging," Proc. IEEE 99, 556–575 (2011).
7. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, "Three-dimensional information acquisition using a compound imaging system," Optical Review 14, 347–350 (2007).
8. M. DaneshPanah and B. Javidi, "Profilometry and optical slicing by passive three-dimensional imaging," Opt. Lett. 34, 1105–1107 (2009).
9. R. Shogenji, Y. Kitamura, K. Yamada, S. Miyatake, and J. Tanida, "Multispectral imaging using compact compound optics," Opt. Express 12, 1643–1655 (2004).
10. R. J. Plemmons, S. Prasad, S. Matthews, M. Mirotznik, R. Barnard, B. Gray, V. P. Pauca, T. C. Torgersen, J. van der Gracht, and G. Behrmann, "PERIODIC: Integrated computational array imaging technology," in Computational Optical Sensing and Imaging (2007), p. CMA1.
11. B. Javidi, S.-H. Hong, and O. Matoba, "Multidimensional optical sensor and imaging system," Appl. Opt. 45, 2986–2994 (2006).
12. R. Horstmeyer, G. Euliss, R. Athale, and M. Levoy, "Flexible multimodal camera using a light field architecture," in Proc. ICCP09 (2009), pp. 1–8.
13. D. L. Donoho, "Compressed sensing," IEEE Trans. Info. Theory 52, 1289–1306 (2006).
14. R. Baraniuk, "Compressive sensing," IEEE Sig. Processing Mag. 24, 118–121 (2007).
15. E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Sig. Processing Mag. 25, 21–30 (2008).
16. R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, "Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition," Opt. Express 18, 19367–19378 (2010).
17. R. Horisaki and J. Tanida, "Multi-channel data acquisition using multiplexed imaging with spatial encoding," Opt. Express 18, 23041–23053 (2010).
18. R. Horisaki and J. Tanida, "Multidimensional TOMBO imaging and its applications," Proc. SPIE 8165, 816516 (2011).
19. R. Horisaki, J. Tanida, A. Stern, and B. Javidi, "Multidimensional imaging using compressive Fresnel holography," Opt. Lett. 37, 2013–2015 (2012).
20. Y. Rivenson, A. Stern, and B. Javidi, "Compressive Fresnel holography," J. Display Technol. 6, 506–509 (2010).
21. K. Choi, R. Horisaki, J. Hahn, S. Lim, D. L. Marks, T. J. Schulz, and D. J. Brady, "Compressive holography of diffuse objects," Appl. Opt. 49, H1–H10 (2010).
22. E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Info. Theory 52, 489–509 (2006).
23. J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Proc. 16, 2992–3004 (2007).
24. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D 60, 259–268 (1992).
25. S.-H. Hong, J.-S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using computational integral imaging," Opt. Express 12, 483–491 (2004).
26. J. E. Solomon, "Polarization imaging," Appl. Opt. 20, 1537–1544 (1981).
27. S. G. Demos and R. R. Alfano, "Optical polarization imaging," Appl. Opt. 36, 150–155 (1997).
28. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, "Review of passive imaging polarimetry for remote sensing applications," Appl. Opt. 45, 5453–5469 (2006).
29. X. Xiao, B. Javidi, G. Saavedra, M. Eismann, and M. Martinez-Corral, "Three-dimensional polarimetric computational integral imaging," Opt. Express 20, 15481–15488 (2012).
30. Y. Rivenson, A. Rot, S. Balber, A. Stern, and J. Rosen, "Recovery of partially occluded objects by applying compressive Fresnel holography," Opt. Lett. 37, 1757–1759 (2012).
31. T. Sato, T. Araki, Y. Sasaki, T. Tsuru, T. Tadokoro, and S. Kawakami, "Compact ellipsometer employing a static polarimeter module with arrayed polarizer and wave-plate elements," Appl. Opt. 46, 4963–4967 (2007).
32. Y. Rivenson and A. Stern, "Conditions for practicing compressive Fresnel holography," Opt. Lett. 36, 3365–3367 (2011).
