
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays

Open Access

Abstract

We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into the four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either digitally or optically, using various three-dimensional display techniques including integral imaging, layered displays, and holography.

© 2014 Optical Society of America

1. Introduction

The light ray field, or light field, refers to the spatio-angular distribution of the radiance of the light rays in space [1]. Owing to its increased dimensionality over a conventional two-dimensional (2D) image, which represents only the spatial distribution of the light rays, the light ray field contains three-dimensional (3D) information about the object scene. Exploiting the 3D information embedded in the light ray field, various techniques for light ray field cameras and microscopes have been developed recently [1–10].

In most techniques, the light ray field is captured using a lens array, where every lens in the array captures the angular distribution of the light rays at its principal point. This lens array based method enables direct capture of the light ray field in a single shot. However, it fundamentally trades spatial resolution for angular resolution, so the achieved spatial resolution is much lower than that of the native image sensor. Although several methods have been proposed to enhance the spatial resolution, they usually require computationally heavy inverse problem solving, sometimes with prior knowledge of the object scene [5, 11, 12]. It has also been proposed to capture multiple images with sub-lens-pitch shifts of the lens array in an effort to enhance the spatial resolution [4, 13]. In this method, the angular resolution is usually fixed and the spatial resolution is enhanced by multiple captures. Although this multiple capturing method can enhance the spatial resolution, the achieved spatial resolution is usually still below the native resolution of the image sensor due to practical limits on the number of captures. Besides the spatial resolution, calibrating the lens array and preventing overlap between adjacent lens images are additional problems of lens array based methods [1, 14].

In this paper, we propose a different light ray field capture technique. In the proposed method, a conventional camera at a fixed position captures multiple images of the 3D scene while changing its focal distance, spanning the scene axially as shown in Fig. 1. The captured images are then processed to estimate the light ray field using the back-projection method, which has been used in computed tomography (CT) imaging [15]. Note that the proposed method is distinct from the axially distributed integral imaging sensing technique [16], where the depth of focus of the camera is kept large so that the whole 3D scene is in focus in all images. On the contrary, the proposed technique reduces the depth of focus so that only a specific depth range is in focus in each image. This difference enables the application of the CT back-projection method and the acquisition of depth information not only for off-axis points but for on-axis points as well.

Fig. 1 Conceptual diagram of the proposed method.

As the proposed method does not require any special equipment such as a lens array, it is easy to use. Capturing at a fixed camera location greatly reduces the calibration burden. Finally, the estimated light ray field can have a high spatial resolution, comparable to that of the image sensor itself. Note that the proposed method can be compared to the conventional lens array shifting method in the sense that both require multiple captures [4, 13]. While the angular resolution of the lens array shifting method is usually fixed and the spatial resolution is enhanced by multiple captures, in the proposed method the spatial resolution is maintained high and the angular resolution is enhanced by multiple captures. Therefore the proposed method can be considered a different approach that puts more emphasis on spatial resolution. It is also worth mentioning that manipulating the focal plane of the camera, as in the proposed method, is generally much easier than laterally shifting the lens array in 2D, as in the lens array shifting method. Finally, the spectrum area of the light ray field captured by the proposed method is well matched to that of a Lambertian 3D object, which makes the proposed method more efficient for Lambertian objects.

The estimated light ray field can be used for digital visualization of the 3D scene with different parallaxes or different refocus distances. It can also be visualized optically using various light ray field based 3D display techniques such as integral imaging [17], layered displays [18, 19], and holographic stereograms [8]. In the following sections, we explain the principle and present verification results with optical experiments.

2. Principle

Figure 2 shows the relation between the light ray field and the image captured by a camera. The light ray field L is represented by a four-dimensional (4D) distribution over a reference plane with two spatial variables x, y and two angular variables θx, θy. For clarity, Fig. 2 shows only the 2D x-θx cross section of the 4D light ray field. Note that each point in the x-θx cross section represents an individual light ray.

Fig. 2 Relationship between the captured image and the light ray field.

When the focal plane of the camera coincides with the reference plane of the light ray field, as shown in Fig. 2(a), each pixel of the camera records the integrated intensity of the light rays coming from a specific position in the reference plane. Because those light rays have different directions but share the same position in the reference plane, they lie on a vertical line in the x-θx cross section, as shown in Fig. 2(a). Other pixels of the camera likewise record integrations along the corresponding vertical lines in the x-θx cross section. Therefore, the image captured with the focal plane located at the reference plane is the vertical projection of the light ray field.

When the focal plane of the camera deviates from the reference plane by Δzn, the light rays corresponding to a camera pixel have different positions in the reference plane, as shown in Fig. 2(b). Their position difference in the reference plane is given by the product of Δzn and their angular difference. Therefore, in the light ray field representation, each camera pixel records an integration along the corresponding slanted line, giving a slanted projection of the light ray field:

$$I_{z_r+\Delta z_n}(x_c, y_c) = \iint_{\mathrm{NA}} L\left(x_c m - \Delta z_n \theta_x,\; y_c m - \Delta z_n \theta_y,\; \theta_x, \theta_y\right)\, d\theta_x\, d\theta_y, \tag{1}$$
where $I_{z_r+\Delta z_n}(x_c, y_c)$ is the image captured with the focal plane at $z_r + \Delta z_n$, and m is the magnification between the camera sensor plane and the focal plane. Note that although each pixel in the captured image is a projection of the light ray field along a specific line determined by the focal plane distance, it integrates information from the corresponding range on the surface of the whole 3D object, not only from the slice of the object at the focal plane. By capturing multiple images with different focal planes, we obtain different 2D projections of the 4D light ray field. This is similar to CT, where different 2D projections of a 3D attenuation distribution are captured. In our method, the range of projection angles is not 360° as in CT, but is determined by the maximum deviation of the focal plane from the reference plane. The angular and spatial range of the light ray field captured in each projection image is given by the object-side numerical aperture (NA) and the field of view (FOV) of the camera.
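To make the projection model concrete, the following minimal NumPy sketch evaluates a discrete form of Eq. (1). The pixel-unit coordinate convention (the magnification m absorbed into the grid), nearest-neighbour shifts, and wrap-around borders are simplifications of ours, not the paper's:

```python
import numpy as np

def focal_image(L, dz, thetas):
    """Slanted projection of Eq. (1): simulate one focal image from a
    discrete 4D light field L[x, y, tx, ty]. Coordinates are simplified so
    that dz * theta is already expressed in reference-plane pixels; np.roll
    wraps at the borders, whereas real data would need padding and
    sub-pixel interpolation."""
    Nx, Ny, Ntx, Nty = L.shape
    I = np.zeros((Nx, Ny))
    for i, tx in enumerate(thetas):
        for j, ty in enumerate(thetas):
            # Pixel p of the image samples L at p - dz*theta (Eq. (1)),
            # i.e. the angular slice shifted by +dz*theta.
            sx, sy = int(round(dz * tx)), int(round(dz * ty))
            I += np.roll(L[:, :, i, j], shift=(sx, sy), axis=(0, 1))
    return I / (Ntx * Nty)
```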

The reconstruction of the original 4D light ray field from the captured 2D projections can be done using algorithms developed for CT reconstruction. In this paper, we use the simplest back-projection algorithm [15]. As shown in Fig. 3, each image is back-projected into the light ray field space, and the normalized accumulation of the back-projections of all images is taken as the reconstruction of the light ray field. Therefore, in this paper, the reconstructed light ray field $L_{\mathrm{rec}}(x, y, \theta_x, \theta_y)$ is computed by

$$L_{\mathrm{rec}}(x, y, \theta_x, \theta_y) = \frac{1}{N}\sum_{n} I_{z_r+\Delta z_n}\!\left(\frac{x + \Delta z_n \theta_x}{m},\; \frac{y + \Delta z_n \theta_y}{m}\right), \tag{2}$$
where N is the number of captured images.
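Under the same simplified conventions as the sketch above, the back-projection of Eq. (2) accumulates each focal image into the 4D space with the opposite shift; a minimal sketch:

```python
import numpy as np

def back_project(images, dzs, thetas):
    """Back-projection of Eq. (2): shift each focal image opposite to its
    projection direction, accumulate in the 4D light field space, and
    normalize by the number of images (pixel-unit conventions as in the
    focal_image sketch; names are ours)."""
    Nx, Ny = images[0].shape
    Nt = len(thetas)
    L = np.zeros((Nx, Ny, Nt, Nt))
    for I, dz in zip(images, dzs):
        for i, tx in enumerate(thetas):
            for j, ty in enumerate(thetas):
                # L(x, theta) accumulates I(x + dz*theta), per Eq. (2)
                sx, sy = int(round(-dz * tx)), int(round(-dz * ty))
                L[:, :, i, j] += np.roll(I, shift=(sx, sy), axis=(0, 1))
    return L / len(images)
```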

Fig. 3 Light ray field reconstruction from captured images using back projection.

The spatio-angular resolution of the reconstructed light ray field can be explained in the spatio-angular frequency domain ($f_x$, $f_y$, $f_{\theta x}$, $f_{\theta y}$). It is known that the spatio-angular spectrum of the light ray field corresponding to a 3D object with a Lambertian reflecting surface occupies the area $-\Delta z_{\max} < \phi_x, \phi_y < \Delta z_{\max}$, where $\phi_x = \arctan(f_{\theta x}/f_x) \approx f_{\theta x}/f_x$, $\phi_y = \arctan(f_{\theta y}/f_y) \approx f_{\theta y}/f_y$, and $\Delta z_{\max}$ is the maximum axial deviation of the 3D object from the reference plane of the light ray field, as indicated by the shaded area in Fig. 4 [1, 20]. From the Fourier-slice theorem, the Fourier transform of a captured image $I_{z_r+\Delta z_n}$, which is the slanted projection of the light ray field given by Eq. (1), represents the slice ($f_{\theta x} = \Delta z_n f_x$, $f_{\theta y} = \Delta z_n f_y$) of the spectrum. Therefore, the back-projection of N images described above reconstructs the corresponding N slices in the spectrum area of the Lambertian 3D object. The spatial bandwidths $B_x$ and $B_y$ of a reconstructed slice are given by the spatial bandwidth of the image sensor referred to the focal plane, i.e. $B_x = B_{sx}/m$ and $B_y = B_{sy}/m$, where $B_{sx}$ and $B_{sy}$ are the spatial bandwidths of the image sensor on the sensor plane. Therefore, in the proposed method the spatial resolution of the reconstructed light ray field can reach that of the native image sensor, while the angular resolution is enhanced by the number of captures. Figure 4 also indicates, as a dashed line, the spectrum area that can be captured using the conventional lens array shifting method. With N captures ($\sqrt{N}$ shifts along each lateral direction), the spatial bandwidth is enhanced to $\sqrt{N} B_{x,\text{lens-shifting}}$, where $B_{x,\text{lens-shifting}}$ is the spatial bandwidth of a single capture, which is determined by the lens pitch. Note that the conventional lens array shifting method can capture a rectangular spectrum area, giving it an advantage over the proposed method for non-Lambertian objects. However, with the same number of captures N, the conventional lens array shifting method enhances the spatial bandwidth only by $\sqrt{N}$ along each lateral direction, while the proposed method reconstructs N 2D slices of the 4D light ray field. Therefore, especially for Lambertian objects, the proposed method can be considered more efficient.
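As a small numerical illustration of this slice coverage (the numbers are hypothetical, not the experimental values), one can check how a focal sweep samples the Lambertian wedge of the spectrum:

```python
import numpy as np

# Hypothetical focal sweep: N planes spanning +/- dz_max around the
# reference plane.
dz_max, N = 35.0, 36
dzs = np.linspace(-dz_max, dz_max, N)

# By the Fourier-slice theorem, capture n fills the spectral slice
# f_theta = dz_n * f_x; the Lambertian spectrum satisfies
# |f_theta / f_x| <= dz_max, so this sweep spans the whole wedge.
slopes = dzs  # slope of each reconstructed slice
print("wedge covered:", slopes.min() <= -dz_max and slopes.max() >= dz_max)
print("largest angular gap: %.4f rad" % np.diff(np.arctan(slopes)).max())
```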

Fig. 4 Spectrum area of the light ray field corresponding to a Lambertian 3D object and the slices reconstructed by the proposed method. The spatial bandwidth Bx of an individual slice depends on Δz due to slightly different magnification, but this is ignored in the figure for simplicity.

3. Experiment

3.1 Light ray field capture and reconstruction with digital visualization

In the first experiment, we used a conventional camera (Canon 5D Mark II) with an extension tube to ensure a large object-side NA. Three objects, a black box, a purple origami crane, and black letters on a white box, were placed in the axial range from 20 mm to 90 mm, as shown in Fig. 5(a). Thirty-six images were captured at a fixed position with a 2 mm focal-plane step by manipulating the focus ring of the camera. Note that the manual manipulation of the focus ring used in our experiment may cause focusing errors, leading to a shear of the reconstructed light ray field, which eventually results in an axial distortion of the visualized 3D images. Although this axial distortion is not clearly observed in the experimental results presented below, a method to change the focal plane more precisely should be developed in further work. Figure 5(b) shows three examples of the captured images. The central 2100 × 2100 pixel part of each captured image was cropped and resized to 500 × 500 resolution due to the memory limitation of the processor.

Fig. 5 (a) Experimental setup and (b) examples of the captured images.

The cropped and resized images were then used to reconstruct the light ray field by the back-projection method. In the back-projection process, the angular resolution of the light ray field was set to 11 × 11, giving Nx × Ny × Nθx × Nθy = 500 × 500 × 11 × 11 data points. Figure 6 shows different cross sections of the reconstructed light ray field.

Fig. 6 Reconstructed light ray field.

The reconstructed light ray field can be visualized digitally. Figure 7(a) shows the synthesis of different parallax views of the 3D scene by selecting different slices of the reconstructed light ray field. Figure 7(b) shows the movie of the synthesized parallax views. It confirms that different horizontal and vertical parallax views are successfully synthesized from the light ray field, which was reconstructed from the stack of focal images captured at a fixed camera position.
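In code, view extraction is a single slicing operation; a minimal sketch over the hypothetical L_rec array produced by the back_project sketch in Section 2:

```python
def parallax_views(L_rec):
    """Each parallax view is simply a fixed-angle slice of the reconstructed
    4D light field L_rec of shape (Nx, Ny, Ntx, Nty)."""
    _, _, Ntx, Nty = L_rec.shape
    for i in range(Ntx):
        for j in range(Nty):
            yield (i, j), L_rec[:, :, i, j]  # view from direction (i, j)

# Example: collect the 11 x 11 views as frames for a parallax movie.
# frames = [view for _, view in parallax_views(L_rec)]
```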

Fig. 7 Digital visualization: parallax images. (a) Parallax view synthesis from the light ray field. (b) Movie of synthesized parallax views (Media 1).

Figure 8 shows the synthesis of refocused images at different distances. As shown in Fig. 8(a), by calculating 2D projections of the light ray field with different projection angles, the corresponding refocused images can be synthesized. Figure 8(b) shows the movie of the synthesized images with different refocus distances. The frontal, middle, and rear objects come into focus sequentially, as expected.
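Refocusing is simply the forward projection of Eq. (1) applied to the reconstructed field; a minimal sketch, reusing the hypothetical focal_image function from Section 2:

```python
def refocus(L_rec, dz, thetas):
    """Synthesize an image refocused at deviation dz from the reference
    plane by re-projecting the reconstructed light field along the
    corresponding slanted direction (forward model of Eq. (1))."""
    return focal_image(L_rec, dz, thetas)

# Sweeping dz brings the frontal, middle, and rear objects into focus in
# turn, as in Media 2, e.g.:
# refocus_stack = [refocus(L_rec, dz, thetas) for dz in dz_sweep]
```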

Fig. 8 Digital visualization: refocused images. (a) Refocused image synthesis from the light ray field. (b) Movie of synthesized refocused images (Media 2).

In the experimental results shown in Figs. 6-8, 36 images captured with 2 mm separation were used to reconstruct the light ray field. The minimum number of focal images required for reliable light ray field reconstruction depends on the performance of the reconstruction algorithm. In this paper, the simplest back-projection algorithm is used. With this algorithm, the parallax movies of the light ray fields reconstructed from different numbers of focal images are shown in Fig. 9. It can be seen that 36 images with a 2 mm step (Fig. 9(a)), 9 images with an 8 mm step (Fig. 9(b)), and 6 images with a 12 mm step (Fig. 9(c)) show no noticeable difference in the reconstruction. But when only 3 images are used, as shown in Fig. 9(d), abrupt changes of the parallax are observed. Therefore, under the current experimental conditions, it can be roughly estimated that more than 4 focal images are required for reliable reconstruction; a complete analysis is left for further research.

Fig. 9 Movies of reconstructed parallax using (a) 36 images with 2 mm step (Media 3), (b) 9 images with 8 mm step (Media 4), (c) 6 images with 12 mm step (Media 5), and (d) 3 images with 24 mm step (Media 6).

In the first experiment, a conventional camera with an extension tube was used, but the proposed method can also be applied to other configurations. Figure 10 shows parallax movies of light ray fields captured with different configurations. Figure 10(a) shows the reconstruction when the camera captures focal images without the extension tube. The objects were distributed from 300 mm (purple origami crane) to 400 mm (black letters on a white background), and 6 images were captured while increasing the focal distance from 210 mm with a 60 mm step. Figures 10(b)-10(d) show the reconstruction when the proposed method is applied to a microscope. Eight images for Fig. 10(b), and 10 images for each of Figs. 10(c) and 10(d), were captured with different slide glass distances from the objective. The results shown in Fig. 10 confirm successful reconstruction of the parallax from the captured images.

Fig. 10 Movies of reconstructed parallax with different systems: (a) camera without extension tube (Media 7), (b) microscope (insect object, Media 8), (c) microscope (ant skin, Media 9), (d) microscope (ant eye, Media 10).

3.2 Optical visualization using 3D display techniques

The reconstructed light ray field can also be visualized optically using various 3D display techniques. Different 3D display techniques require different forms of light-modulating data, such as an elemental image array for an integral imaging display and an encoded fringe pattern for a holographic stereogram. The light ray field data reconstructed by the proposed method is universal within its angular range in the sense that it can be used to generate the light-modulating data for different kinds of ray-based 3D display techniques. For demonstration, optical reconstructions using three different 3D display techniques, integral imaging, layered display, and holographic stereogram, are presented in this paper.

Figure 11 shows the reconstruction using an integral imaging display. Figure 11(a) illustrates the synthesis of the elemental images from the light ray field. After locating the reference plane of the captured light ray field at a distance chosen according to the characteristics of the integral imaging display setup, the elemental images are synthesized by finding the corresponding light ray in the light ray field for every pixel of the elemental images. In our experiment, a square lens array with 1 mm × 1 mm lens pitch and 3.3 mm focal length was located in front of a display panel with 125 µm pixel pitch. The gap between the lens array and the display panel was adjusted to be slightly larger than the focal length of the lens array to reduce the visible color moiré pattern. The reference plane of the captured light ray field was located at the display panel plane such that the 3D images are displayed around the display panel plane, extending from the negative to the positive depth region. Note that the reference plane can be located at any distance from the display panel within the depth range where the integral imaging display can present 3D images with acceptable resolution. Figure 11(b) shows the elemental image array synthesized from the captured light ray field of Fig. 6. Each small square region in Fig. 11(b) is the elemental image for the corresponding lens in the array. Figure 11(c) shows the optical reconstruction captured from different directions. From the relative position differences between the background letters, the purple origami crane, and the frontal black box, proper motion parallax is observed.
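For illustration, a heavily simplified sketch of this pixel-to-ray mapping, assuming the light field's spatial grid coincides with the lens grid and its angular grid with the per-lens pixel grid (the actual synthesis interpolates both for the given display geometry; all names are ours):

```python
import numpy as np

def elemental_images(L_rec, ppl):
    """Sketch of elemental image synthesis for an integral imaging display;
    ppl is the number of display pixels per lens along each axis."""
    Nx, Ny, Ntx, Nty = L_rec.shape
    assert Ntx == ppl and Nty == ppl  # simplifying assumption of this sketch
    E = np.zeros((Nx * ppl, Ny * ppl))
    for lx in range(Nx):
        for ly in range(Ny):
            for u in range(ppl):
                for v in range(ppl):
                    # Pixel (u, v) behind lens (lx, ly) reproduces the ray at
                    # spatial index (lx, ly), with the angular index mirrored
                    # through the lens center.
                    E[lx * ppl + u, ly * ppl + v] = \
                        L_rec[lx, ly, ppl - 1 - u, ppl - 1 - v]
    return E
```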

Fig. 11 Optical reconstruction with integral imaging display. (a) Synthesis of elemental images from the light ray field. (b) Synthesized elemental images. (c) Observed images from different directions (Media 11).

Figure 12 shows the reconstruction using a layered display. In a layered display, a stack of transmission-type panels is used for the reconstruction of the light ray field [18, 19]. In the experiment, a custom backlight unit and two twisted nematic liquid crystal panels with 245 µm pixel pitch were used. The front polarizer of the rear panel was removed, and the front panel was rotated by 90° to match the polarization direction. In this setup, the intensity of a reconstructed light ray is determined by the product of the transmittances of the two panels. The image data for the panels was generated from the light ray field data of Fig. 6 using the simultaneous algebraic reconstruction technique [18]. Figure 12(a) shows the generated front and rear panel data. Figure 12(b) shows the optical reconstruction observed from different directions. Horizontal and vertical motion parallax of the origami crane is clearly demonstrated. Note that the different optical reconstruction quality and depth-dependent characteristics seen in Figs. 11(c) and 12(b) come from the different properties of the integral imaging display and the layered display, not from the light ray field data itself.
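The panel data here were generated with the simultaneous algebraic reconstruction technique of [18]; purely as an illustration of the underlying factorization idea (not the authors' algorithm), a 1D alternating least-squares sketch with a hypothetical layer gap g might look like this:

```python
import numpy as np

def two_layer_factorize(L, g, thetas, iters=50, eps=1e-8):
    """Factorize a light field slice L[x, t] ~ rear[x] * front[x + g*theta_t]
    by alternating closed-form least-squares updates, clipping the
    transmittances to [0, 1]. A 1D, whole-pixel-shift illustration only;
    [18] solves the full 4D problem with SART."""
    Nx, Nt = L.shape
    rear, front = np.ones(Nx), np.ones(Nx)
    shifts = [int(round(g * t)) for t in thetas]  # per-angle ray offsets
    for _ in range(iters):
        # Rear update: for each x, least squares over all ray angles.
        num = sum(L[:, j] * np.roll(front, -s) for j, s in enumerate(shifts))
        den = sum(np.roll(front, -s) ** 2 for s in shifts)
        rear = np.clip(num / (den + eps), 0.0, 1.0)
        # Front update: align every ray to the front layer's grid first.
        num = sum(np.roll(L[:, j] * rear, s) for j, s in enumerate(shifts))
        den = sum(np.roll(rear ** 2, s) for s in shifts)
        front = np.clip(num / (den + eps), 0.0, 1.0)
    return rear, front
```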

Fig. 12 Optical reconstruction with layered display. (a) Synthesized images for front and rear panels. (b) Observed images from different directions (Media 12).

Figure 13 shows the reconstruction using a holographic stereogram. For the hologram synthesis, 10 × 5 parallax images were synthesized from the light ray field data and resized to 192 × 192 pixel resolution. The Fourier transform of each parallax image was then stitched to form a full-parallax holographic stereogram with 1920 × 960 resolution in the Fourier geometry [8]. Note that in this patch-based holographic stereogram synthesis, the number of parallax images and their individual resolution, which affect the angular and spatial resolutions of the reconstructed image respectively, have a trade-off relation at a given spatial light modulator (SLM) resolution. We chose 10 × 5 parallax images with 192 × 192 resolution, as they showed a noticeable focusing effect with tolerable spatial resolution in our experiment, as presented below. In this experiment, two different object scenes were used for better contrast in the reconstruction: one with rear "INHA" letters and a frontal origami crane, and the other with the rear "INHA" letters and a frontal USB mark. Figure 13(a) shows the synthesized hologram for the first object scene. The phase of the synthesized hologram was loaded onto a phase-only SLM with 8 µm pixel pitch. The experimental results in Figs. 13(b) and 13(c) clearly demonstrate the accommodation effect, i.e. foreground and background objects are focused at different planes, confirming that the 3D information is reconstructed well.
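A minimal sketch of this patch-based synthesis, assuming a random initial phase per view (a common choice in stereogram synthesis; the exact procedure of [8] may differ) and our own function names:

```python
import numpy as np

def stereogram_phase(views, seed=0):
    """Patch-based stereogram in the Fourier geometry: give each parallax
    view a random phase, Fourier transform it, and tile the patches into
    one hologram. views: nested list, 5 rows x 10 columns of 192 x 192
    float images, mirroring the 10 x 5 views in the text; returns the
    phase pattern for a phase-only SLM."""
    rng = np.random.default_rng(seed)
    rows = []
    for row in views:
        patches = []
        for v in row:
            field = np.sqrt(np.clip(v, 0, None)) \
                * np.exp(2j * np.pi * rng.random(v.shape))
            patches.append(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))
        rows.append(np.hstack(patches))
    H = np.vstack(rows)  # 960 x 1920 complex hologram
    return np.angle(H)
```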

Fig. 13 Optical reconstruction using holography. (a) Synthesized complex field. (b) Optical reconstruction of scene 1 (Media 13). (c) Optical reconstruction of scene 2 (Media 14).

4. Conclusion

In this paper, we propose a novel light ray field capture technique using a focal image stack. Multiple images are captured at a fixed camera position with different focal distances. We show that these images are different 2D projections of the 4D light ray field, and that the original 4D light ray field can be reconstructed by back-projecting the captured images into the 4D light ray field space. The main advantage of the proposed method is that it provides high spatial resolution light ray field data, comparable to the native resolution of the camera. Another advantage is that it requires no special hardware. The proposed method is verified experimentally using a conventional camera, with and without an extension tube, as well as a microscope. Optical reconstruction of the obtained light ray field is also demonstrated experimentally using three 3D display techniques: integral imaging, layered display, and holographic stereogram.

The proposed method also has several limitations. The reconstructed spectrum area of the light ray field is not sufficient for non-Lambertian 3D objects, especially around the low spatial frequency region of the light ray field. Moreover, the simple back-projection method used here reconstructs only discrete slices of the whole spectrum, leaving the area between the slices unknown. We believe these limitations can be overcome by a more sophisticated reconstruction algorithm combined with spatial transmittance coding on the camera lens, which is a topic of further research.

Acknowledgments

This research was partly supported by 'The Cross-Ministry Giga KOREA Project' of The Ministry of Science, ICT and Future Planning, Korea [GK13D0200, Development of Super Multi-View (SMV) Display Providing Real-Time Interaction]. This research was also partly supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2013-061913).

References and links

1. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Stanford Tech. Rep. CTSR 2005–02 (Stanford University, 2005).

2. B. Javidi, S. Yeom, I. Moon, and M. Daneshpanah, "Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events," Opt. Express 14(9), 3806–3829 (2006).

3. M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope using microlens arrays," J. Microsc. 235(2), 144–162 (2009).

4. Y. T. Lim, J. H. Park, K. C. Kwon, and N. Kim, "Resolution-enhanced integral imaging microscopy that uses lens array shifting," Opt. Express 17(21), 19253–19263 (2009).

5. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21(21), 25418–25439 (2013).

6. J. Arai, T. Yamashita, M. Miura, H. Hiura, N. Okaichi, F. Okano, and R. Funatsu, "Integral three-dimensional image capture equipment with closely positioned lens array and image sensor," Opt. Lett. 38(12), 2044–2046 (2013).

7. J. Kim, J.-H. Jung, Y. Jeong, K. Hong, and B. Lee, "Real-time integral imaging system for light field microscopy," Opt. Express 22(9), 10210–10220 (2014).

8. S.-K. Lee, S.-I. Hong, Y.-S. Kim, H.-G. Lim, N.-Y. Jo, and J.-H. Park, "Hologram synthesis of three-dimensional real objects using portable integral imaging camera," Opt. Express 21(20), 23662–23670 (2013).

9. A. Orth and K. Crozier, "Microscopy with microlens arrays: high throughput, high resolution and light-field imaging," Opt. Express 20(12), 13522–13531 (2012).

10. H. Navarro, J. C. Barreiro, G. Saavedra, M. Martínez-Corral, and B. Javidi, "High-resolution far-field integral-imaging camera by double snapshot," Opt. Express 20(2), 890–895 (2012).

11. S. A. Shroff and K. Berkner, "Image formation analysis and high resolution image reconstruction for plenoptic imaging systems," Appl. Opt. 52(10), D22–D31 (2013).

12. T. E. Bishop and P. Favaro, "The light field camera: extended depth of field, aliasing, and super-resolution," IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).

13. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27(5), 324–326 (2002).

14. K. Hong, J. Hong, J.-H. Jung, J.-H. Park, and B. Lee, "Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging," Opt. Express 18(11), 12002–12016 (2010).

15. S. X. Pan and A. C. Kak, "A computational study of reconstruction algorithms for diffraction tomography: Interpolation versus filtered backpropagation," IEEE Trans. Acoust. Speech Signal Process. 31(5), 1262–1275 (1983).

16. R. Schulein, M. DaneshPanah, and B. Javidi, "3D imaging with axially distributed sensing," Opt. Lett. 34(13), 2012–2014 (2009).

17. J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48(34), H77–H94 (2009).

18. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, "Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays," ACM Trans. Graph. 30(4), 1–11 (2011).

19. N.-Y. Jo, H.-G. Lim, S.-K. Lee, Y.-S. Kim, and J.-H. Park, "Depth enhancement of multi-layer light field display using polarization dependent internal reflection," Opt. Express 21(24), 29628–29636 (2013).

20. J.-H. Park and K.-M. Jeong, "Frequency domain depth filtering of integral imaging," Opt. Express 19(19), 18729–18741 (2011).

Supplementary Material (14)

Media 1: MP4 (346 KB)     
Media 2: MP4 (150 KB)     
Media 3: MP4 (346 KB)     
Media 4: MP4 (387 KB)     
Media 5: MP4 (389 KB)     
Media 6: MP4 (335 KB)     
Media 7: MP4 (382 KB)     
Media 8: MP4 (405 KB)     
Media 9: MP4 (512 KB)     
Media 10: MP4 (410 KB)     
Media 11: MP4 (1681 KB)     
Media 12: MP4 (1292 KB)     
Media 13: MP4 (265 KB)     
Media 14: MP4 (1207 KB)     
