Optica Publishing Group

Computational reconstruction for three-dimensional imaging via a diffraction grating

Open Access

Abstract

This paper describes a computational reconstruction method for 3-D imaging via a diffraction grating. According to recent research, an optical device consisting of a diffraction grating and a camera produces a parallax image array (PIA) for 3-D imaging in an efficient way. Unlike other PIA capturing systems, such as a lens array with a camera or a camera array, a diffraction grating with a camera has an advantage in terms of optical system complexity. However, since the diffraction grating is transparent, the raw image captured through it has no features from which to detect the boundary of each parallax image. Moreover, the diffraction grating allows parallax images to overlap each other due to its optical properties. These problems prevent computational reconstruction from generating 3-D images. To remedy them, we propose a 3-D computational reconstruction method via a diffraction grating. The proposed method includes analyzing the PIA pickup process and converting a captured raw image into a well-defined PIA. Our analysis introduces a virtual pinhole, so that the diffraction grating works as a camera array. It also defines the effective object area used to segment parallax images and provides a mapping between each segmented parallax image and its corresponding virtual pinhole. The minimum image area is also defined to determine the minimum field of view for our reconstruction. Optical experimental results indicate that the proposed theoretical analysis and computational reconstruction in diffraction grating imaging are feasible for 3-D imaging. To the best of our knowledge, this is the first report on 3-D computational reconstruction via a diffraction grating.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A parallax image array (PIA) for real three-dimensional (3-D) objects is one of the most effective representations for 3-D imaging, processing, and display, since it contains rich perspective information about the objects and lends itself to numerical methods. The captured PIA is used to display a real 3-D space through a lens array and can also be converted into a series of depth-sliced images by a computational reconstruction process. Thus, acquiring a PIA is an essential part of the field of 3-D imaging [1–13].

A camera array or a lens array combined with a camera is typically used to pick up a PIA for a 3-D scene, and capturing a PIA with such optical systems is an active topic in the fields of integral imaging and light field analysis [14–16]. Compared with a lens array and a camera, a camera array provides higher-resolution parallax images. The higher the resolution of the camera used in the PIA pickup, the higher the resolution of the images produced by the computational and optical reconstruction methods. High-resolution parallax images therefore play a crucial role in capabilities such as high depth resolution.

However, a camera array is a high-cost and complicated imaging system compared with alternatives such as a lens array with a camera or a moving camera with an electro-mechanical control system. It also often requires camera calibration and post-processing before and after picking up a PIA. A moving camera may be an alternative to the camera array, but it is limited to stationary 3-D scenes. A lens array is a relatively low-cost optical system; however, because it consists of many single lenses, it suffers from problems such as optical aberrations and barrel distortion. It is therefore worthwhile to develop a low-cost, high-resolution PIA pickup system for the 3-D imaging industry.

Recently, a novel method for acquiring a PIA via a diffraction grating has been proposed [17,18]. The system consists of a diffraction grating plate and a camera, and its structure is similar to that of a system using a lens array with a camera. Its optical structure is thus low-cost and even less complicated than the lens array-based system. Moreover, a thin diffraction grating does not suffer from the problems of a lens array, and the parallax images obtained through a diffraction grating have high resolution. A diffraction grating-based imaging system is therefore one of the promising techniques in 3-D imaging. However, research on 3-D imaging via a diffraction grating has so far focused on acquiring a PIA optically; computational reconstruction using a diffraction grating has not yet been addressed.

In this paper, we propose a computational reconstruction method using a raw image from diffraction grating imaging. Our method consists of an optical analysis that localizes each parallax image in the raw image and a back-projection method that reconstructs a volume from the extracted parallax images. The optical structure of diffraction grating imaging differs from that of a camera array or a lens array with a camera, as shown in Figs. 1(a) through 1(c). A different optical structure requires a new method to extract parallax images. In a camera array, as depicted in Fig. 1(a), extraction is unnecessary because each camera captures a separate image. For a lens array, as depicted in Fig. 1(b), a lens-pattern detection technique is required to extract each parallax image, which has been discussed extensively in the literature [19–22]. As shown in Fig. 1(c), localizing parallax images in diffraction grating imaging is not easy because the grating is transparent and presents no pattern to detect. To overcome this problem, we propose a 3-D computational reconstruction based on a theoretical analysis of the geometrical information of parallax images in diffraction grating imaging. Optical experiments with various 3-D objects are conducted to demonstrate the practical feasibility of the proposed method.

Fig. 1. PIA capturing systems and region of parallax images according to the characteristics of each system. (a) Camera array and its parallax images. (b) Lens array and its parallax images. (c) Diffraction grating and imaging points on the pickup plane.

2. Computational reconstruction in diffraction grating imaging

2.1 Basic structure for computational reconstruction in diffraction grating imaging

The concept of the proposed method is depicted in Fig. 2, including a schematic diagram of the relationship between the pickup and reconstruction processes in diffraction grating imaging. Analysis of the pickup process for 3-D objects using a diffraction grating defines the system parameters, such as the wavelength of the light source, the spatial resolution of the diffraction grating, and the positions of the diffraction grating and the imaging lens. The proposed computational reconstruction method is based on back-projection, in which virtual pinholes are defined to project parallax images into the 3-D reconstruction space. We define an effective object area (EOA) as the maximum area within which 3-D objects can exist without their parallax images overlapping. Detecting the EOA in the acquired images is essential for computational reconstruction in diffraction grating imaging. We also define a minimum image area (MIA) as the minimum field of view for each virtual pinhole. A mapping between the EOA and the MIA is required for computational reconstruction.

Fig. 2. The proposed diffraction grating imaging system consisting of PIA pickup and computational reconstruction processes, where PI stands for parallax image, VP for virtual pinhole, EOA for the effective object area, and MIA for the minimum image area.

2.2 Pickup process for 3-D objects in diffraction grating imaging

A diffraction grating diffracts the light rays emanating from a 3-D object, and observing the object through the grating with an imaging lens is the basic concept of the pickup process in diffraction grating imaging. The diffracted light rays are observed as perspective images of the object: observing the object space through the diffraction grating appears to generate virtual images of the object behind the grating. These virtual images carry parallaxes of the 3-D object and are thus stored as a parallax image array (PIA) by a capturing device such as a camera. Here, the imaging depth and the size of the virtual objects are the same as those of the original 3-D object.

Figure 3 shows the two-dimensional (2-D) geometric relationships among a point object, the parallax images of the point object on the parallax image plane, and the parallax images captured by an imaging lens. Let us assume that the diffraction grating lies in the x-y plane at a distance d from the imaging lens, and that the point object is at (xO,yO,zO). The z-coordinates of all parallax images are the same as zO in diffraction grating imaging. Also, although the proposed analysis can cover second- and higher-order diffraction, only the first order is described here for clarity. As shown in Fig. 3, the −1st and 1st order parallax images, PI(x−1st,yO,zO) and PI(x1st,yO,zO), are generated by the ±1st order diffraction of the point object by the grating. The point object at (xO,yO,zO) serves as the 0th order parallax image. The diffraction angle θ between the 1st order and the 0th order parallax images is θ=sin−1(λ/a), where λ is the wavelength of the light source and a is the aperture width of the diffraction grating. Considering the diffraction order and the location of the point object, the x-coordinate of the parallax images is given by

$${x_{mth}} = {x_O} + |{{z_O} - d} |\tan \left( {{{\sin }^{ - 1}}\left( {\frac{{m\lambda }}{a}} \right)} \right),$$
where m is −1, 0, or 1. The y-coordinate ynth of a parallax image is obtained by replacing xO with yO in Eq. (1). The imaging point I(xmth,ynth,zO) on the pick-up plane is given by
$$I({{x_{mth}},{y_{nth}},{z_O}} )= \left( {\left( {\frac{{{z_I}}}{{{z_O}}}} \right){x_{mth}},\left( {\frac{{{z_I}}}{{{z_O}}}} \right){y_{nth}},{z_I}} \right),$$
where m and n are −1, 0, 1, and zI stands for the z-coordinate of the pick-up plane.
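As a numerical sketch of Eqs. (1) and (2), the following helper functions (our own, not the authors' code) compute the parallax image and imaging point coordinates; the grating parameters assume the 500 lines/mm grating and 532 nm laser from the experimental section, with all lengths in meters.

```python
import math

WAVELENGTH = 532e-9  # light-source wavelength, lambda (m)
APERTURE = 2e-6      # grating aperture width, a (m), from 500 lines/mm

def parallax_x(x_o, z_o, d, m, lam=WAVELENGTH, a=APERTURE):
    """Eq. (1): x-coordinate of the m-th order parallax image."""
    return x_o + abs(z_o - d) * math.tan(math.asin(m * lam / a))

def imaging_point(x_m, y_n, z_o, z_i):
    """Eq. (2): imaging point I on the pickup plane at depth z_I."""
    scale = z_i / z_o
    return (scale * x_m, scale * y_n, z_i)
```

For an on-axis point object at z_O = 400 mm with the grating at d = 300 mm, the ±1st order parallax images fall symmetrically about the optical axis.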

Fig. 3. Geometrical relationship in diffraction grating imaging among a point object, parallax images (PIs), and picked up PIs.

Note that a diffraction grating generates parallax images in a space where the z-coordinate of the parallax images equals that of the object. Moreover, although the ray reaching I(x1st,yO,zO) on the pick-up plane appears to come from PI(x1st,yO,zO), only the rays emanating from the point object are real.

Thus, the viewpoint of the point object corresponding to each parallax image can be explained by the relationship that the virtual rays from the parallax images passing through the optical center of the capturing device coincide with the real light rays from the point object.

Figure 4 shows the geometric relationship between the path of the chief ray coming from the point object and the paths of the virtual rays coming from the parallax images. The virtual rays from the ±1st order parallax images to the optical center of the imaging lens meet the diffraction grating at points given by the position function G(xmth,ynth,zO), as shown in Fig. 4. At the points G(xmth,ynth,zO), the diffraction grating redirects the real rays from the point object toward the optical center of the imaging lens. The points G(xmth,ynth,zO) are given by

$$G({{x_{mth}},{y_{nth}},{z_O}} )= \left( {\left( {\frac{d}{{{z_O}}}} \right){x_{mth}},\,\left( {\frac{d}{{{z_O}}}} \right){y_{nth}},\,d} \right),$$
where m and n are −1, 0, and 1.
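Under the same assumptions as the sketch above (lengths in meters, our own function names), Eq. (3) amounts to scaling the parallax image coordinates onto the grating plane:

```python
def grating_point(x_m, y_n, z_o, d):
    """Eq. (3): point G on the grating plane (z = d) where the virtual
    ray from the (m, n)-th order parallax image meets the grating."""
    scale = d / z_o
    return (scale * x_m, scale * y_n, d)
```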

Fig. 4. Geometrical relationship between virtual rays from PIs and real rays from a point object.

2.3 Virtual pinholes and mapping positions of parallax images

This section introduces the virtual pinholes used for back-projection in diffraction grating imaging, since they are a crucial part of the proposed computational reconstruction. It turns out that the set of virtual pinholes can be regarded as a camera array. Our analysis provides formulas for the positions of the virtual pinholes and the mapping between parallax images and their corresponding virtual pinholes.

As shown in Fig. 4, the ray radiating from the point object at an angle Ø is identified with the ray from the 1st order parallax image located at (x1st,yO,zO). The ray arriving at the imaging point I(x1st,yO,zO) from PI(x1st,yO,zO) through the imaging lens has the same angle Ø, so the imaging point I(x1st,yO,zO) is considered the parallax image viewed at angle Ø from the point object. Now consider another point at depth zI on the pickup plane: it has the same parallax as the image I(x1st,yO,zO) and lies where the line through the point object and the point G(x1st,yO,zO) meets the pickup plane.

Placing a virtual pinhole at VP(x1st,yO,0) on the x-axis, as shown in Fig. 5, makes it possible to acquire a virtual parallax image of the point object that has the same parallax as the image I(x1st,yO,zO). Figure 5 shows the geometric relationships among point objects, virtual pinholes, and picked-up parallax images.

Fig. 5. Geometrical relationship among a point object, virtual pinhole (VP), virtual image (VI) plane, and I(x1st, yO, zO).

Let the optical center of a virtual pinhole, located at a point on the x-axis, be given by the position function VP(xmth,ynth,zO), as depicted in Fig. 5. The point VP(xmth,ynth,zO) is calculated using Eqs. (1) and (3), and is given by

$$VP({{x_{mth}},{y_{nth}},{z_O}} )= \left( {\frac{{({{x_{mth}} - {x_O}} )d}}{{{z_O} - d}},\frac{{({{y_{nth}} - {y_O}} )d}}{{{z_O} - d}},0} \right),$$
where m and n are −1, 0, and 1. Equation (4) shows that the position of each virtual pinhole is unique regardless of the lateral position of the object, as long as the diffraction order of the corresponding parallax image is the same. It also shows that the position of each virtual pinhole moves outward in the x and y directions as the depth of the object increases. In addition, the imaging lens itself can be considered the 0th order virtual pinhole.

Here, we introduce a virtual image plane where the virtual images exist. The virtual images are a relocated version of the images I(xmth,ynth,zO) on the pickup plane, and their position function is given by VI(xmth,ynth,zO)=(x,y,zI), where m and n are −1, 0, and 1. The x-coordinate of a virtual image is given by

$${x_{VImth}} = \left( {\frac{{{x_{mth}}d}}{{{z_O}}} - {x_O}} \right)\frac{{{z_O} - {z_I}}}{{{z_O} - d}} + {x_O}.$$
Replacing xO and xmth by yO and ynth in Eq. (5) yields the y-coordinate of a virtual image. The virtual images VI(xmth,ynth,zO) are used in the proposed computational reconstruction method. These images are shifted versions of the picked-up parallax images I(xmth,ynth,zO) with a shift factor Δxmapping. Using Eqs. (2) and (5), the shift Δxmapping between I(xmth,ynth,zO) and VI(xmth,ynth,zO) is written as
$$\Delta {x_{mapping}} = \left|{\frac{{{z_I} - d}}{{{z_O} - d}}({{x_O} - {x_{mth}}} )} \right|.$$
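The mapping relations of Eqs. (4)–(6) can be sketched numerically as follows; the function and argument names are our own, and all lengths are in meters.

```python
def virtual_pinhole(x_m, y_n, x_o, y_o, z_o, d):
    """Eq. (4): optical center of the virtual pinhole on the z = 0 plane."""
    s = d / (z_o - d)
    return ((x_m - x_o) * s, (y_n - y_o) * s, 0.0)

def virtual_image_x(x_m, x_o, z_o, z_i, d):
    """Eq. (5): x-coordinate of the relocated virtual image."""
    return (x_m * d / z_o - x_o) * (z_o - z_i) / (z_o - d) + x_o

def mapping_shift(x_m, x_o, z_o, z_i, d):
    """Eq. (6): shift between a picked-up parallax image and its
    virtual image on the virtual image plane."""
    return abs((z_i - d) / (z_o - d) * (x_o - x_m))
```

Note that the 0th order (x_m = x_O) maps to the pinhole at the origin with zero shift, matching the observation that the imaging lens acts as the 0th order virtual pinhole.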

2.4 Parallax image segmentation and computational reconstruction

To segment the parallax images, an image area is used to determine the field of view of each virtual pinhole, and a region within that area is defined to locate each parallax image. Unlike a raw image picked up by a lens array or a camera array, the raw image captured through a diffraction grating is hard to segment into individual parallax images because the grating is transparent. Moreover, diffraction grating imaging allows parallax images to overlap each other when the size of an object exceeds a specific limit. This overlapping is governed by the size of the object and the distance between the object and the diffraction grating. An effective object area (EOA), i.e., the maximum size of an object that avoids the overlapping problem, therefore needs to be defined. The center of the EOA is aligned with the optical axis of the imaging lens for convenience.

Figure 6(a) shows the geometrical relationship among effective object areas, parallax image regions, picked-up PI regions, virtual pinholes, and half of the minimum image area (MIA). Defining an EOA yields the region of each parallax image in diffraction grating imaging. In Fig. 6(a), the 1st and −1st order PI regions of the EOA are highlighted in red and green, respectively. The boundary between the picked-up PI regions is determined using Eq. (2); thus, segmentation of the parallax images on the pickup plane is possible. After mapping the picked-up parallax image region onto the virtual image plane using Eq. (6), half of the MIA is obtained as the distance from the optical axis of the corresponding virtual pinhole to the edge of the picked-up PI region.

Fig. 6. (a) Geometrical relationship among EOA, virtual pinholes, and MIA. (b) Illustration of the imaging formation and mapping process using a real image of a circle object.

The size of the EOA is equal to that of the ±1st order PI regions when the point object is placed on the optical axis. The size Δx depends on the point xOC on the optical axis: changing Δx changes the size of the 1st order PI region about the center of the 1st order PI, xOC1st. The maximum Δx, i.e., the EOA, is given by

$$\Delta {x_{\max }} = |{{z_O} - d} |\tan \left( {{{\sin }^{ - 1}}\left( {\frac{\lambda }{a}} \right)} \right),$$
where λ is the wavelength of the light source and a is the aperture width of the diffraction grating. The EOA is proportional to the distance between the object and the diffraction grating. Half of the MIA, Δr, is calculated from Eqs. (4), (5), and (7), and is given by
$$\Delta r = \frac{{{z_I}}}{{{z_O}}}\left( {d\tan \left( {{{\sin }^{ - 1}}\left( {\frac{\lambda }{a}} \right)} \right) + \frac{{\Delta {x_{\max }}}}{2}} \right) - \Delta {x_{\max }}.$$
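Equations (7) and (8) can be checked numerically; with the experimental parameters (λ = 532 nm, a = 2 µm, z_O = 400 mm, d = 300 mm), Eq. (7) gives an EOA of roughly 27.6 mm. A minimal sketch, assuming meters throughout and our own function names:

```python
import math

def eoa_width(z_o, d, lam, a):
    """Eq. (7): maximum object size (EOA) that avoids overlapping PIs."""
    return abs(z_o - d) * math.tan(math.asin(lam / a))

def half_mia(z_o, z_i, d, lam, a):
    """Eq. (8): half of the minimum image area, Delta r."""
    dx_max = eoa_width(z_o, d, lam, a)
    return (z_i / z_o) * (d * math.tan(math.asin(lam / a)) + dx_max / 2) - dx_max
```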
Figure 6(b) illustrates how the parallax images of a circle object in real 3-D space are converted into the parallax image array for our computational reconstruction. The EOA is defined for the circle object on the left of Fig. 6(b). The raw parallax images picked up by the imaging lens are shown in the middle of Fig. 6(b). The proposed mapping based on virtual pinholes and MIAs produces the parallax image array in the virtual plane for the proposed computational reconstruction, as shown on the right of Fig. 6(b).

Figure 7 shows the back-projection of the parallax image array from the virtual image plane onto the reconstruction plane. Each MIA, with its parallax image inside, is back-projected into the 3-D reconstruction space through its virtual pinhole, and the projections overlap on the reconstruction plane. Each parallax image can be located accurately within its MIA using Eq. (6). Note that the MIAs may overlap each other, as highlighted in blue and orange in Fig. 7. The back-projection of all MIAs focuses the object image on the reconstruction plane whose depth equals that of the object image. Therefore, the proposed computational reconstruction in diffraction grating imaging consists of determining the MIAs, locating the captured parallax images in the MIAs, and back-projecting the MIAs.
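The back-projection step can be sketched as a shift-and-sum over the views. This is our own illustrative implementation, not the authors' code: it assumes the raw image has already been segmented into per-order parallax images, shifts each view by its virtual pinhole offset scaled for the reconstruction depth, and averages the results.

```python
import numpy as np

def back_project(pia, pinholes, z_rec, z_i, pixel_pitch):
    """Shift-and-sum sketch of the back-projection in Fig. 7.

    pia      : dict mapping order (m, n) -> 2-D numpy image (MIA content)
    pinholes : dict mapping order (m, n) -> virtual pinhole center (x, y), m
    z_rec    : depth of the reconstruction plane (m)
    z_i      : depth of the virtual image plane (m)
    """
    acc = None
    for order, img in pia.items():
        px, py = pinholes[order]
        # Pixel shift of this view when refocusing at z_rec; the scale
        # factor z_i / z_rec is an assumption of this sketch.
        dx = int(round(px * z_i / z_rec / pixel_pitch))
        dy = int(round(py * z_i / z_rec / pixel_pitch))
        shifted = np.roll(img.astype(float), shift=(dy, dx), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(pia)  # averaging focuses the in-plane object
```

With identical views and zero pinhole offsets, the output equals the input, as expected of an averaging operation.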

Fig. 7. Virtual pinhole (VP) models for computational reconstruction in diffraction grating imaging.

3. Experiments on the computational reconstruction of object images

To demonstrate the practical feasibility of the proposed method and to verify the theoretical analysis above, experiments on the computational reconstruction of 3-D objects were performed. Figure 8(a) shows our experimental setup for the PIA pickup process. In Fig. 8(a), the distance between the diffraction grating and the nearest object is 100 mm, and the object is 400 mm away from the camera, which has a 35.9 × 24 mm CMOS sensor with a pixel pitch of 5.95 µm. The diffraction grating, with a spatial resolution of 500 lines/mm, is 300 mm away from the camera. A pair of diffraction gratings crossed at right angles is used to generate a 2-D PIA. A diode laser with a wavelength of λ = 532 nm illuminates the object. Figure 8(b) shows the object space observed through the diffraction grating under indoor illumination only; Fig. 8(c) shows the object irradiated with the laser under indoor lighting; and Fig. 8(d) shows the object irradiated with the laser with the room lighting off. The PIA pickup and reconstruction experiments use the PIA observed under the illumination condition of Fig. 8(d).
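As a sanity check of this geometry, a 500 lines/mm grating implies an aperture width a = 1/500 mm = 2 µm, from which the first-order diffraction angle for the 532 nm laser follows via θ = sin⁻¹(λ/a):

```python
import math

lines_per_mm = 500
a = 1e-3 / lines_per_mm                   # aperture width: 2e-6 m
lam = 532e-9                              # laser wavelength (m)
theta = math.degrees(math.asin(lam / a))  # first-order diffraction angle
# theta is about 15.4 degrees
```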

Fig. 8. Experimental setup of PIA pickup process and observation images of object space according to change of illumination. (a) Experimental setup. (b) Indoor lighting only. (c) Both indoor and laser lighting. (d) Laser lighting only.

Figure 9 shows the front and perspective views of the objects used in the experiments and the images captured by the camera after the objects were imaged as a PIA by diffraction grating imaging. In the PIA pickup process, 3-D objects are used to test the proposed computational reconstruction. As shown in Fig. 9, the shapes 'circle', 'triangle', and 'square' and the letters '3', 'D', and 'S' are used as test objects. Man and woman models are also used as a 3-D volume.

Fig. 9. Objects used in the process to pick up PIAs and the extracted PIAs by the proposed method via diffraction grating imaging.

The test objects are located at various positions to check the imaging characteristics. In our experiments, the distance between the frontmost objects and the diffraction grating is 100 mm. The distances among the objects and their sizes are shown in Fig. 9. The PIAs in Fig. 9 have a resolution of 2439×2439 pixels and consist of an array of 3×3 parallax images; thus, each parallax image has a size of 813×813 pixels.

Figure 10 shows the 3-D images reconstructed by the proposed computational reconstruction method, with the PIAs in Fig. 9 used as input. In our reconstruction, virtual pinholes are set to back-project their parallax images onto reconstruction planes along the z-axis.

Fig. 10. 3-D objects of (a) a circle, (b) geometric shapes, (c) letters of "3DS" and (d) male and female models, and their 3-D reconstructed images.

Figure 10(a) shows the 3-D images reconstructed from the PIA of the single circle object. The circle object is focused at a depth of 100 mm and blurs quickly away from that depth. Figure 10(b) shows the reconstruction results for the geometric objects: a circle, a triangle, and a square. The spacing between these objects is 5 mm, and each object is focused at its own location even though the objects are very close to each other. Figure 10(c) shows the reconstructed images for the letter objects '3', 'D', and 'S'. As shown in Fig. 9, the letter 'S' was obscured by the letter 'D' when the PIA was captured; nonetheless, the letter 'S' is well focused in its reconstructed image. Figure 10(d) shows the reconstructed images for the 3-D male and female models. A spacing of only 1 mm differentiates the reconstructed images; thus, the depth accuracy of 3-D imaging via diffraction grating imaging is acceptable.

4. Conclusion

In this paper, we have proposed a computational reconstruction method for 3-D object imaging via diffraction grating imaging. We have introduced virtual pinholes and their virtual image plane for back-projection. Computational reconstruction in diffraction grating imaging requires a well-defined parallax image array; however, localizing parallax images was not easy since a diffraction grating is transparent and presents no pattern to detect. To overcome this problem, we have proposed a theoretical analysis of the geometrical information, such as the sizes and locations of parallax images, together with a virtual pinhole model for our 3-D computational reconstruction method specialized for diffraction grating imaging. To the best of our knowledge, the proposed method is the first report of 3-D computational reconstruction using a diffraction grating. We expect our method to serve as a next-generation technology for 3-D imaging applications using a laser source. In future work, depth extraction and biomedical imaging using our method will be considered as practical applications.

Acknowledgment

This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. 2017-0-00515, Development of integraphy content generation technique for N-dimensional barcode application).

References

1. J.-H. Park, J. Kim, Y. Kim, and B. Lee, “Resolution-enhanced three-dimension/two-dimension convertible display based on integral imaging,” Opt. Express 13(6), 1875–1884 (2005). [CrossRef]  

2. R. Martínez-Cuenca, G. Saavedra, A. Pons, B. Javidi, and M. Martínez-Corral, “Facet braiding: a fundamental problem in integral imaging,” Opt. Lett. 32(9), 1078–1080 (2007). [CrossRef]  

3. D.-H. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15(19), 12039–12049 (2007). [CrossRef]  

4. D.-H. Shin, B.-G. Lee, and J.-J. Lee, “Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging,” Opt. Express 16(21), 16294–16304 (2008). [CrossRef]  

5. Y. Piao, D.-H. Shin, and E.-S. Kim, “Robust image encryption by combined use of integral imaging and pixel scrambling techniques,” Opt. Lasers Eng. 47(11), 1273–1281 (2009). [CrossRef]  

6. J.-Y. Jang, H.-S. Lee, S. Cha, and S.-H. Shin, “Viewing angle enhanced integral imaging display by using a high refractive index medium,” Appl. Opt. 50(7), B71–B76 (2011). [CrossRef]  

7. H. Yoo, “Axially moving a lenslet array for high-resolution 3D images in computational integral imaging,” Opt. Express 21(7), 8873–8878 (2013). [CrossRef]  

8. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]  

9. J.-Y. Jang, D. Shin, and E.-S. Kim, “Optical three-dimensional refocusing from elemental images based on a sifting property of the periodic δ-function array in integral-imaging,” Opt. Express 22(2), 1533–1550 (2014). [CrossRef]  

10. J.-Y. Jang and M. Cho, “Orthoscopic real image reconstruction in integral imaging by rotating an elemental image based on the reference point of object space,” Appl. Opt. 54(18), 5877–5881 (2015). [CrossRef]  

11. H. Yoo and J.-Y. Jang, “Intermediate elemental image reconstruction for refocused three-dimensional images in integral imaging by convolution with δ-function sequences,” Opt. Lasers Eng. 97, 93–99 (2017). [CrossRef]  

12. J. Wei, S. Wang, Y. Zhao, and M. Piao, “Synthetic aperture integral imaging using edge depth maps of unstructured monocular video,” Opt. Express 26(26), 34894–34908 (2018). [CrossRef]  

13. J.-Y. Jang, D. Shin, and E.-S. Kim, “Improved 3-D image reconstruction using the convolution property of periodic functions in curved integral-imaging,” Opt. Lasers Eng. 54, 14–20 (2014). [CrossRef]  

14. S. Komatsu, A. Markman, and B. Javidi, “Optical sensing and detection in turbid water using multidimensional integral imaging,” Opt. Lett. 43(14), 3261–3264 (2018). [CrossRef]  

15. K.-C. Kwon, M.-U. Erdenebat, Y.-T. Lim, K.-I. Joo, M.-K. Park, H. Park, J.-R. Jeong, H.-R. Kim, and N. Kim, “Enhancement of the depth-of-field of integral imaging microscope by using switchable bifocal liquid-crystalline polymer micro lens array,” Opt. Express 25(24), 30503–30512 (2017). [CrossRef]  

16. H.-S. Kim, K.-M. Jeong, S.-I. Hong, N.-Y. Jo, and J.-H. Park, “Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display,” Opt. Express 20(21), 23755–23768 (2012). [CrossRef]  

17. J.-I. Ser, J.-Y. Jang, S. Cha, and S.-H. Shin, “Applicability of diffraction grating to parallax image array generation in integral imaging,” Appl. Opt. 49(13), 2429–2433 (2010). [CrossRef]  

18. J.-Y. Jang, J.-I. Ser, and E.-S. Kim, “Wave-optical analysis of parallax-image generation based on multiple diffraction gratings,” Opt. Lett. 38(11), 1835–1837 (2013). [CrossRef]  

19. A. Aggoun, “Pre-processing of integral images for 3-D displays,” J. Disp. Technol. 2(4), 393–400 (2006). [CrossRef]  

20. N. P. Sgouros, S. S. Athineos, M. S. Sangriotis, P. G. Papageorgas, and N. G. Theofanous, “Accurate lattice extraction in integral images,” Opt. Express 14(22), 10403–10409 (2006). [CrossRef]  

21. J.-J. Lee, D.-H. Shin, and B.-G. Lee, “Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction,” Opt. Express 17(20), 18026–18037 (2009). [CrossRef]  

22. K. Hong, J. Hong, J.-H. Jung, J.-H. Park, and B. Lee, “Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging,” Opt. Express 18(11), 12002–12016 (2010). [CrossRef]  
