Voxel characteristic estimation of integral imaging display system using self-interference incoherent digital holography

Open Access

Abstract

Three-dimensional (3D) images reconstructed by an integral imaging display are captured as complex holograms using self-interference incoherent digital holography (SIDH) and analyzed for their volumetric image characteristics. Integrated images provide 3D perception not only through binocular disparity but also through their volumetric property, which is expressed in the formation of a volume picture element, or ‘voxel’, and is an important criterion distinguishing integral imaging from multiview 3D displays. Since SIDH can record complex holograms under incoherent lighting conditions, the SIDH camera system is well suited to measuring the voxel formed by incoherent light fields. In this paper, we propose a technique to estimate and analyze voxel characteristics of an integral imaging system, such as depth location and resolution. The captured holograms of the integrated images are numerically reconstructed over depth for the voxel analysis. The depth location of the integrated image is obtained from autofocus algorithms and their focus metric values, which also reveal the modalities of depth resolution. The estimation method of this paper can be applied to the accurate and quantitative analysis of the volumetric characteristics of light field 3D displays.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Recently, the emergence of the metaverse has drawn public interest to three-dimensional (3D) displays for augmented reality (AR) and virtual reality (VR) applications. At the same time, research to improve the usability and performance of the most popular AR/VR display devices, called near-eye displays (NEDs) or head-mounted displays (HMDs), has been actively conducted. A 3D display system for AR/VR devices should have a small, thin, and light form factor, represent more realistic and solid visual information, and resolve the general problems of NEDs such as the vergence-accommodation conflict (VAC) and the limitations of the field of view (FOV) and eyebox [1–3]. Therefore, a flat-panel-type 3D display such as the light field 3D display is a good candidate for AR/VR display systems.

The integral imaging display is one of the representative light field display systems; it expresses different perspective scenes through a micro lens array (MLA) and is a good solution for focus cues [4–6]. However, there is a trade-off between spatial resolution and angular resolution, whose relation has been defined by the characteristic equation [7]. The integrated image formed by the light fields passing through the elemental lenses is displayed on the integration plane, which lies at the cross section of the light fields and may differ from the focal plane of the elemental lens. Therefore, the integrated image has not only viewing perspectives but also volumetric properties.

Volumetric 3D display implementations form a volume picture element, also called a voxel, in the region around the focal plane. Attempts to analyze the properties of the voxel mathematically have been made in previous studies [8–11], and the measurement of the resolution and modulation transfer function (MTF) of aerial images using the knife-edge method has also been reported [12,13]. However, there is no standard method for quantitative analysis of the actual voxel location. Because a conventional intensity-based camera system measures only the intensity of light, most evaluations of the expressed depth of 3D images are carried out by adjusting the focus of the camera.

Holography is a promising display technology that provides perfect 3D perception [14] and a recording strategy that can capture complete 3D information [15]. Unlike the general holographic recording method, which requires a coherent light source such as a laser, self-interference incoherent digital holography (SIDH) or Fresnel incoherent correlation holography (FINCH) can obtain interference patterns under low-coherence or incoherent light sources such as LEDs or natural light [16–18]. Since Rosen et al. first proposed the FINCH system using a spatial light modulator (SLM) in 2007 [18], various studies have followed, including improvements in the architecture, analysis of resolution limitations, and simulation techniques [19–25]. Among these, single-shot phase-shifting FINCH systems have been implemented by various approaches, including an SLM [26], diffractive optical elements (DOEs) [27], and a metalens [28]. We have proposed a SIDH system using a geometric phase (GP) lens, named GP-SIDH [29–31]. The proposed system performs phase modulation and wavefront separation simultaneously through a single GP lens, which is made of liquid crystal alignment layers, and has advantages such as a compact optical path and real-time operation.

In this paper, we propose a voxel estimation technique using GP-SIDH. Our holographic camera system treats the voxel as a summation of point sources, so the voxel represented by the light fields can be captured through self-interference. To analyze the characteristics of the voxel, an autofocus algorithm that evaluates image sharpness is applied to the numerical reconstructions of the captured holograms over depth. To verify the proposal, the depth positions of integrated images were measured, and the modalities of the depth resolution of the integrated image were analyzed and discussed.

2. Voxel estimation using GP-SIDH

2.1 Hologram obtained by GP-SIDH

Figure 1 shows the schematic concept of the proposed method, where the light fields that make up the voxel represented by an integral imaging display are assumed to be a group of point light sources and are captured by the SIDH camera system. The voxel of integral imaging is an incoherent image source with no spatial coherence. In our GP-SIDH camera, the light wave from the voxel passes through the GP lens, which acts as a bifocal lens depending on the polarization state of the incident light. When the wavefront is modulated, the phase shift is performed simultaneously. According to the principle of geometric phase, a change of polarization state traces a path on the Poincaré sphere that determines the relative phase difference, and by changing the relative angle between the two polarizers, the amount of geometric phase difference is controlled. The polarized image sensor captures intensity images at four polarization angles in steps of 45°; the phase-modulated lights are rearranged into light with the same polarization by the micropolarizer on each pixel and self-interfere with each other. The final complex hologram is obtained by recombining the four recorded images without the bias and twin-image noise.


Fig. 1. (a) Schematic diagram of the proposed system. Blue and yellow lines after the GP lens indicate the transmitting and converging rays, respectively; MLA with focal length ${f_{MLA}}$; l, voxel location; ${z_s}$, voxel to GP lens distance; ${f_{gp}}$, focal length of GPL; ${z_h}$, GP lens to polarized image sensor distance. (b) Elemental image on display. (c) Integrated voxel. (d) Complex hologram. (e) Numerical reconstruction.


Integral imaging forms the voxel through the distribution of light rays by the MLA. According to imaging theory, each elemental lens of the MLA produces a focused image of its elemental image at the focal plane, as shown in Fig. 1. In this situation, the aperture limitation of the elemental lens affects the depth of field (DOF) and the working distance of the imaging system. The integration plane is reproduced by superimposing the rays, and when the DOF is large enough, the voxel is formed at the plane where the DOFs of the rays overlap. The light rays therefore appear as if they originate from the integration plane rather than from the focal plane, and the voxel of the integral imaging system can be defined as the region where the light fields of the elemental images overlap.

Equation (1) is a mathematical expression for the integrated voxel shown in Fig. 1. The location of the voxel is determined by the lens maker’s equation, which relates the gap between the display and the MLA to the focal length of the MLA [32]. In terms of expressing 3D information, the voxel of integral imaging can be regarded as a 3D pixel, and since the voxel is correlated with display pixels, its shape can be approximated as a point source.

$$l = \frac{{fg}}{{g - f}}$$
In Eq. (1), l is the location of the formed voxel, f is the focal length of the MLA, and g is the gap between the display and the MLA. Applying the rule of spherical wave propagation, the intensity profile from the voxel to the image sensor is expressed as
$${\textrm{I}_0}({x,\; y} )= {\left|{{C_1}({{r_s}} )Q\left[ {\frac{1}{{{z_s}}}} \right]L\left[ { - \frac{{{{\bar{r}}_s}}}{{{z_s}}}} \right] \cdot \left( {{C_2}Q\left[ {\frac{1}{{{f_{GPL}}}}} \right] + {C_3}Q\left[ {\frac{{ - 1}}{{{f_{GPL}}}}} \right]} \right)\ast Q\left[ {\frac{1}{{{z_h}}}} \right]} \right|^2}$$
where ${\ast} $ denotes propagation by the 2D convolution operator, ${z_s}$ and ${z_h}$ denote the distances from the voxel to the GP lens and from the GP lens to the image sensor, respectively, and ${f_{GPL}}$ denotes the focal length of the GP lens. The functions $L({\bar{s}} )$ and $Q(s )$ are the linear phase function, $L({\bar{s}} )= \exp [{i2\pi {\lambda^{ - 1}}({{s_x}x + {s_y}y} )} ]$, and the quadratic phase function, $Q(s )= \exp [{i2\pi s{\lambda^{ - 1}}({{x^2} + {y^2}} )} ]$, $\lambda $ is the wavelength, ${\bar{r}_s} \equiv ({{x_s},\; {y_s}} )$, and Cn is a complex constant of an arbitrary object point. According to previous research [22–24], the reconstruction distance zr of the voxel plane in the proposed configuration is denoted as
$${z_r} ={-} \frac{{({{f_{GPL}}{z_s} - {z_h}{z_s} + {f_{GPL}}{z_h}} )({{f_{GPL}}{z_s} + {z_h}{z_s} + {f_{GPL}}{z_h}} )}}{{2z_s^2{f_{GPL}}}}.$$
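For concreteness, the sketch below (Python, illustrative only) evaluates Eq. (1) and Eq. (3) for a hypothetical display-to-MLA gap g; the values of f_MLA, f_GPL, z_h, and the 150 mm MLA-to-GP-lens separation are taken from the experimental section, and the sign of the result simply follows the convention of Eq. (3).

```python
# Illustrative check of Eqs. (1) and (3). Only f_mla, f_gpl, z_h, and mla_to_gpl
# are taken from the experimental section; the gap g is a hypothetical value.
f_mla = 3.3        # focal length of an elemental lens of the MLA [mm]
g = 3.5            # display-to-MLA gap [mm] (hypothetical, slightly larger than f_mla)
f_gpl = 261.5      # focal length of the GP lens [mm]
z_h = 8.0          # GP lens to image sensor distance [mm]
mla_to_gpl = 150.0 # MLA to GP lens distance [mm], as in the reported setup

# Eq. (1): integration (voxel) distance in front of the MLA (real mode when g > f).
l = f_mla * g / (g - f_mla)

# Eq. (3): reconstruction distance of the voxel plane for a voxel at z_s from the GP lens.
def reconstruction_distance(z_s, f_gpl, z_h):
    num = (f_gpl * z_s - z_h * z_s + f_gpl * z_h) * (f_gpl * z_s + z_h * z_s + f_gpl * z_h)
    return -num / (2.0 * z_s ** 2 * f_gpl)

z_s = mla_to_gpl - l  # a real-mode voxel lies l mm closer to the camera than the MLA
print(f"l = {l:.1f} mm, z_s = {z_s:.1f} mm, z_r = {reconstruction_distance(z_s, f_gpl, z_h):.1f} mm")
```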
The hologram obtained by GP-SIDH can be reconstructed at a specific depth plane using a numerical reconstruction method such as the Fresnel integral method or the angular spectrum method [33]. With the reconstruction distance zr from Eq. (3), the conversion from the hologram plane to the spatial plane, i.e., the numerical reconstruction procedure, is expressed as
$$U({x,\; y;{z_r}} )= {\mathrm{{\cal F}}^{ - 1}}({\mathrm{{\cal F}}({U({x,\; y;{z_0}} )} )\cdot H({u,\; v;{z_r}} )} )$$
where $U({x,\; y;z} )$ denotes the field at depth z, $\mathrm{{\cal F}}$ and ${\mathrm{{\cal F}}^{ - 1}}$ are the Fourier transform and inverse Fourier transform operators, z0 denotes the initial location, and $H({u,\; v;{z_r}} )$ denotes the transfer function of free-space propagation over the distance zr.
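A minimal sketch of the numerical reconstruction of Eq. (4) with the angular spectrum transfer function is given below (Python/NumPy). It assumes a square pixel pitch and suppresses evanescent components; it is an illustration, not the authors' implementation.

```python
import numpy as np

def angular_spectrum_propagate(U0, z, wavelength, pitch):
    """Propagate a complex field U0 by distance z (Eq. (4)) using the
    angular spectrum transfer function. All lengths share one unit."""
    ny, nx = U0.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H(u, v; z); evanescent components are set to zero.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(U0) * H)
```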

2.2 Autofocusing algorithm

In the numerical reconstruction of SIDH, the image properties change with the reconstruction depth. In digital photography and microscopy, an autofocus algorithm finds the in-focus image according to the position of an optical element [34–41]. In digital holography, to compare the focus of numerically reconstructed images, image analysis techniques have been proposed based on various measures, such as image statistics [36], correlation [37], differentiation [38], weighted Fourier spectra [39], and the discrete cosine transform [40].

The autofocus algorithm is applied to the amplitude of the field at a specific depth obtained through diffraction theory, such as the angular spectrum method. The result of the autofocus algorithm is the focus metric M, a quantified sharpness value over the reconstruction depth range. The depth of the voxel is found by maximizing the focus metric M as follows,

$${z_{voxel}} = \mathop {\textrm{argmax}}\limits_{{z_r}} M$$
For example, the normalized variance (VAR) is one of the most popular autofocus algorithms; it is based on the statistics of the distribution of luminance or gray level across the image [36]. The focus metric M at propagation distance z through VAR is
$${\textrm{M}_{VAR}}(z )= \frac{1}{{MN{\mu _g}}}\mathop \sum \limits_{m = 1}^M \mathop \sum \limits_{n = 1}^N {[{g({m,\; n,\; z} )- \; {\mu_g}} ]^2},$$
where $g({m,\; n,\; z} )$ is the gray level of the amplitude reconstruction at coordinates m, n for propagation distance z, N and M are the width and height, respectively, and ${\mu _g}$ is the mean of the amplitude reconstruction.
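As an illustration, the following sketch implements the VAR metric of Eq. (6) and the depth search of Eq. (5), reusing the angular_spectrum_propagate helper sketched above; the function and variable names are ours, not the authors'.

```python
import numpy as np

def var_metric(amplitude):
    """Normalized variance focus metric of Eq. (6) on an amplitude image."""
    mu = amplitude.mean()
    return ((amplitude - mu) ** 2).sum() / (amplitude.size * mu)

def estimate_voxel_depth(hologram, depths, wavelength, pitch):
    """Eq. (5): sweep the reconstruction depth and return the depth that
    maximizes the focus metric, together with the full focus curve."""
    metrics = []
    for z in depths:
        field = angular_spectrum_propagate(hologram, z, wavelength, pitch)
        metrics.append(var_metric(np.abs(field)))
    metrics = np.asarray(metrics)
    return depths[np.argmax(metrics)], metrics
```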

3. Experiment and result

3.1 Experimental setup

The GP-SIDH used in the experiment is composed of a customized GP lens and a polarization image sensor (Lucid Vision Labs PHX050S1) with a total of 2448 × 2048 pixels and a pixel pitch of 3.45 µm. The customized GP lens acts the same as the commercial model (ImagineOptix, USA) but has a longer focal length of 261.5 mm at a wavelength of 550 nm. It is fabricated by interferometry and liquid crystal (LC) alignment [41]. The customized GP lens modulates the left-handed circularly polarized (LHCP) half of the incoming light into converging light and transmits the other half. The GP lens to image sensor distance is 8 mm.

Figure 2 shows the procedure from hologram recording to reconstruction for a resolution target located 150 mm from the GP lens. First, a fringe pattern with four different polarizations is captured with the polarized image sensor. Figures 2(a)-2(d) are the four polarization intensity images at 0°, 45°, 90°, and 135°. To acquire a complex hologram free of the bias and twin image, the four-step phase-shift method is employed [42]. Figure 2(e) is the phase-angle part and Fig. 2(f) is the amplitude part of the complex hologram. Finally, the focus metrics are extracted by applying the autofocus algorithms to the numerically reconstructed hologram while sweeping the depth.
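A minimal sketch of this four-step combination is shown below, assuming the 0°, 45°, 90°, and 135° polarizer channels correspond to geometric phase shifts of 0, π/2, π, and 3π/2; the exact recombination used by the authors may differ in sign convention.

```python
import numpy as np

def complex_hologram(i0, i45, i90, i135):
    """Four-step phase-shifting combination: with phase steps of 0, pi/2,
    pi, and 3pi/2, the bias and twin image cancel in (I0 - I90) + i(I45 - I135)."""
    i0, i45, i90, i135 = (np.asarray(x, dtype=float) for x in (i0, i45, i90, i135))
    return (i0 - i90) + 1j * (i45 - i135)
```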


Fig. 2. Hologram acquisition, reconstruction and autofocus result. Intensity at (a) 0°, (b) 45°, (c) 90°, (d) 135° polarization, (e) phase-angle part, (f) amplitude part of acquired phase-shift complex hologram. Numerical reconstruction at (g) 128 mm, (h) 152 mm, (i) 189 mm. (j) Normalized focus metrics value comparison depending on the applied autofocus algorithms.


The resulting focus curves and corresponding reconstruction depths are shown in Fig. 2(j). Four autofocus algorithms are used: the Tenengrad algorithm (TEN), the bandpass discrete cosine transform (DCT), Vollath’s F4 (VOL), and the normalized variance (VAR). TEN is a differentiation-based algorithm that finds the edges of an image by Sobel filtering [38]. DCT is a discrete cosine transform based algorithm [40]. VOL is a correlation-based algorithm [37], and VAR is an image statistics-based algorithm [36]. The peak of each focus metric indicates the in-focus depth, and the reconstructions at the corresponding depths are shown in Figs. 2(g)-2(i). According to the diffraction formula, the diffraction pattern of the hologram is inevitably affected by the pattern of the original image. In the case of the resolution target used in the experiment, the defocus patterns constructively interfere with each other, so they show high sharpness even when they are not actually in focus.
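For reference, the sketches below show two of the four criteria, TEN and VOL, in their generic textbook forms applied to an amplitude reconstruction; the bandpass DCT criterion is omitted for brevity, and these are not the authors' implementations.

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(img):
    """TEN: mean squared Sobel gradient magnitude of the amplitude image."""
    g = img.astype(float)
    gx = sobel(g, axis=1)
    gy = sobel(g, axis=0)
    return np.mean(gx ** 2 + gy ** 2)

def vollath_f4(img):
    """VOL (Vollath's F4): difference of the first- and second-lag
    autocorrelations along one image axis."""
    g = img.astype(float)
    return np.sum(g[:-1, :] * g[1:, :]) - np.sum(g[:-2, :] * g[2:, :])
```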

To measure the formed voxel, it is necessary to measure how the DOF varies with the numerical aperture. We measured and analyzed point source holograms as a function of the aperture stop size. Figure 3 shows the numerical reconstructions of a voxel point located at 150 mm as a function of the distance from the camera system. The point image objects are formed by projection lenses of f/# 1.4 and 16, respectively. As shown in Fig. 3, the working distance of the point image formed by the lens with the large aperture and small f/# is longer than that with the small aperture and large f/#; this is difficult to identify from the reconstructed planar images but is clearly confirmed in the intensity distribution along the longitudinal direction.


Fig. 3. Numerical reconstructions of the hologram of a point image. (a) Schematic diagram of the DOF variance experiment. (b), (c), and (d) Planar images at 150, 175, and 200 mm using the lens of f/# 1.4, while (e), (f), and (g) those using f/# 16; (h) and (i) intensity distributions along the longitudinal direction from 100 mm to 250 mm for each f/#, respectively.


Figure 4 shows the experimental setup for estimating the voxel properties of integral imaging using our camera system. The distance between the MLA of the integral imaging system and the GP lens of GP-SIDH is 150 mm. To implement the integral imaging display, we use a smartphone display (Xperia XZ Premium) with a total of 3840 × 2160 pixels and a pixel pitch of 31.5 µm. The elemental lens pitch of the MLA is 1 mm, and the focal length of the elemental lens is 3.3 mm. The gap between the MLA and the flat panel display (FPD) is set to match the focal length of the MLA, which puts the integral imaging system into the focused mode. The expressible range of the implemented integral imaging system is about 100 mm from the location of the MLA. If the integrated image is formed far outside the expressible range, the image quality is severely degraded and the image is not recognizable. The central wavelength is 550 nm. The GP lens functions over a broadband spectral range with the wavelength dependency ${f_{gp}}(\lambda )= {f_t}({{\lambda_t}/\lambda } )$, where ${f_t}$ and ${\lambda _t}$ are the focal length and wavelength of the template lens [43]. Therefore, the wavelength dependency of the 3D display is also reflected in the measurement results.
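The chromatic behaviour of the GP lens follows directly from the relation above; the snippet below simply evaluates it at a few illustrative wavelengths, taking the template focal length and wavelength from the text.

```python
# Chromatic behaviour of the GP lens, f_gp(lambda) = f_t * (lambda_t / lambda),
# evaluated at a few illustrative wavelengths (f_t and lambda_t from the text).
f_t, lambda_t = 261.5, 550.0   # template focal length [mm] and wavelength [nm]
for lam in (450.0, 550.0, 650.0):
    print(f"{lam:.0f} nm -> f_gp = {f_t * lambda_t / lam:.1f} mm")
```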


Fig. 4. Experimental setup of GP-SIDH camera and integral imaging system


3.2 Complex hologram of integrated image

Three-line images displayed on the integral imaging system in the real and virtual ranges are captured as complex holograms, and the depth locations of the integrated images are analyzed using an autofocus algorithm. Figure 5 shows the tested integrated image and its captured hologram. Figure 5(a) is a picture taken with a conventional 2D camera, Fig. 5(b) shows the phase angle image of the hologram obtained by GP-SIDH, and Fig. 5(c) shows the numerical reconstruction of the hologram using the angular spectrum method together with the region of interest (ROI). In the autofocus analysis, the focus metrics are calculated only within the ROI, which reduces unnecessary error and increases the accuracy and effectiveness of the calculation.
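In code, this amounts to cropping the reconstruction before evaluating the metric; the small helper below is a hypothetical illustration of that step, usable with any of the metric functions sketched earlier (e.g., var_metric).

```python
def roi_focus_metric(recon_amplitude, roi, metric):
    """Evaluate a focus metric only inside the ROI of a reconstruction.
    roi = (row0, row1, col0, col1); the coordinates are hypothetical."""
    r0, r1, c0, c1 = roi
    return metric(recon_amplitude[r0:r1, c0:c1])
```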


Fig. 5. (a) Picture of the integrated image using a general CCD camera, (b) phase angle image of the captured complex hologram, (c) numerical reconstruction image of the integrated image and region of interest (ROI) of an autofocus algorithm


Figure 6 (with Visualization 1) shows the analysis results of the integrated image in the real mode range, designed to be located 40 mm from the MLA; this integration distance, l, corresponds to 110 mm from the GP-SIDH camera. Figures 6(a), 6(b), and 6(c) show pictures of the integrated image on a diffuser placed at 118 mm, 108 mm, and 98 mm, respectively. These results show that the integrated image is focused at 108 mm, slightly different from the designed integration location of 110 mm; we attribute this small deviation to practical errors and distortions of system factors such as the gap and the specifications of the MLA. Flipped images are observed on the diffuser around the central front-view image, but they are not observed within the viewing angle. Figures 6(d), 6(e), and 6(f) show numerically reconstructed images of the hologram at the same locations as (a), (b), and (c), respectively. Although a single reconstruction makes it difficult to clearly distinguish the degree of focus and blur, the transition between focus and defocus is clearly observed in the continuous reconstructions along the longitudinal direction shown in Visualization 1. Figure 6(g) compares the focus metric curves for the different autofocus criteria. The peak of each focus metric is assumed to indicate the focal point of the voxel. As shown in Fig. 6(g), all algorithms give a similar focus distance. The shapes and tendencies of the focus metric curves are affected by various factors such as the object pattern and the applied algorithm, but the variations are insignificant for defining the focal position of the image. Therefore, the voxel location can be estimated and found using the autofocus algorithm.


Fig. 6. Integrated image in real mode located at l = 40 mm. (a), (b), and (c) Captured images on the diffuser according to the distance from the GPL; (d), (e), and (f) numerical reconstructions of the hologram along the distance from the GPL (Visualization 1); (g) focus metric curves depending on the autofocus criteria (see Visualization 1).


For the integrated image in the virtual mode, where the voxel is located behind the MLA, it is impossible to place a diffuser at the position of the integrated image. However, the capturing process using GP-SIDH can be performed almost exactly as in the real mode, and the location of the integrated image in the virtual mode can also be measured using an autofocus algorithm. Figures 7(a), 7(b), and 7(c) show the numerical reconstructions of the integrated image located at l = -40 mm, which corresponds to a distance of 190 mm from the GP lens. Similar to Visualization 1 for the real mode, the transition between focus and blur is indicated more clearly in Visualization 2, the continuous reconstruction along the longitudinal distance, which shows that the voxel is located around 192 mm from the GP lens, near where the integrated image is designed to be placed. The location of the integrated image can also be estimated from the focus metric curves shown in Fig. 7(d). As shown in Fig. 7(d), the differences in the shapes and trends of the focus metric curves between the autofocus criteria appear larger than in the real mode case. Moreover, the DCT method indicates an incorrect value for the voxel location. Therefore, we choose the VAR method for estimating the integrated image location because it shows the most acceptable shape and trend of the focus metric curves.


Fig. 7. Integrated image in virtual mode located at l = -40 mm. (a), (b), and (c) Numerical reconstructions of the hologram along the distance from the GPL (Visualization 2); (d) focus metric curves depending on the autofocus criteria (see Visualization 2).


To verify the estimation method for the voxel position of integral imaging, we measured integrated images designed to be located at 20, 40, and 80 mm from the MLA in both real and virtual modes, as shown in Fig. 8(a) and Visualization 3 and Visualization 4. Six sets of elemental images, one for each position, were prepared and displayed sequentially on the FPD, and each integrated image was captured as a complex hologram using the GP-SIDH camera. The captured holograms were analyzed with the VAR method, and the voxel positions of the integrated images were calculated as shown in Figs. 8(b) and 8(c) for the real and virtual modes, respectively. For the real mode, the voxel positions are measured as 131 mm (l = 19 mm), 108 mm (l = 42 mm), and 92 mm (l = 58 mm), respectively. Compared with the designed values, the difference in voxel location is small for relatively short integration distances, but the error becomes significant as the integration distance increases. Visualization 3 compares the reconstruction and in-focus depth of each real mode voxel; the highest sharpness is achieved at the voxel position obtained through the proposed method. Similar behavior is observed in the virtual mode, as shown in Fig. 8(c). The voxel locations are calculated as 170 mm (l = -20 mm), 192 mm (l = -42 mm), and 214 mm (l = -64 mm). Visualization 4 compares the virtual mode voxel reconstructions and shows the results corresponding to Fig. 8(c). Although there is some error, the voxel location can be measured and analyzed by the proposed method using the GP-SIDH camera with an autofocus algorithm.
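The conversion between the measured depth from the GP lens and the integration distance l follows from the 150 mm MLA-to-camera separation; the small check below reproduces the l values quoted above from the measured depths, assuming this simple sign convention (positive l for real mode, negative l for virtual mode).

```python
# Convert a measured voxel depth z (distance from the GP lens, mm) into an
# integration distance l relative to the MLA, assuming the 150 mm
# MLA-to-GP-lens separation of this setup.
MLA_TO_GPL = 150.0

def integration_distance(z_voxel_mm):
    return MLA_TO_GPL - z_voxel_mm

for z in (131, 108, 92, 170, 192, 214):   # measured voxel depths quoted above
    print(f"z = {z:3d} mm  ->  l = {integration_distance(z):+.0f} mm")
```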


Fig. 8. Estimation of voxel location using the focus metric curve of the VAR method. (a) Schematic diagram of the implemented system and voxel locations, (b) voxel estimation results for real mode integrated images, (c) results for virtual mode integrated images (see Visualization 3, Visualization 4).


4. Discussion

In summary, we demonstrated a voxel measurement system based on GP-SIDH. Previous studies that evaluated the quality of 3D images suggested techniques for indirectly measuring the transfer function through the intensity of the optical field [11–13]. In contrast, our approach can be applied to the analysis of various types of 3D displays because it measures the wavefront reproduced by the 3D display. However, the performance of our system depends on the implementation of the 3D display. In the case of integral imaging, the location error increases in proportion to the integration distance. The integration position is determined by the reciprocal pitch of the elemental image, and larger integration distances are more sensitive to misalignments and variations of the integral imaging system components. Therefore, the differences between the designed and the measured voxel positions become larger when the designed location exceeds 80 mm in both the real and virtual modes.

It should be noted that the focus metric value only indicates the location of the voxel; it is not an evaluation of image quality or the clearness of the integration. When the integration location is too far, the integrated image is degraded and blurred so that it is no longer recognizable. Figure 9 shows the results for voxels at larger integration distances where the integrated image cannot be observed. The integrated image at $l ={-} 120\; mm$ is difficult to recognize, yet the voxel measurement still proceeds, with an error. Therefore, for more accurate voxel estimation, image quality evaluation should be considered at the same time.


Fig. 9. Image degradation of large-integration-distance voxels. (a) l = -80 mm, (b) l = -120 mm, (c) voxel locations using the VAR method.


5. Conclusion

A voxel is a volume picture element formed in a real or virtual space region and is the viewing characteristic that defines a volumetric 3D display. In this study, we propose a voxel estimation technique based on self-interference incoherent digital holography, which can acquire holograms under incoherent light conditions. The proposed method analyzes the depth characteristics of voxels through autofocus algorithms over a range of about 200 mm, using a compact optical configuration less than 4 cm long. Our approach is also robust against vibration, so it can be integrated into a motion stage and used as a viewing angle measurement system. To verify the proposal, experiments on an integral imaging system were conducted by measuring the voxels of integrated images. The results confirm that not only the real mode but also the virtual mode voxels can be analyzed by our system. The next steps in this work should include optimization and enhancement of the optical performance, with the goal of developing a holographic camera system specialized for acquiring holograms of macroscopic light fields. We expect this work to provide quantitative evaluation indicators for the future development of 3D displays and to offer new inspiration.

Funding

This work was supported by Samsung Display Co., Ltd.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Lambooij, M. Fortuin, I. Heynderickx, and W. IJsselsteijn, “Visual discomfort and visual fatigue of stereoscopic displays: a review,” J. Imaging Sci. Tech. 53(3), 030201 (2009). [CrossRef]  

2. B. C. Kress, Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (SPIE, 2020).

3. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017). [CrossRef]  

4. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

5. F. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]  

6. X. Wang and H. Hua, “Depth-enhanced head-mounted light field displays based on integral imaging,” Opt. Lett. 46(5), 985–988 (2021). [CrossRef]

7. S.W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn J. Appl. Phys. Lett. 44(2), L71–L74 (2005). [CrossRef]  

8. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]  

9. Y. M. Kim, K. H. Choi, and S. W. Min, “Analysis on expressible depth range of integral imaging based on degree of voxel overlap,” Appl. Opt. 56(4), 1052–1061 (2017). [CrossRef]  

10. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]  

11. S. K. Kim, D. W. Kim, Y. M. Kwon, and J. Y. Son, “Evaluation of the monocular depth cue in 3D displays,” Opt. Express 16(26), 21415–21422 (2008). [CrossRef]  

12. N. Kawagishi, R. Kakinuma, and H. Yamamoto, “Aerial image resolution measurement based on the slanted knife edge method,” Opt. Express 28(24), 35518–35527 (2020). [CrossRef]  

13. N. Kawagishi, K. Onuki, and H. Yamamoto, “Comparison of divergence angle of retro-reflectors and sharpness with aerial imaging by retro-reflection (AIRR),” IEICE Trans. Electron. E100.C(11), 958–964 (2017). [CrossRef]  

14. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

15. M. K. Kim, “Principles and techniques of digital holographic microscopy,” SPIE Rev. 1, 018005 (2010). [CrossRef]  

16. M. K. Kim, “Incoherent digital holographic adaptive optics,” Appl. Opt. 52(1), A117–A130 (2013). [CrossRef]  

17. M. K. Kim, “Full color natural light holographic camera,” Opt. Express 21(8), 9636–9642 (2013). [CrossRef]  

18. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007). [CrossRef]  

19. J. Rosen, S. Alford, V. Anand, J. Art, P. Bouchal, Z. Bouchal, M. U. Erdenebat, L. Huang, A. Ishii, S. Juodkazis, N. Kim, P. Kner, T. Koujin, Y. Kozawa, D. Liang, J. Liu, C. Mann, A. Marar, A. Matsuda, T. Nobukawa, T. Nomura, R. Oi, M. Potcoava, T. Tahara, B. L. Thanh, and H. Zhou, “Roadmap on recent progress in FINCH technology,” J. Imaging 7(10), 197 (2021). [CrossRef]  

20. Y. Kashter and J. Rosen, “Enhanced-resolution using modified configuration of Fresnel incoherent holographic recorder with synthetic aperture,” Opt. Express 22(17), 20551–20565 (2014). [CrossRef]  

21. N. Siegel and G. Brooker, “Single shot holographic super-resolution microscopy,” Opt. Express 29(11), 15953–15968 (2021). [CrossRef]  

22. B. Katz and J. Rosen, “Super-resolution in incoherent optical imaging using synthetic aperture with Fresnel elements,” Opt. Express 18(2), 962–972 (2010). [CrossRef]  

23. G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011). [CrossRef]  

24. J. Rosen and R. Kelner, “Modified Lagrange invariants and their role in determining transverse and axial imaging resolutions of self-interference incoherent holographic systems,” Opt. Express 22(23), 29048–29066 (2014). [CrossRef]  

25. N. Siegel, V. Lupashin, B. Storrie, and G. Brooker, “High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers,” Nat. Photonics 10(12), 802–808 (2016). [CrossRef]

26. T. Tahara, T. Kanno, Y. Arai, and T. Ozawa, “Single-shot phase-shifting incoherent digital holography,” J. Opt. 19(6), 065705 (2017). [CrossRef]  

27. X. Quan, O. Matoba, and Y. Awatsuji, “Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings,” Opt. Lett. 42(3), 383–386 (2017). [CrossRef]  

28. H. Zhou, L. Huang, X. Li, X. Li, G. Geng, K. An, Z. Li, and Y. Wang, “All-dielectric bifocal isotropic metalens for a single-shot hologram generation device,” Opt. Express 28(15), 21549–21559 (2020). [CrossRef]  

29. K. Choi, J. Yim, S. Yoo, and S. W. Min, “Self-interference digital holography with a geometric-phase hologram lens,” Opt. Lett. 42(19), 3940–3943 (2017). [CrossRef]  

30. K. Choi, J. Yim, and S.-W. Min, “Achromatic phase shifting self-interference incoherent digital holography using linear polarizer and geometric phase lens,” Opt. Express 26(13), 16212–16225 (2018). [CrossRef]  

31. K. Choi, K.-I. Joo, T.-H. Lee, H.-R. Kim, J. Yim, H. Do, and S.W. Min, “Compact self-interference incoherent digital holographic camera system with real-time operation,” Opt. Express 27(4), 4818–4833 (2019). [CrossRef]  

32. E. Hecht, Optics, 5e (Pearson Education India, 2002).

33. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2005).

34. E. S. Fonseca, P. T. Fiadeiro, M. Pereira, and A. Pinheirao, “Comparative analysis of autofocus functions in digital in-line phase-shifting holography,” Appl. Opt. 55(27), 7663–7674 (2016). [CrossRef]  

35. H. A. Ilhan, M. Doğar, and M. Özcan, “Digital holographic microscopy and focusing methods based on image sharpness,” J. Microsc. 255(3), 138–149 (2014). [CrossRef]

36. F. Dubois, A. E. Mallahi, J. Dohet-Eraly, and C. Yourassowsky, “Refocus criterion for both phase and amplitude objects in digital holographic microscopy,” Opt. Lett. 39(15), 4286–4289 (2014). [CrossRef]  

37. D. Vollath, “Automatic focusing by correlative methods,” J. Microsc. 147(3), 279–288 (1987). [CrossRef]  

38. E. Krotkov and J.P. Martin, “Range from focus,” Proceedings of IEEE International Conference on Robotics and Automation. 3, 1093–1098 (1986).

39. L. Firestone, K. Cook, K. Culp, N. Talsania, and K. Preston, “Comparison of autofocus methods for automated microscopy,” Cytometry 12(3), 195–206 (1991). [CrossRef]  

40. J. Jeon, I. Yoon, J. Lee, and J. Paik, “Robust focus measure for unsupervised auto-focusing based on optimum discrete cosine transform coefficients,” IEEE Trans. Consumer Electron. 57(1), 1–5 (2011). [CrossRef]  

41. J.-M. Geusebroek, F. Cornelissen, A. W. Smeulders, and H. Geerts, “Robust autofocusing in microscopy,” Cytometry 39(1), 1–9 (2000). [CrossRef]  

42. P. Chen, B.-Y. Wei, W. Hu, and Y.-Q. Lu, “Liquid-crystal-mediated geometric phase: from transmissive to broadband reflective planar optics,” Adv. Mater. 32, 1903665 (2020). [CrossRef]  

43. C. Yousefzadeh, A. Jamali, C. McGinty, and P. J. Bos, ““Achromatic limits” of Pancharatnam phase lenses,” Appl. Opt. 57(5), 1151–1158 (2018). [CrossRef]  

Supplementary Material (4)

Visualization 1: Numerical reconstruction of the real mode integral imaging voxel.
Visualization 2: Numerical reconstruction of the virtual mode integral imaging voxel.
Visualization 3: Numerical reconstruction of real mode voxels at three different locations.
Visualization 4: Numerical reconstruction of virtual mode voxels at three different locations.
