
Color Fourier orthoscopic holography with laser capture and an LED display

Open Access

Abstract

We present an end-to-end full color Fourier holographic imaging approach, which involves standard holographic recording with three wavelengths and an improved LED-driven display. It provides almost undistorted orthoscopic reconstruction of large objects in full color, which can be viewed with the naked eye. High quality reconstruction is preserved across large object depths, measured in meters, as shown theoretically and experimentally. Our imaging approach is based on capture, processing and display of the object wave fields without spherical phase factors. This efficient convention, combined with a novel numerical propagator for confocal fields, enables complete axial decoupling of both ends of the imaging chain and, consequently, free manipulation of the axial position as well as the size of the image without visible deformations and with minimal computational effort.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

It is commonly recognized that digital holography (DH) and holographic display constitute the best-known framework for truly 3D imaging and video, as they aim to recreate the optical field emitted by a recorded scene [1]. With the present state of technology, the digital reproduction of all 3D perception cues offered by classical holography is very difficult. Therefore, known implementations of holographic video involve compromises on the side of data capture or display, which influence the visual properties of the reconstructed image. One common approach is to simulate holographic content with computer generated holograms (CGH) [2–5], which pose no restrictions on the data capture system and focus on improving the display side [6–11]. Another proposition is to use integral photography for 3D recording [12], which feeds a standard holographic display based on Spatial Light Modulator (SLM) technology [13]. The technique facilitates 3D perception with the naked eye but is fundamentally limited in recorded image depth and resolution. A complete chain of end-to-end holographic imaging was attempted by a few groups. Most of the works involve an elementary hardware design consisting of a single holographic camera and a corresponding SLM [14–17], which projects a real image back to the object space. More advanced systems utilize a set of CCD cameras [18] or a synthetic aperture acquisition [19] coupled with a multi-SLM spherical display [20–22] to extend horizontal parallax. In all cases the obtained image suffers from inverted depth and a key-hole effect due to divergence of the reconstructed object field. In effect, only small images can be observed as a whole from a single position. An alternative way is to project a virtual image of a Fresnel hologram [23]. In this setting the image is naturally orthoscopic, but the viewing zone coincides with the physical SLM, which means that the eye cannot be placed in the optimal region of image perception. Our recent work [24] provides a distortion-free monochromatic solution to the pseudoscopic geometry of the real image. However, the method restricts the axial position of the image, requires an additional lens, and incurs some loss of image quality at the processing stage.

In contrast to the state of the art, this work proves that it is possible to obtain color reconstructions of large real orthoscopic objects, with arbitrary position and size and without 3D deformations, where the full image can be viewed with the naked eye. The geometrical manipulations of the image are especially simple; they require only additional hologram cropping. All these features are achieved by the Fourier holographic imaging presented here, which is based on axial decoupling of both ends of the system. The system involves standard lensless digital Fourier holographic recording [25] with three wavelengths, an improved version of the LED illumination display built using the viewing window (VW) concept [26], and the Confocal Fresnel Propagation (CFP) introduced here. The proposed imaging approach has a small viewing angle, thus the position of the viewer is spatially restricted and, in consequence, the viewer is unable to observe different perspectives of the image. This feature can be improved using spatial or temporal multiplexing [9,18–22] or an eye tracking system [26]. The Fourier holographic imaging is based on capture, processing and display of the object wave without the corresponding spherical phase factor. This way of wave encoding is referred to as the Compact Space Bandwidth Product (CSBP) representation [27]. Thanks to its full use, the amount of Space Bandwidth Product (SBP) involved in the entire processing path is constant and determined by the display SLM. Thus, the characteristic advantage of the lensless Fourier holographic capture system in terms of SBP [28] is here extended to the entire imaging path. Experimentally, the numerical efficiency and flexibility of the solution is demonstrated by showing axial relocation and magnification of the orthoscopic holographic image.

The coupling between the capture and display systems is carried out by means of the introduced CFP method for Fourier holography, where the carriers of the propagated beams are confocal, i.e. have a common focus point. The efficiency and accuracy of the solution result from a fundamental feature of the algorithm: the size and resolution of the result are scaled according to the ratio of the radii of the propagated beams. This important feature allows free manipulation of the axial position as well as the linear size of the image without any additional computational cost. Also, a high quality image is obtained because the same readout curvature is introduced numerically and optically. The CFP method requires only two Fourier Transformations (FT) and no zero-padding. The technique takes a sampling idea from the non-paraxial CSBP Angular method [27], which, in contrast, requires four FTs and additional zero-padding, and is not based on confocal geometry.

A downside of all the aforementioned techniques is that they use laser light sources on the display side, producing coherent speckles and being potentially hazardous to human vision. Color LED illumination is the preferred alternative [29–33], but it is not straightforward to employ due to its reduced spatial coherence and the related image blur. The use of an LED in a VW display was already presented [30]. Here it is theoretically and experimentally demonstrated that high quality full color reconstruction can be obtained for objects with large depths, here 10 m. The theory indicates that further improvement of the reconstruction depth is possible when selecting the display magnification optimal for visual perception. In our previous display, based on a Fresnel configuration and unitary magnification, the low coherence of LEDs limited the reconstruction depth to 50 mm [31].

In the following we (i) present the principle of our 3D Fourier imaging, (ii) develop the CFP method, (iii) analyze geometrical deformations remaining in the reconstructed 3D volume of real-world objects, (iv) show the experimental implementation, (v) analyze coherence effects, and (vi) demonstrate display performance in reconstruction of holographic content, with emphasis on the image quality perceived by the naked eye for varying reconstruction depth.

2. 3D Imaging Principle

According to the sine condition in Abbe’s theory of image formation [34], nonunitary magnification of a 3D image necessarily entails a nonlinear relation between the transverse and longitudinal magnifications. Within the paraxial regime the longitudinal magnification is the square of the transverse one. Thus, a reconstructed volume is heavily distorted at larger magnifications. One example of 3D imaging is a direct reconstruction of a coherent digital hologram of an object of size 100 mm in the LED display system [31], for which the maximum size of the image is only 10 mm. In this case the required scale manifests itself in an almost flat reconstruction, as the transverse and longitudinal magnifications are 0.1 and 0.01, respectively, while our display has the capability of reconstructing close 3D copies of recorded color objects.

Figure 1 shows a schematic of the image formation process in the proposed Fourier holographic technique. Figure 1(a) illustrates registration of the primary hologram realized in the lensless DH Fourier configuration, while Fig. 1(b) depicts reconstruction of a secondary hologram in the Fourier-based LED holographic display. The CFP routine provides the transfer between the primary and secondary hologram planes. In the capture geometry the recorded object term OR* is a product of the object wave O and the conjugated reference wave R*:

$$ OR^*(u_1) = O_c(u_1,-R_1) = O(u_1)\exp\left[-\frac{i k_{n1}u_1^2}{2R_1}\right], \tag{1} $$
where subscript 1 indicates the capture system and n one of the RGB wavelengths used for recording. The notation $O_c(u_1,-R_1)$ expresses the CSBP representation of O, which is the object wave O with the paraxial spherical phase factor defined by radius $R_1$ removed. This CSBP representation is applied throughout the entire manuscript to improve the clarity of presentation as well as to highlight the efficiency of the data handling of our imaging. According to the properties of the lensless Fourier DH, the object plane containing the reference point source can be reconstructed by means of a Fourier Transformation (FT). If we express both the hologram field $O_c(u_1,-R_1)$ and the image field $O_c(x_1,R_1)$ in the CSBP convention, this operation takes the following simple form:
$$ \mathrm{FT}\left[O_c(u_1,-R_1)\right] = O_c(x_1,R_1) = O(x_1)\exp\left[\frac{i k_{n1}x_1^2}{2R_1}\right]. \tag{2} $$
This FT relation implies that the SBP of $O_c$ at the hologram and numerical reconstruction planes is the same and equals the hologram pixel count $N_1 = B_{x1}\times 1/\Delta_{x1}$. The parameters $B_{x1}$ (transverse dimension of the numerically reconstructed imaging space) and $1/\Delta_{x1}$ (sampling frequency) are important for the processing path, since both define the physical properties of the captured object wave, while their product represents the minimum SBP that can support this object wave. Naturally, one can always sample the actual object wave, but this is highly inefficient since the SBP of the sole spherical carrier of this object wave is much larger than $N_1$. Therefore, an efficient display design should accept a CSBP representation of the object wave as input data.
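To make the bookkeeping concrete, the sketch below illustrates the single-FT reconstruction of Eq. (2) in NumPy; the hologram is a random stand-in and the parameter values follow the experimental setup of Sec. 5, so this is an illustration of the convention rather than the authors' processing code.

```python
import numpy as np

# Single-FT reconstruction of a lensless Fourier hologram in the CSBP
# convention (Eq. (2)). The hologram array is a synthetic stand-in; the
# parameters follow the capture setup described in Sec. 5.
N1 = 2048                  # hologram pixel count per dimension
du1 = 3.45e-6              # CCD pixel pitch [m]
lam, R1 = 0.5e-6, 1.06     # wavelength and capture radius [m]

holo = np.exp(2j * np.pi * np.random.rand(N1, N1))   # stand-in for O_c(u1, -R1)

# One centered FFT maps the hologram plane u1 to the image plane x1,
# preserving the pixel count, i.e. the SBP stays N1 x N1.
recon = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(holo)))   # O_c(x1, R1)

dx1 = lam * R1 / (N1 * du1)    # sample size at x1 (~75 um)
Bx1 = N1 * dx1                 # transverse imaging space (~154 mm)
print(f"dx1 = {dx1 * 1e6:.1f} um, Bx1 = {Bx1 * 1e3:.1f} mm, SBP = {recon.shape}")
```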

Fig. 1 Imaging principle of Fourier a) DH capture and b) display geometry.

Our holographic imaging is axially decoupled, which means that the axial position $R_2$ of the reconstructed image can be set freely. Thus, the secondary hologram plane $\xi_2$, being the magnified SLM plane, does not coincide with the plane $x_2$ carrying the image content (FT of the hologram). Nevertheless, one can always set the image position to $R_2 = R_1$, which enables reconstruction of a holographic copy of an object with unitary magnification, even when its size exceeds the diameter of the magnified SLM. The arbitrary selection of $R_2$ is carried out numerically by means of the CFP for converging or diverging fields, described in detail in Sec. 3. The algorithm takes an image field $O_c(x_2,R_2)$ and returns the object wave $O_c(\xi_2,F_f)$ of the secondary hologram plane with unchanged SBP. The sample size is modified according to $F_f/R_2$. The spherical carriers have a common focal point at the VW, thus the propagation distance always equals $R_2 - F_f$. On the display side, the secondary hologram plane $\xi_2$ contains the field lens $L_f$. The SLM is addressed with the CSBP field $O_c(\xi_2,F_f)$, while the curvature of the object field is supplemented by the lens $L_f$.

The full processing path of the capture-display system has four main steps: (i) numerical reconstruction of the object wave $O_c(x_1,R_1)$ (Eq. (2)), (ii) numerical conversion of $O_c(x_1,R_1)$ into $O_c(x_2,R_2)$ (Sec. 4), (iii) propagation over the distance $R_2 - F_f$ to the plane $\xi_2$, resulting in $O_c(\xi_2,F_f)$ (Sec. 3), and (iv) complex coding of $O_c(\xi_2,F_f)$ [31,35]. The strength of our Fourier imaging relies mostly on steps (ii) and (iii) of the processing, where the axial position and dimension of the reconstructed image are set. The axial locations of the object and its image are determined by $R_1$ and $R_2$ of the corresponding CSBP representations. Thus, the change of the axial position of the image does not require additional processing; the position, set as the readout curvature $R_2$, is a parameter of the CFP. There is another advantage coming from our imaging technique: since the hologram and the CSBP representation of the image are related by FT, the magnification can be realized by clipping or zero-padding of the hologram in steps (i), (ii), as sketched below.
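The four steps can be collected into a short schematic driver; the helper names below are placeholders standing for the operations just listed (they are not a published API), and the CFP propagator itself is sketched in Sec. 3.

```python
import numpy as np

# Schematic driver for the four-step processing path. 'cfp' is expected to be
# a propagator such as the CFP sketch of Sec. 3; the other steps are reduced
# to placeholders that only mark where the real operations belong.
ft = lambda a: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))

def process(holo, cfp, R2, Ff, mD=1.0):
    O_x = ft(holo)                    # (i)  single-FFT reconstruction, Eq. (2)
    if mD != 1.0:                     # (ii) crop / zero-pad hologram and FT (Sec. 4)
        raise NotImplementedError("rescaling is sketched in Sec. 4")
    O_xi = cfp(O_x, R2, Ff)           # (iii) CFP over the distance R2 - Ff (Sec. 3)
    return np.angle(O_xi)             # (iv) stand-in for complex coding [31,35]
```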

Even though the capture and display geometries are analogous, distortion-free 3D imaging of the entire object may not always be possible. Both systems may use different wavelengths or have different angular fields of view (aFoVs). The effect of the wavelength mismatch is straightforward and its minimization is discussed in Sec. 4. However, aFoV1 is determined by the pixel size of the CCD, while aFoV2 by the focal length $F_f$ and the dimension $B_{\xi 2}$ of the image of the SLM:

$$ \mathrm{aFoV}_1 = \frac{\lambda_1}{\Delta_{u1}} = \frac{B_{x1}}{R_1}\quad\text{and}\quad\mathrm{aFoV}_2 = \frac{B_{\xi 2}}{F_f} = \frac{B_{x2}}{R_2}, \tag{3} $$
where $B_{x1}$, $B_{x2}$ denote the maximum linear sizes of the object and its reconstruction at the planes $x_1$, $x_2$, respectively, and $\Delta_{u1}$ is the pixel size of the CCD. Therefore, when aFoV2 < aFoV1 the image has to be down-sized or clipped.

So far, the discussion was focused on the optical conjugation of the captured object with its image. Note that the planes $u_1$ and $u_2$, being the Fourier counterparts of the object/image planes, are optically conjugated as well. Plane $u_1$ is the hologram capture plane, while $u_2$ is known as a VW [26]. Its size is related to the parameters of capture as:

$$ B_{VW} = \frac{\lambda_2 F_f}{\Delta_{\xi 2}} = \frac{\lambda_2 R_2 B_{CCD}}{m_D\lambda_1 R_1}, \tag{4} $$
where $\Delta_{\xi 2}$ is the sample size of the magnified SLM at plane $\xi_2$, $B_{CCD}$ is the size of the CCD, $m_D$ is the digital transverse magnification of the image discussed in Sec. 4, and $\lambda_1$, $\lambda_2$ are the capture and display wavelengths, correspondingly. In a display a large aFoV2 is desirable, which may be obtained by decreasing $F_f$. However, this also decreases the VW, which should not be smaller than the pupil of the eye.

3. Confocal Fresnel Propagation Method

The coupling between the capture and display systems is carried out by means of the paraxial propagation algorithm for CSBP fields of common focus. The algorithm takes the display geometry of Fig. 1(b), where the field $O_c(\xi_2,F_f)$ at the SLM-conjugated plane $\xi_2$ is obtained from $O_c(x_2,R_2)$ of the image plane $x_2$. The propagation is derived from the well-known Fresnel diffraction formula

$$ O(\xi_2) = \int O(x_2)\exp\left\{\frac{i\pi(x_2-\xi_2)^2}{\lambda_n(R_2-F_f)}\right\}dx_2 \tag{5} $$
having the form of a convolution. Equation (5) describes a general case of diffraction, while in Fourier holography the beams evolve around confocal spherical carriers. The confocal geometry indicates that the wave fields at planes $x_2$ and $\xi_2$ with radii $R_2$ and $F_f$ have a common focus at the VW, and thus the propagation distance must be $R_2 - F_f$. By inserting the CSBP representations of the input and output defined by Eqs. (1)-(5) and taking the FT of both sides we obtain:
$$ \tilde{O}_c^{\xi}(f_\xi)\exp\{i\pi\lambda_n F_f f_\xi^2\} = \left[\tilde{O}_c^{x}(f_\xi)\exp\{i\pi\lambda_n R_2 f_\xi^2\}\right]\otimes\exp\{i\pi\lambda_n(R_2-F_f)f_\xi^2\}, \tag{6} $$
where the tilde terms are FTs of the CSBP representations indicated with the corresponding superscripts:
$$ \tilde{O}_c^{x}(f_\xi) = \int O_c(x_2,R_2)\exp\{-i2\pi x_2 f_\xi\}\,dx_2, \tag{7} $$
$$ \tilde{O}_c^{\xi}(f_\xi) = \int O_c(\xi_2,F_f)\exp\{-i2\pi\xi_2 f_\xi\}\,d\xi_2, \tag{8} $$
where ⊗ is a convolution and $f_x$, $f_\xi$ are the frequency vectors of planes $x_2$ and $\xi_2$, respectively. This result illustrates that the FTs of both CSBP representations are related via a product and a convolution with corresponding paraxial spherical waves. The convolution with a paraxial spherical wave can be expressed as a FT integral with two additional phase factors. Thus, performing FT on Eq. (6) and expanding the convolution on the right hand side gives:
$$ \tilde{O}_c^{\xi}(f_\xi) = \exp\{i\pi\lambda F_f f_\xi^2\}\int\!\left[\int\tilde{O}_c^{x}(f_x)\exp\{i\pi\lambda R_2 f_x^2 - i2\pi\lambda R_2 f f_x\}\,df_x\right]\exp\{i2\pi\lambda F_f f f_\xi\}\,df. \tag{9} $$
Let us now introduce a scaling relation between the frequency vectors that corresponds to convergence of the reconstructed wave under the confocality condition, namely $f_\xi = R_2 F_f^{-1} f_x$. This reduces Eq. (9) to two FTs. Hence, Eq. (9) becomes a simple analytical formula relating the FTs of the CSBP representations of the input and output fields:
$$ \tilde{O}_c^{\xi}(f_\xi) = \tilde{O}_c^{x}\!\left(F_f R_2^{-1}f_\xi\right)\times\exp\left\{i\pi\lambda F_f f_\xi^2\left(F_f R_2^{-1}-1\right)\right\}. \tag{10} $$
Note that $\tilde{O}_c^{x}(F_f R_2^{-1}f_\xi) = \tilde{O}_c^{x}(f_x)$, therefore no resampling is needed. When implementing this propagation algorithm using the Fast FT (FFT) we find that the frequency sample size in the output is $\Delta_{f\xi} = R_2 F_f^{-1}\Delta_{fx}$, which is a consequence of the introduced confocality. This leads to a corresponding change of the spatial sample size:
$$ \Delta_{\xi 2} = F_f R_2^{-1}\Delta_{x2}, \tag{11} $$
where $\Delta_{x2}$, $\Delta_{\xi 2}$ are the sample dimensions of the input and output. This indicates a strength of the CFP method, because it accounts for the physical modification of the parameters of the propagated beam. The size and resolution of the result are scaled according to the radius ratio of the propagated beams, which are the distances from the confocal point. This key feature enables propagation with high accuracy and high efficiency. It means that the Fourier DH content can be reconstructed in the display at an arbitrary location with the same pixel count and without loss of quality. Numerical realization of the CFP based on Eq. (10) is very efficient, as it takes only three steps: (i) calculation of the FFT of the CSBP of a field at plane $x_2$ (Eq. (7)), (ii) multiplication with a parabolic phase factor (Eq. (10)), and (iii) calculation of the inverse FFT (Eq. (8)).
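A minimal 1D NumPy sketch of this three-step routine is given below, following Eqs. (7), (10) and (11) under the sign conventions reconstructed above; the 2D case is the separable extension. It is an illustration, not the authors' implementation.

```python
import numpy as np

def cfp_propagate(O_x2, dx2, lam, R2, Ff):
    """Confocal Fresnel Propagation sketch: map the CSBP field O_c(x2, R2)
    to O_c(xi2, Ff). Returns the field and its sample size (Eq. (11))."""
    N = O_x2.size
    fx = np.fft.fftfreq(N, d=dx2)                 # frequency grid of plane x2 [1/m]
    spec = np.fft.fft(np.fft.ifftshift(O_x2))     # step (i): FFT of the CSBP field, Eq. (7)
    # Step (ii): parabolic phase factor of Eq. (10). Output index k carries the
    # scaled frequency f_xi = (R2/Ff) * fx[k]; the scaling itself is absorbed
    # into the output sample size, so the spectrum needs no resampling.
    f_xi = (R2 / Ff) * fx
    spec *= np.exp(1j * np.pi * lam * Ff * f_xi**2 * (Ff / R2 - 1.0))
    O_xi2 = np.fft.fftshift(np.fft.ifft(spec))    # step (iii): inverse FFT, Eq. (8)
    return O_xi2, (Ff / R2) * dx2                 # sample size scaled by Ff/R2

# Example: a smooth test envelope at R2 = 1.06 m mapped to the plane xi2 (Ff = 0.6 m).
x = (np.arange(1080) - 540) * 75e-6
field = np.exp(-x**2 / (2 * (5e-3)**2))
out, dxi2 = cfp_propagate(field, 75e-6, 0.515e-6, 1.06, 0.6)
print(f"samples: {out.size} (unchanged), output sample size: {dxi2 * 1e6:.1f} um")
```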

Let us compare the memory requirements of the CFP method of this work to classical propagation algorithms, which involve direct sampling of the diffraction field. The complete object wave is a product of the CSBP representation of the wave and the corresponding spherical carrier. The CSBP representation samples only the wave changes in reference to this spherical carrier. For our experimental setup, where the registration parameters are $\Delta_{u1}$ = 3.45 µm, λ = 0.5 µm, $R_1$ = 1.06 m and $N_1$ = 2048, the SBP of the CSBP representation of the object wave is SBP[$O_c(x_1,R_1)$] = 2048 × 2048. For the complete object wave the spherical carrier must also be sampled, thus its SBP is larger: SBP[$O(x_1)$] = $\mathrm{FoV}_1\times(\Delta_{x1}^{-1}+\Delta_{u1}^{-1})$. For the considered experimental parameters the SBP of the complete object wave is SBP[$O(x_1)$] = 463 × 2048 × 2048. The example indicates that, with direct sampling, the majority of the memory resources would be used for sampling the spherical carrier and not the wave information itself.
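The comparison can be checked with a few lines of arithmetic; the inflation factor below is a rough estimate built from the quoted parameters, and it lands near, though not exactly at, the 463× figure quoted above, which follows the bandwidth bookkeeping of the text.

```python
# Rough check of the memory comparison: a full object wave must also sample
# its spherical carrier, inflating the SBP per dimension by about
# (1 + dx1/du1) relative to the CSBP representation.
du1, lam, R1, N1 = 3.45e-6, 0.5e-6, 1.06, 2048
dx1 = lam * R1 / (N1 * du1)                # CSBP sample size at x1 (~75 um)
inflation_2d = (1 + dx1 / du1) ** 2        # 2D overhead of sampling the carrier
print(f"SBP[O_c] = {N1} x {N1}; "
      f"SBP[O] ~ {inflation_2d:.0f} x {N1} x {N1} (text: 463 x 2048 x 2048)")
```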

4. Image magnification, axial shift and the capture-display mismatch

This work emphasizes color imaging of 3D real objects with unitary 3D magnification. Nevertheless, the imaging path also enables geometrical manipulation of the image: axial shift and transverse magnification $m_D$. The subscript D indicates that this magnification is introduced digitally. In the case of 1:1 imaging ($m_D$ = 1) small distortions in the reconstruction of the 3D volume are unavoidable due to the slightly different wavelengths used in recording and reconstruction. Introducing $m_D \neq 1$ leads to increased distortions due to different radii or transverse magnifications.

Geometrical manipulation of the holographic image, discussed in Sec. 5, is especially simple within the pipeline of the proposed Fourier holography. In this case the processing path needs to be supplemented with rescaling of CSBP representation of the holographic image. This takes care of the transverse magnification. The axial position of the reconstructed image R2 is a parameter of CFP propagation, which is a part of the processing path.

Figure 1 presents the coordinate convention used here for the analysis of the reconstructed volumetric distortions, where the axial coordinates $z_1$ = 0 and $z_2$ = 0 indicate the central planes of the corresponding 3D volumes. The numerical part of the imaging process aims to reconstruct the center region of the 3D object (plane $x_2$) without distortions and chromatic aberrations. This is realized by introducing the parameters of the mismatch between the capture and display systems (curvatures, wavelengths, transverse magnification) to the CSBP field at $x_2$, which contains the object field captured with $R_1$ and $\lambda_1$. The magnification $m_D$ is technically introduced by resampling the holographic image $O_c(x_1,R_1)$, simply by cutting and zero-padding of the primary hologram and its FT, respectively, as sketched below. Since the SLM displays a diffracted wave that is calculated with the parameters of the reconstruction system, there will be no distortions or chromatic aberrations in the central plane of the 3D object. The object at planes $z_2 \neq 0$, on the other hand, will be affected to some degree.
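A minimal sketch of this resampling step on a stand-in hologram is shown below: zero-padding the primary hologram refines the sampling of its FT, and cropping the FT back to the fixed display grid then realizes a digital magnification step (here β = 2); the array sizes are reduced placeholders, not the experimental ones.

```python
import numpy as np

# Digital magnification by cutting / zero-padding of the primary hologram and
# its FT: zero-pad the hologram (finer samples at x1), reconstruct, then crop
# the reconstruction to the fixed display grid. Stand-in sizes for brevity.
def pad_center(a, ny, nx):
    out = np.zeros((ny, nx), dtype=a.dtype)
    oy, ox = (ny - a.shape[0]) // 2, (nx - a.shape[1]) // 2
    out[oy:oy + a.shape[0], ox:ox + a.shape[1]] = a
    return out

def crop_center(a, ny, nx):
    oy, ox = (a.shape[0] - ny) // 2, (a.shape[1] - nx) // 2
    return a[oy:oy + ny, ox:ox + nx]

ft = lambda a: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))

holo = np.exp(2j * np.pi * np.random.rand(512, 512))   # stand-in primary hologram
beta = 2                                               # digital magnification step
image = crop_center(ft(pad_center(holo, 512 * beta, 512 * beta)), 512, 512)
print(image.shape)   # same grid as before, object enlarged by beta on the display
```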

Let us consider reconstruction of an image from the secondary hologram plane in Fig. 1 with an arbitrary wavelength $\lambda_2$, transverse magnification $m_D$, and reconstruction distance $R_2 \neq R_1$. In this general case the reconstructed image at $z_2$ is related to the object field at $z_1$ via:

$$ O_2(x_2,z_2) = O_1\!\left(M^{-1}x_2,\,D_z^{-1}z_2\right)\exp\left\{\frac{i\pi\kappa\,m_D x_2^2}{\lambda_2 D_z}\right\}, \tag{12} $$
where
$$ \kappa = \frac{1}{m_D^2 R_1}-\frac{\lambda_1}{\lambda_2 R_2}, \tag{13} $$
$$ M = m_D\left[1+z_1 m_D^2\kappa\right]^{-1}, \tag{14} $$
$$ D_z = \frac{z_2}{z_1} = \frac{\lambda_1}{\lambda_2}m_D M. \tag{15} $$
Here M is the transverse magnification, $D_z$ indicates the axial location of a peripheral plane of the reconstructed object, while κ is the curvature of the spherical distortion due to the differences between the reconstruction and capture systems. Equation (12) gives the 3D correspondence between the object and image waves for an arbitrary imaging case of the object wave $O_1(x_1, z_1)$. It provides the axial deformation, transverse magnification, and phase distortion obtained in the entire reconstruction volume. The results agree with [36]; however, here the complete relation between the complex fields in 3D is given. For illustration let us consider three cases: 1st – reconstruction of the object in 1:1 scale, 2nd – axial translation of the reconstructed object, and 3rd – axial translation and transverse magnification.

The first case considers only the wavelength mismatch. The obtained distortions are shown as plots of the volumetric axial deformations, Fig. 2(a), and transverse magnification, Fig. 2(b), for $R_2 = R_1$ = 1 m, $m_D$ = 1, with object depths −200 mm < $z_1$ < 200 mm. The largest distortions are obtained for channel G, which is characterized by the largest wavelength mismatch. For this channel the object plane captured at −1.2 m will be reconstructed at −1.208 m and with a small magnification M = 1.007. The distortions are certainly small but deserve a comment. One can notice that the complex amplitude distributions at the corresponding planes $x_1$, $x_2$ are unequal, i.e. $O_2(x_2) \neq O_1(x_1)$. This is a consequence of the reconstruction of the CSBP representation $O_c(x_1,R_1)$ at $x_2$, which, when reconstructed with $\lambda_2$, equals $O_c(x_2,\lambda_1\lambda_2^{-1}R_1)$. Since the change in λ affects the original field, while the reconstruction distance is defined by $R_2$, the reconstructed field $O_2(x_2)$ is effectively deformed by a small phase distortion with curvature $\kappa = (\lambda_2-\lambda_1)\lambda_2^{-1}R_1^{-1}$.
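The quoted G-channel values follow directly from Eqs. (13)-(15); the snippet below reproduces them, with z1 taken as the 0.2 m offset of the quoted plane from the central plane (a coordinate convention assumed here for illustration).

```python
# Check of the G-channel distortion numbers from Eqs. (13)-(15):
# R1 = R2 = 1 m, mD = 1, capture/display wavelengths 532/515 nm.
lam1, lam2 = 532e-9, 515e-9
R1 = R2 = 1.0
mD, z1 = 1.0, 0.2                  # z1: offset of the plane quoted at -1.2 m
kappa = 1.0 / (mD**2 * R1) - lam1 / (lam2 * R2)      # Eq. (13) -> -0.033 1/m
M = mD / (1.0 + z1 * mD**2 * kappa)                  # Eq. (14) -> ~1.007
Dz = (lam1 / lam2) * mD * M                          # Eq. (15)
print(f"kappa = {kappa:.4f} 1/m, M = {M:.4f}, Dz*z1 = {Dz * z1:.4f} m")
# -> the plane captured at -1.2 m is reconstructed near -(1 + 0.208) = -1.208 m
```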

Fig. 2 Axial deformation Dz and transverse magnification M for three cases of imaging: a), b) R2 = R1, mD = 1, c), d) axial shift to R2 = 2R1, mD = 1, and e), f) axial shift to R2 = 2R1 and magnification mD = 2.

The second example considers an axial shift of the reconstructed object to $R_2 = 2R_1$, while keeping $m_D$ = 1. The volumetric distortions illustrated in Figs. 2(c) and 2(d) are small. The distortions are larger than in the previous case; they are mainly determined by the mismatch of the curvature radii. The third example illustrates the deformations obtained when imaging with large magnification. The values $R_2 = 2R_1$ and $m_D$ = 2 are selected to preserve the angular size of the reconstructed object. The obtained deformations are plotted in Figs. 2(e) and 2(f); in the analyzed volume all axial, chromatic and transverse distortions are noticeable. The majority of the distortions are due to $m_D \neq 1$.

5. Realization of the Fourier holographic imaging system

This Section gives details on the experimental realization and numerical implementation of our Fourier holographic imaging. The system consists of capture and display setups, which are shown in Fig. 3. The full-color imaging is obtained through superposition of three monochrome holograms corresponding to the chosen RGB colors through time multiplexing.

Fig. 3 a) Synthetic aperture Fourier capture setup, and b) Fourier holographic display based on VW and LED source.

Let us first analyze the lensless Fourier DH capture setup, where we use RGB lasers with wavelengths λR1 = 637 nm, λG1 = 532 nm, λB1 = 457 nm. In the primary hologram plane the CCD camera records the interference of the object wave with the spherical reference wave. The two separate beams are formed after the beam splitter. The half-wave plate controls the contrast of the fringe pattern. The reference point source is formed with an aspheric lens illuminated with a plane wave. The beam collimation is realized with a pinhole situated in the focal plane of the lens C (focal length Fc = 300 mm). The second part of the working beam hits a glass diffuser providing uniform illumination of the object scene. The 3D test object is the “Lowiczanka” color figurine placed at a distance of 1060 mm from the hologram plane. Our object is 120 mm high, 60 mm wide and approximately 50 mm deep. The primary holograms are captured on a Basler CCD camera (2448 × 2050, 3.45 µm pixel pitch) mounted on a motorized stage with 200 mm of horizontal scanning range. For the blue wavelength we have aFoV1 = 7.6° in X, which is the vertical axis. Due to the off-axis geometry, in the Y direction aFoV1 is twice smaller. Consequently, for the chosen distance of 1060 mm, the lateral size of the 3D object can reach 70 × 140 mm. Here we use a 1D synthetic aperture hologram of size 2540 × 60000 pixels, where a set of holograms was registered for different positions of the CCD in the Y direction. This gives access to an extended and continuous range of horizontal perspectives encoded in the synthetic aperture hologram.

In our work the laser digital holograms of real 3D objects are optically reconstructed in the display with RGB LED sources (Doric LEDs, center wavelengths λR2 = 635 nm, λG2 = 515 nm, λB2 = 465 nm, fiber core 960 µm). The design of the holographic display system presented in Fig. 3(b) is also based on the Fourier geometry. The name Fourier is used because the reading wave is a spherical converging wave that matches the geometry of the capture system, and the display SLM is addressed with a diffracted object wave without this spherical phase factor. The sphericity of the reading wave is added by the lens Lf, while the SLM is illuminated with a collimated beam generated by the LED sources located in the back focal plane of the collimator lens (Fc = 400 mm). Thus, the display setup can be divided into two parts realizing two separate operations: (i) complex coding of the CSBP representation, and (ii) conversion to the Fourier geometry. The first part is realized with lenses L1 (F1 = 100 mm) and L2 (F2 = 600 mm) creating a 4F afocal imaging system, which conjugates the SLM (Full-HD SLM, Holoeye 1080P, 1920 × 1080 pixels, pixel pitch 8 µm, refresh frame rate 60 Hz) with the 3D hologram reconstruction volume with magnification m = 6, and with a vertical absorbing cut-off filter in the Fourier plane of the SLM. The filter enables the complex field encoding from a phase only hologram [31]. It is also responsible for the reduced resolution in the y direction. The second part relies on the additional lens Lf (Ff = 600 mm) at the output of the setup, which focuses the reconstructed object beam into the observer’s eye. In order to reconstruct the primary digital hologram of a real 3D object at arbitrary R2, the SLM is addressed with the secondary hologram carrying the corresponding diffraction field in the CSBP representation Oc(ξ2,Ff).

The size of the reconstructed object is limited by the angular field of view aFoV2. In our setup aFoV2 = 8.8° > aFoV1 (Eq. (3)), hence color imaging of the full captured object with unitary magnification is possible. The off-axis geometry of the capture gives a smaller image in the y direction. Thus, the full-HD SLM is aligned vertically in the display, i.e. with its 1920-pixel dimension in the x direction. Consequently, for this direction we have aFoV2 = 8.8°. In this way the off-axis geometry does not reduce the capabilities of the display and we are able to reconstruct the object in a size very close to its natural 3D dimensions. For this we first reconstruct the hologram from its 2160 × 1920 pixels with Eq. (2) and take the upright image (left part of the reconstruction), then we propagate it over the distance R2 - Ff using the CFP technique developed in Sec. 3, where R2 = R1. As a result, the CSBP distribution of the secondary hologram Oc(ξ2,Ff) is obtained with the resolution of the plane ξ2. This discussion is correct for the B channel. For the remaining λn the hologram is numerically reconstructed from λn/λB (1080 × 1920) pixels, and then the upright image is cropped to 1080 × 1920. In this way the pixel size of the numerical reconstructions at x1 is the same for all wavelengths. To reconstruct the hologram at an arbitrary axial position R2 and magnification mD, the required change of the pixel pitch has to include the physical change of resolution in the display geometry that is accounted for by the CFP method. At a given distance R2 = αR1 the pixel pitch is magnified by the factor α, hence mD = αβ. Here, β is the part of the digital magnification that is realized through cropping of the hologram. For instance, consider image reconstruction for R2 = 2R1 and mD = 4. In this case β = 2. Thus, the hologram has to be first reconstructed from (2 × 1080) × (2 × 1920) samples and then cropped to 1080 × 1920.
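The display-side numbers quoted in this Section follow from Eqs. (3)-(4); a quick check:

```python
import numpy as np

# Sanity check of the quoted capture/display parameters via Eqs. (3)-(4).
lam_B1, du1 = 457e-9, 3.45e-6            # blue capture wavelength, CCD pitch
aFoV1 = np.degrees(lam_B1 / du1)         # Eq. (3): ~7.6 deg (small-angle)
dxi2 = 8e-6 * 6                          # SLM pitch magnified by m = 6
B_xi2 = 1920 * dxi2                      # magnified SLM width (~92 mm)
Ff = 0.6                                 # field-lens focal length [m]
aFoV2 = np.degrees(B_xi2 / Ff)           # Eq. (3): ~8.8 deg
lam_G2 = 515e-9                          # green display wavelength
B_VW = lam_G2 * Ff / dxi2                # Eq. (4): viewing-window size (~6.4 mm)
print(f"aFoV1 = {aFoV1:.1f} deg, aFoV2 = {aFoV2:.1f} deg, B_VW = {B_VW * 1e3:.1f} mm")
```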

In our implementation of the system the color imaging is achieved through the time multiplexing technique, as in our experimental verification we put emphasis on the highest possible quality of the reconstructed holograms. In the display the RGB LED sources are synchronized with the SLM using the SLM synchronization signal. The SLM works at 60 Hz, thus a full-color frame is obtained at a frequency of 20 Hz. It is also possible to obtain full-color imaging from a single frame when implementing the Frequency Division Method (FDM) [31], but with a decreased resolution. The capture system provides R, G, B synthetic aperture data. Thus, moving across this large hologram it is possible to reconstruct 3D objects with continuously changing perspective.

6. Coherence analysis of an LED-driven display

Here, the influence of spatially partially coherent illumination on the image resolution is studied. It is assumed that the display SLM is capable of complex coding by itself and that T(χ) is the distribution of the complex wavefield given by this SLM at the plane χ (Fig. 3). Let the SLM be illuminated by a statistically stationary wave that is fully characterized by a spatial coherence degree g, which is related by FT to the source intensity S as $g(\chi') = \int S(\nu)\exp\{2\pi i \chi'\nu/\lambda F_c\}\,d\nu$. Then, using the difference variables $\chi' = \chi_2 - \chi_1$ and $\bar\chi = \tfrac{1}{2}(\chi_1+\chi_2)$, the mutual intensity at plane χ is $J(\bar\chi,\chi') = g(\chi')\,T(\bar\chi+\chi'/2)\,T^*(\bar\chi-\chi'/2)$.

Let us denote by I(x2) the intensity of the partially coherent image reconstructed at a distance d = Ff - R2 from the secondary hologram plane (Fig. 1(b)). A negative value of d is the case of virtual imaging, while a positive value corresponds to real imaging. Determination of I(x2) requires introduction of two physical elements of the display system of Fig. 3(b), that is (i) the afocal conjugation between χ and ξ2 by ξ2 = mχ, where m = F2/F1, and (ii) the field lens with focal length Ff. This gives the observed intensity image I(x2) as:

$$ I(x_2) = \iint J\!\left(\frac{\bar{\xi}_2}{m},\frac{\xi_2'}{m}\right)\exp\left\{-\frac{i2\pi\bar{\xi}_2\xi_2'}{\lambda}\left(\frac{1}{z_2}-\frac{1}{F_f}\right)\right\}\exp\left\{\frac{i2\pi x_2\xi_2'}{\lambda z_2}\right\}d\bar{\xi}_2\,d\xi_2'. \tag{16} $$
Since the SLM displays the CSBP representation of the object wave $O_c(m\xi_2,m^{-2}F_f)$, we have $T(\xi_2/m) = O_c(\xi_2,F_f)$. After introducing $T(\xi_2/m) = O(\xi_2)\exp\{i\pi\xi_2^2/\lambda F_f\}$, Eq. (16) simplifies to:
$$ I(x_2) = \iint g\!\left(\frac{\xi_2'}{m}\right)O\!\left(\bar{\xi}_2-\frac{\xi_2'}{2}\right)O^*\!\left(\bar{\xi}_2+\frac{\xi_2'}{2}\right)\times\exp\left\{-\frac{i2\pi\bar{\xi}_2\xi_2'}{\lambda z_2}\right\}\exp\left\{\frac{i2\pi x_2\xi_2'}{\lambda z_2}\right\}d\bar{\xi}_2\,d\xi_2'. \tag{17} $$
A more intuitive relation is obtained when using $\tilde{I}_{coh}$, which is the FT of the intensity image $I_{coh}(x_2)$ obtained with coherent illumination:
$$ I(x_2) = \int g\!\left(\frac{\xi_2'}{m}\right)\tilde{I}_{coh}\!\left(\frac{\xi_2'}{\lambda z_2}\right)\exp\left\{\frac{i2\pi x_2\xi_2'}{\lambda z_2}\right\}d\xi_2' = \tilde{g}\!\left(\frac{m x_2}{\lambda z_2}\right)\otimes I_{coh}(x_2), \tag{18} $$
where $\tilde{g}$ is the FT of the coherence degree g. Two immediate conclusions follow from this result. First, Ff has no effect on the image quality; its role is limited to the coherent space-frequency scaling, as discussed in Sec. 3. Second, the coherence degree of the SLM illumination is scaled due to the magnification m and the image distance d.
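Equation (18) says the partially coherent image is the coherent intensity blurred by a scaled FT of the coherence degree; the 1D toy example below illustrates this with an assumed Gaussian $\tilde{g}$ (the kernel width is illustrative, not a measured value).

```python
import numpy as np

# Toy illustration of Eq. (18): blur a coherent fringe image with a Gaussian
# stand-in for the scaled g-tilde kernel. Widths are illustrative only.
x = np.linspace(-2e-3, 2e-3, 4001)                    # image coordinate [m], 1 um step
I_coh = 0.5 + 0.5 * np.cos(2 * np.pi * x / 100e-6)    # 100-um fringes as a test image
sigma = 50e-6                                         # assumed width of scaled g-tilde
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()                                # normalized blur kernel
I_pc = np.convolve(I_coh, kernel, mode="same")        # Eq. (18): g-tilde (*) I_coh
mid = slice(1000, 3001)                               # avoid edge effects of 'same'
print(f"fringe contrast: coherent {np.ptp(I_coh[mid]):.3f} -> "
      f"partially coherent {np.ptp(I_pc[mid]):.3f}")
```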

Figure 4 presents a geometrical interpretation of the partial coherence effect. In this illustration the limited coherence is represented by the coherence area Ac at the plane ξ2, where Ac = mAc0, with Ac0 representing the coherence area of the field actually illuminating the SLM plane, e.g. taking the distance between the first zeros of the mutual intensity. For quantification of the reduction of image quality let us introduce two angles: αA, the angular size of Ac, and αVW, the angular size of the VW, both measured from the axial image point P at d. The quantity αA expresses the maximum bandwidth of a spherical wave emitted from point P under the conditions imposed by the given coherence area AC. Hence αA can be regarded as a factor which limits the imaging resolution due to the source coherence. Equivalently, αVW can be regarded as the maximum bandwidth of the spherical wave in the coherent case. Thus, αVW represents the maximum resolution that can be generated by the resolution of the SLM. In Fig. 4(a) three cases of imaging are illustrated by the points PC, PPC and Pclim for virtual imaging. The point Pclim, which is obtained for αA = αVW, represents the limiting case of coherent imaging, because an image of finer resolution cannot be reconstructed. Thus, the point PC (αA > αVW) will be reconstructed at full resolution, while for PPC (αA < αVW) there will be a drop of resolution due to partial coherence. The axial coordinate dclim of Pclim is given by:

$$ d_{clim}^{\pm} = \frac{F_f C}{1\pm C}, \tag{19} $$
where + and − correspond to the limits of real and virtual imaging, respectively, and C = AC/BVW is a coherence parameter.

Fig. 4 a) Geometrical interpretation of the effect of partially coherent imaging as an effect of source coherence area Ac; b) limits of reconstruction depth for partially coherent imaging at maximum resolution; c) resolution drop as a function of d for C = 0.15, 0.4 (blue curve indicates experimental setup); parameters for calculations: λ = 0.515 μm, Ff = 600 mm.

Suppose now that a reconstructed image is perceived by an observer. Let us take an example of day vision with pupil size Beye = 4 mm and consider two cases, one for BVW > Beye (our experimental case), and a second with BVW = Beye. The second case could be obtained in our display by increasing the afocal magnification by 1.61. In the first case not all details of the reconstructed image will be perceived, even with laser light. The second case optimizes the visual performance of the display and improves the coherence parameter C by 1.61².

In Figs. 4(b) and 4(c) the coherence effect on the resolution of the 3D image is investigated. Figure 4(b) visualizes the reconstruction depth $\Delta d = d_{clim}^{+} + d_{clim}^{-}$ as a function of the coherence parameter C for which the viewer will see holographic images with all details. In Fig. 4(b) two cases, C = 0.15 and C = 0.4, are additionally marked. The case of C = 0.15 describes the coherence effect in the display for the selected wavelength λ = 0.515 μm, where AC = 0.97 mm.

The value of C = 0.4 defines an optimized system with the afocal magnification increased by 1.61. In Fig. 4(b) we have Δd = 184 mm and 571 mm for C = 0.15 and 0.4, respectively. Both cases are further investigated in Fig. 4(c), which illustrates how the image resolution, defined here as αA/αVW, falls with increasing distance d of the image from the field lens. Notably, imaging with an intentional decrease of resolution by limited coherence (e.g. αA/αVW = 0.5) may be profitable due to further speckle reduction.
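The quoted depth limits follow from Eq. (19); the loop below reproduces the Δd values for both coherence parameters.

```python
# Reconstruction-depth limits of Eq. (19) for the two quoted cases.
Ff = 600.0                                  # field-lens focal length [mm]
for C in (0.15, 0.40):                      # coherence parameter C = A_C / B_VW
    d_plus = Ff * C / (1 + C)               # real-image limit d+_clim
    d_minus = Ff * C / (1 - C)              # virtual-image limit d-_clim
    print(f"C = {C:.2f}: d+ = {d_plus:.0f} mm, d- = {d_minus:.0f} mm, "
          f"depth = {d_plus + d_minus:.0f} mm")
# -> ~184 mm for C = 0.15 and ~571 mm for C = 0.40, as quoted for Fig. 4(b)
```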

7. Experimental results

Here, the proposed two-step Fourier imaging is experimentally investigated in two parts. The first gives evidence of high quality imaging of a real 3D color object. The second shows that the imaging path enables geometrical manipulation of the reconstructed image and proves that high quality imaging is preserved for large object depths: up to 1.5 m for the DH of a real object and up to 10 m for a CGH calculated from a 3D cloud of points, using LEDs in the display step. All images presented in this Section were captured with a zoom lens camera (Canon EOS 5D Mark II) placed in the VW plane.

In the first experiment the optical reconstruction of the 3D color object with its original dimensions is realized. A lensless Fourier hologram of the “Lowiczanka” figurine recorded at distance R1 = 1.06 m was used as input. The captured data was then reconstructed at distance R2 = R1 to ensure imaging with unitary magnification, leading to a close 1:1 copy of the object. Since our Fourier imaging system is characterized by aFoV2 > aFoV1, the full object could be displayed. Figure 5 presents images of the obtained 3D reconstruction. In Figs. 5(b) and 5(c) the camera is focused on different planes (jacket and skirt), which are separated in depth by approximately 18 mm. We enlarge those regions to illustrate the detailed reconstruction of the object. In Fig. 5(b) a hand, a colorful bracelet and the flowers on the jacket are sharp, while in Fig. 5(c) even the stitch pattern of the skirt is visible. Only one of the two regions is clear in each picture, therefore our imaging technique is precise enough to distinguish between the two planes. Also, the skirt is visibly closer to the observer than the jacket, therefore the 3D image is orthoscopic, as if it were a real object viewed directly. This feature is not provided in other works presenting a TV-like display of DHs, where only pseudoscopic images are reconstructed [14,16,18–21].

Fig. 5 a) Photo of the 3D “Lowiczanka” figurine and its optical reconstruction with the time multiplexing method in two focus positions: b) jacket, and c) skirt. See Visualization 1 illustrating the rotating figurine.

In an additional experiment we show different perspectives obtained from the same “Lowiczanka” hologram. The result is presented in Visualization 1. The SLM was addressed in time sequence with different parts of the synthetic aperture hologram corresponding to different transverse positions of the camera in the capture setup, which span object observation angles in the range of ±15°. Since the experimental results emphasize the imaging quality of the display, each color frame of the visualization was captured separately and merged into a movie.

The second experiment, documented in Fig. 6, illustrates that our Fourier holographic imaging enables reconstruction of 3D scenes in which the axial location and transverse magnification of the reconstructed image can be set freely. Additionally, it shows an unnoticeable loss of quality when reconstructing 3D scenes with large depth, here up to 2 meters. Since registration of a scene of this size on a single optical table would be a technically challenging task, the large-scene hologram (for one SLM) has been synthesized numerically from holograms of two objects captured separately. The first object was a “Rooster” figurine placed at R1 = 0.5 m from the CCD, and the second was the “Lowiczanka” figurine from before, placed at R1 = 1.06 m. Next, two holograms were synthesized with different R2 positions of the “Lowiczanka” object. In the first synthesized hologram the complex amplitudes displaying each object were calculated with R2 = R1 in each case and added together before addressing the SLM. The LED-driven reconstruction of the synthesized hologram was photographed in two focal positions corresponding to in-focus observation of each object. Figures 6(a) and 6(b) present the obtained images. Visualization 2 shows a continuous sweep of the focus position from one object to the other. To demonstrate that a 3D object can be reconstructed at an arbitrary axial location, the second synthesized hologram was generated from the same DH data. In this 3D scene the reconstruction distance of “Lowiczanka” was almost doubled (R2 = 2 m), while “Rooster” occupies the same axial location as previously. To keep the angular size of the figurine, the image magnification was set to mD = 2. The result is shown in Fig. 6(c). It can be noticed that there is no visible change in reconstruction quality when comparing to Fig. 6(b). In terms of imaging resolution, this corresponds to a drop of αA/αVW by a factor of 0.81 (see the marked points on the blue curve in Fig. 4(c)). Finally, to investigate the possibility of transverse magnification of the reconstructed image, the hologram from Fig. 6(c) was magnified by a factor mD = 4. The results presented in Fig. 6(d) show that fine details can be reconstructed with high quality.

Fig. 6 Optical reconstruction of a synthesized DH including a pair of 3D scenes with the “Rooster” and “Lowiczanka” figurines captured in different focus positions: a) “Lowiczanka” 1.06 m, b) “Rooster” 0.5 m, c) “Lowiczanka” 2 m (“Rooster” 0.5 m), d) “Lowiczanka” 2 m (“Rooster” 0.5 m) digitally magnified by factor mD = 4. See Visualization 2 illustrating the change of focus between “Rooster” and “Lowiczanka”.

To fully present the capabilities of the display, a CGH calculated from a 3D cloud of points representing a computer graphic of a dog was used as holographic content. The dotted structure makes it a good test for visualization of the imaging quality. The hologram displayed on the SLM was generated using the phase-added stereogram method [37]. Figures 7(a)-7(d) illustrate holographic reconstructions of the CGH designed and captured for different reconstruction distances: 600 mm, 1 m, 2 m, and 10 m. The used 3D model consists of 7000 points and for R2 = 600 mm has dimensions: width 50 mm, height 70 mm, and depth 50 mm. For the other values of R2 the 3D object was rescaled to keep the same angular size. The camera is focused on the right ear of the 3D model; this part of the image is enlarged. It can be noticed that the size of a single point grows with the reconstruction distance. The largest difference is obtained between 600 mm and 1 m. However, the blur of the points is small in all the images, even for 10 m, and the quality of the reconstructed CGH is high. The theory outlined in Sec. 6 supports this observation. For example, shifting the reconstruction distance from 2 m to 10 m involves a drop of the imaging resolution, expressed as αA/αVW, by a factor of 0.76.

Fig. 7 Optical reconstruction of the CGH calculated from a 3D cloud of points with enlarged in-focus region, captured at different reconstruction distances: a) 600 mm, b) 1 m, c) 2 m, d) 10 m.
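For context, a brute-force point-cloud hologram in the CSBP convention can be sketched as below: each point contributes a paraxial spherical wavelet and the carrier of radius R2 is removed. The phase-added stereogram of [37] computes an equivalent field far more efficiently; this naive summation only illustrates the kind of content being displayed (grid size and point count are reduced placeholders).

```python
import numpy as np

# Naive point-cloud CGH sketch in the CSBP convention: sum paraxial point
# wavelets and subtract the spherical carrier of radius R2. Reduced sizes;
# the phase-added stereogram of [37] is the efficient alternative.
rng = np.random.default_rng(0)
lam, R2 = 515e-9, 0.6                                # display wavelength [m], distance [m]
N, pitch = 512, 48e-6                                # stand-in hologram grid
u = (np.arange(N) - N // 2) * pitch
UX, UY = np.meshgrid(u, u)

points = rng.uniform(-0.01, 0.01, size=(200, 3))     # 200 points in a +-10 mm cube
carrier = np.pi / lam * (UX**2 + UY**2) / R2         # carrier phase of radius R2
holo = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = R2 + pz                                      # distance of the point's plane
    wavelet = np.pi / lam * ((UX - px)**2 + (UY - py)**2) / r
    holo += np.exp(1j * (wavelet - carrier + 2 * np.pi * rng.random()))
print(holo.shape)
```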

8. Conclusions

This paper presents an end-to-end Fourier holography solution for high quality full-color 3D orthoscopic imaging of real world objects, where the axial position and magnification of the image can be set freely without additional computational load. The imaging technique is referred to as Fourier holography since the FT relation between the distributions at the object and hologram planes is preserved in capture, numerical processing and optical reconstruction.

The proposed 3D imaging concept is based on holographic recording and reconstruction of a converging object beam. The reconstructed object wave is backpropagated directly to the observer’s eye placed at the VW, which is an image of the capture CCD. The capture step is realized in a laser-based lensless Fourier DH system, while the LED-based display system relies on a flat SLM illuminated by collimated waves, a magnifying telescopic system and a field lens, which brings back the natural convergence of the object field. The SLM is addressed with the complex amplitude taken from an arbitrarily defocused plane with its spherical phase factor removed, thus the correct 3D depth is maintained in the reconstruction.

The coupling between the capture and display systems is carried out in a flexible way by means of an original numerical method, named CFP, within which the carriers of the propagated beams are confocal. The CFP routine propagates the object field without the need for sampling the spherical carriers, which assures high accuracy and efficiency of the solution. The size and resolution of the result are scaled according to the ratio of the radii of the spherical carriers. It means that the Fourier digital holographic content can be reconstructed in the display at an arbitrary location with the same pixel count and without loss of quality.

The complete processing framework provides an easy way for image magnification, which also plays an important role in axial relocation of the 3D image as well as in synchronization of the dimensions of the RGB channels. In this case the input for CFP needs to be linearly rescaled, here realized through hologram cropping or zero-padding. When it comes to full-color imaging, the processing allows for chromatic aberration free reconstruction of a selected plane of the 3D object, which is also free of distortions. As a special case, this work explores the possibility of 3D imaging of large undistorted objects with 1:1 magnification, theoretically and experimentally. The theory shows that the parasitic 3D magnifications caused by the wavelength mismatch between the capture and display systems are negligible.

Additionally, the proposed display configuration is capable of high quality reconstruction of large 3D color objects using LED illumination. We show experimentally that despite the limited spatial coherence of the light source, high image quality is preserved across large object depths, here up to 10 meters. The results are supported by a theoretical analysis of the influence of partially coherent illumination on image resolution. The study shows that in our present implementation of the system, for the reconstruction distance of 10 m the observed resolution drops down to 0.164 of the maximal value. This value can be raised to 0.425 simply by increasing the afocal magnification in the system by a factor of 1.61, which brings the VW size down to a typical diameter of the eye pupil.

Funding

National Science Centre, Poland, HoloTomo4D (2015/17/B/ST8/02220); Cross-Ministry Giga KOREA Project, Ministry of Science, ICT and Future Planning, Korea (GigaKOREA GK18D0100); Warsaw University of Technology (statutory funds).

Acknowledgment

We acknowledge the experimental support of Anna Gołoś and Weronika Zaperty of Warsaw University of Technology.

References and links

1. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013).
2. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015).
3. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015).
4. H. Kang, E. Stoykova, and H. Yoshikawa, “Fast phase-added stereogram algorithm for generation of photorealistic 3D content,” Appl. Opt. 55(3), A135–A143 (2016).
5. H. Araki, N. Takada, S. Ikawa, H. Niwase, Y. Maeda, M. Fujiwara, H. Nakayama, M. Oikawa, T. Kakue, T. Shimobaba, and T. Ito, “Fast time-division color electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator,” Chin. Opt. Lett. 15(12), 120902 (2017).
6. Y. Sando, D. Barada, and T. Yatagai, “Holographic 3D display observable for multiple simultaneous viewers from all horizontal directions by using a time division method,” Opt. Lett. 39(19), 5555–5557 (2014).
7. G. Li, D. Lee, Y. Jeong, and B. Lee, “Holographic display for augmented reality using holographic optical element,” Proc. SPIE 9770, 97700D (2016).
8. J. S. Chen, Q. Y. Smithwick, and D. P. Chu, “Coarse integral holography approach for real 3D color video displays,” Opt. Express 24(6), 6705–6718 (2016).
9. Y. Lim, K. Hong, H. Kim, H. E. Kim, E. Y. Chang, S. Lee, T. Kim, J. Nam, H. G. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24(22), 24999–25009 (2016).
10. S. F. Lin and E. S. Kim, “Single SLM full-color holographic 3-D display based on sampling and selective frequency-filtering methods,” Opt. Express 25(10), 11389–11404 (2017).
11. T. Kreis, “3-D display by referenceless phase holography,” IEEE Trans. Industr. Inform. 12(2), 685–693 (2016).
12. H. Sasaki, K. Yamamoto, Y. Ichihashi, and T. Senoh, “Image size scalable full-parallax coloured three-dimensional video by electronic holography,” Sci. Rep. 4(1), 4000 (2014).
13. H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, and T. Senoh, “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4(1), 6177 (2014).
14. M. Paturzo, P. Memmolo, A. Finizio, R. Näsänen, T. J. Naughton, and P. Ferraro, “Synthesis and display of dynamic holographic 3D scenes with real-world objects,” Opt. Express 18(9), 8806–8815 (2010).
15. P. Memmolo, V. Bianco, M. Paturzo, B. Javidi, P. A. Netti, and P. Ferraro, “Encoding multiple holograms for speckle-noise reduction in optical display,” Opt. Express 22(21), 25768–25775 (2014).
16. D. P. Kelly, D. S. Monaghan, N. Pandey, T. Kozacki, A. Michałkiewicz, G. Finke, B. M. Hennelly, and M. Kujawinska, “Digital holographic capture and optoelectronic reconstruction for 3-D displays,” Int. J. Digit. Multimed. Broadcast. 2010, 759329 (2010).
17. R. Kelner and J. Rosen, “Spatially incoherent single channel digital Fourier holography,” Opt. Lett. 37(17), 3723–3725 (2012).
18. M. Kujawinska, T. Kozacki, C. Falldorf, T. Meeser, B. M. Hennelly, P. Garbat, W. Zaperty, M. Niemelä, G. Finke, M. Kowiel, and T. Naughton, “Multiwavefront digital holographic television,” Opt. Express 22(3), 2324–2336 (2014).
19. P. Makowski, T. Kozacki, P. Zdankowski, and W. Zaperty, “Synthetic aperture Fourier holography for wide-angle holographic display of real scenes,” Appl. Opt. 54(12), 3658–3665 (2015).
20. T. Kozacki, M. Kujawińska, G. Finke, W. Zaperty, and B. Hennelly, “Holographic capture and display systems in circular configurations,” J. Disp. Technol. 8(4), 225–232 (2012).
21. E. Stoykova, F. Yaraş, H. Kang, L. Onural, A. Geltrude, M. Locatelli, M. Paturzo, A. Pelagotti, R. Meucci, and P. Ferraro, “Visible reconstruction by a circular holographic display from digital holograms recorded under infrared illumination,” Opt. Lett. 37(15), 3120–3122 (2012).
22. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008).
23. A. Sugita, K. Sato, M. Morimoto, and K. Fujii, “Full-color holographic display and recording of 3D images,” Proc. SPIE 5742, 130–139 (2005).
24. P. L. Makowski, T. Kozacki, and W. Zaperty, “Orthoscopic real-image display of digital holograms,” Opt. Lett. 42(19), 3932–3935 (2017).
25. G. W. Stroke, “Lensless Fourier-transform method for optical holography,” Appl. Phys. Lett. 6(10), 201–203 (1965).
26. R. Häussler, Y. Gritsai, E. Zschau, R. Missbach, H. Sahm, M. Stock, and H. Stolle, “Large real-time holographic 3D displays: enabling components and results,” Appl. Opt. 56(13), F45–F52 (2017).
27. T. Kozacki and K. Falaggis, “Angular spectrum method with compact space-bandwidth: generalization and full-field accuracy,” Appl. Opt. 55(19), 5014–5024 (2016).
28. D. Claus, D. Iliescu, and P. Bryanston-Cross, “Quantitative space-bandwidth product analysis in digital holography,” Appl. Opt. 50(34), H116–H127 (2011).
29. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009).
30. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014).
31. T. Kozacki and M. Chlipala, “Color holographic display with white light LED source and single phase only SLM,” Opt. Express 24(3), 2189–2199 (2016).
32. H. Araki, N. Takada, H. Niwase, S. Ikawa, M. Fujiwara, H. Nakayama, T. Kakue, T. Shimobaba, and T. Ito, “Real-time time-division color electroholography using a single GPU and a USB module for synchronizing reference light,” Appl. Opt. 54(34), 10029–10034 (2015).
33. Y. S. Kim, T. Kim, S. S. Woo, H. Kang, T. C. Poon, and C. Zhou, “Digital holographic recording of a diffusely reflecting object without speckle noise,” in Fringe 2013 (Springer, 2014), pp. 803–808.
34. W. Singer, M. Totzeck, and H. Gross, Handbook of Optical Systems (Wiley, 2005), p. 340.
35. X. Li, J. Liu, J. Jia, Y. Pan, and Y. Wang, “3D dynamic holographic display by modulating complex amplitude experimentally,” Opt. Express 21(18), 20577–20587 (2013).
36. R. Meier, “Magnification and third-order aberrations in holography,” J. Opt. Soc. Am. 55(8), 987–992 (1965).
37. H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. 47(19), D44–D54 (2008).

Supplementary Material (2)

Visualization 1: Optical holographic reconstruction of a real 3D scene. High quality reconstruction of the rotated object.
Visualization 2: Optical holographic reconstruction of a real 3D scene. Visualization of the large reconstructed depth.
