Optica Publishing Group

Generation of distortion-free scaled holograms using light field data conversion

Open Access

Abstract

A novel method to convert holograms for scaled three-dimensional image reconstructions is proposed. Conventional methods that directly scale the hologram itself or simulate optical imaging share a limitation: the axial magnification of the reconstruction differs from the transverse magnification, causing distortion. In order to achieve equal transverse and axial magnifications, the proposed method performs the scaling in the light field data domain. The proposed method first extracts the light field from a hologram and applies the scaling to the light field data. The modified light field is then used to synthesize a new hologram. For the transformation between the hologram domain and the light field domain, angular spectrum bandpass filtering and non-hogel based computer-generated hologram techniques are utilized. The linear nature of the light field data ensures distortion-free scaling with the same transverse and axial magnification of the reconstruction. The proposed method is verified by optical experiments.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holography is an effective technique that can provide depth information of three-dimensional (3D) objects or a 3D scene. In addition to the optical reconstruction of a 3D scene using holograms, editing and authoring of hologram data have also been studied recently. In order to provide useful services in various hologram application fields, hologram contents must be easily adapted and converted. A basic technology for this is the editing of a 3D scene by manipulating its hologram. Shift, rotation, and scaling of the 3D scene are essential elements of hologram data manipulation. In the case of shift and rotation of the 3D scene, the hologram data can be directly manipulated within its numerical aperture range. Scaling of the 3D scene, however, is not straightforward.

Several works have been reported on hologram scaling, including the shifted Fresnel diffraction [1–4], numerical imaging [5], and the subsampling technique. In the usual numerical implementation of the Fresnel diffraction, the sampling parameters in the observation plane are determined by the sampling parameters of the source plane and the propagation distance [6]. The shifted Fresnel diffraction technique allows arbitrary sampling parameters in the observation plane, achieving an effective magnification of the hologram and its reconstructions. The numerical imaging technique designs a virtual lens to magnify the optical field to a desired scaling ratio by adjusting the distances between the input plane, the virtual lens, and the image plane according to the lens formula. Finally, the subsampling technique directly scales the hologram data itself by changing the sampling pitch on the hologram plane.

Previous techniques work well with two-dimensional (2D) image reconstructions. For 3D image reconstructions, however, they suffer from distortions. Although they use different approaches, all the previous techniques effectively magnify the optical field itself, which results in an m² axial magnification of the reconstruction for an m lateral magnification. Therefore, previous techniques elongate or shorten the thickness of the 3D scene like usual simple lens imaging of 3D objects. A direct approach to obtain a constant magnification in both the lateral and the axial directions is to extract an accurate 3D model or depth map from the hologram, apply scaling to the 3D model, and then resynthesize the hologram from the scaled 3D model. However, to the authors’ best knowledge, exact and reliable 3D model extraction from a hologram has not been reported yet.

In this paper, we propose a method to scale the 3D images in existing holograms. In the proposed method, instead of extracting a 3D model from the hologram, light field data is extracted and processed for the 3D image scaling. The proposed method first extracts the light field, or orthographic view images, from the hologram by using bandpass filtering in the angular spectrum domain. Then, for the 3D image scaling, each orthographic view image is scaled with a desired magnification ratio. Finally, a new hologram is synthesized from the scaled orthographic view images using the non-hogel based computer-generated hologram (CGH) technique with a random phase carrier wave [7,8]. Due to the linear relationship between the orthographic view images and the 3D images, the synthesized hologram contains the distortion-free scaled 3D images with the same lateral and axial magnification ratio.

In the following sections, the 3D image distortion in the conventional hologram scaling is first introduced. Next the relationship between the orthographic view images and the corresponding 3D images is explained, revealing that simple scaling of each orthographic view image results in the distortion-free scaling of the corresponding 3D images. The steps of the proposed method are then explained. Finally, numerical and optical reconstruction results are presented for the verification of the proposed method.

2. Conventional method for hologram scaling

Conventional methods that magnify the 3D scene in the hologram include the subsampling of the hologram and the numerical imaging. In the subsampling method, an original hologram H(x,y) is scaled to be a new hologram H′(x,y)=H(x/m,y/m). Suppose that the original hologram contains the object wave emitted from a 3D object point P(xp,yp,zp) as given by

$$H(x,y) = \exp\left[-j\frac{k}{2z_p}\left\{(x-x_p)^2 + (y-y_p)^2\right\}\right],$$
where k is the wave number given by k=2π/λ. The new scaled hologram is then given by
$$H'(x,y) = H\left(\frac{x}{m},\frac{y}{m}\right) = \exp\left[-j\frac{k}{2m^2z_p}\left\{(x-mx_p)^2 + (y-my_p)^2\right\}\right].$$
Equation (2) indicates that the scaled hologram corresponds to a point source located at P′(mxp, myp, m²zp). The axial position of the point is scaled by m² while the transverse position is scaled by m, which results in the 3D scene distortion illustrated in Fig. 1.
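The equivalence stated by Eq. (2) can be checked numerically. The following sketch compares a subsampled 1D point-source hologram with the hologram of a point at the predicted scaled position; all parameter values are illustrative assumptions.

```python
import numpy as np

# Hedged numerical check of Eq. (2): subsampling a 1D point-source
# hologram is equivalent to moving the point to (m*xp, m^2*zp).
wavelength = 520e-9
k = 2 * np.pi / wavelength
xp, zp, m = 0.1e-3, 4e-3, 2.0            # assumed point position and scale

x = np.linspace(-1e-3, 1e-3, 2001)       # 1D hologram coordinate [m]

def point_hologram(x, xp, zp):
    # Point-source Fresnel phase, as in Eq. (1)
    return np.exp(-1j * k / (2 * zp) * (x - xp) ** 2)

H_scaled = point_hologram(x / m, xp, zp)          # H'(x) = H(x/m)
H_point = point_hologram(x, m * xp, m**2 * zp)    # point at (m*xp, m^2*zp)

max_err = np.max(np.abs(H_scaled - H_point))      # should be ~0
```

The two fields agree to floating-point precision, confirming the m² axial shift of the reconstructed point.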

Fig. 1. Subsampling method and 3D image distortion.

Figure 2 shows the numerical imaging method. The hologram optical field is numerically propagated to a lens, multiplied with the lens transmittance function, and propagated again to the image plane of the scene. The distance d1 between the original scene plane and the lens, and the distance d2 between the lens and the image plane, are adjusted to achieve a desired magnification ratio according to the lens formula 1/d1 + 1/d2 = 1/f, where f is the focal length of the lens. Like usual lens imaging, the numerical imaging also gives different magnifications along the transverse and axial directions, i.e.

$$M_{trans} = \frac{d_2}{d_1} = m, \qquad M_{axial} = \frac{\Delta d_2}{\Delta d_1} = \left(\frac{d_2}{d_1}\right)^2 = m^2.$$
Therefore, both the conventional subsampling method and the numerical imaging method scale the scene m times in the transverse direction but m² times in the axial direction, which distorts the 3D scene.
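Equation (3) can be verified with a short calculation: choose d1 and d2 on the lens formula for a target transverse magnification m and estimate the axial magnification by a finite difference. The parameter values below are illustrative assumptions.

```python
# Hedged check of Eq. (3). For d2 = m*d1 on the lens formula
# 1/d1 + 1/d2 = 1/f, one gets d1 = f*(m+1)/m and d2 = f*(m+1).
f, m = 0.15, 2.0                 # focal length [m], target magnification

d1 = f * (m + 1) / m
d2 = m * d1

M_trans = d2 / d1                # transverse magnification, = m

# Axial magnification: sensitivity of the image distance d2 to a small
# axial displacement of the object plane d1 (finite difference)
eps = 1e-9
d2_pert = 1.0 / (1.0 / f - 1.0 / (d1 + eps))
M_axial = abs((d2_pert - d2) / eps)   # magnitude of d(d2)/d(d1), = m^2
```

For f = 0.15 m and m = 2 this gives M_trans = 2 and M_axial = 4, matching Eq. (3).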

Fig. 2. Numerical imaging method and 3D image distortion.

3. Proposed method

The proposed method scales the 3D scene in the hologram using a mutual conversion between the hologram and the light field domains. The whole process is shown in Fig. 3. The following sub-sections describe the light field–hologram conversion and the light field scaling.

Fig. 3. Overall process of the proposed method.

3.1 Brief review of hologram and light field conversion

The proposed scaling technique is based on the conversion framework between the hologram and the light field. This conversion framework has been reported in our previous work [9] and is briefly reviewed in this sub-section. The first step of the framework is the extraction of the light field data from the hologram. The light field, or the spatio-angular distribution of the light rays, can be represented by a collection of orthographic views of the 3D scene observed from different directions. In the framework used in this paper, we extract each orthographic view by applying bandpass filtering to the hologram. The hologram, or the complex field of the 3D scene, is Fourier transformed, and the angular spectrum of the complex field is cropped by a rectangular aperture, or bandpass filter. The cropped angular spectrum is then inverse Fourier transformed, and its amplitude part is taken as the amplitude orthographic view.

Note that the amplitude of the resulting orthographic view is used in the proposed technique instead of its intensity due to the requirement of the non-hogel based CGH technique, which is used in the final step of the proposed method. The projection angle, or the observation direction of the 3D scene, is determined by the center spatial frequency of the bandpass filter. The bandwidth of the bandpass filter determines the effective resolution and the angular selectivity, which have a trade-off relationship. The bandwidth is empirically set to 1/4 of that of the original hologram in all simulations and experiments in this paper.
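As a rough illustration of this extraction step, the sketch below crops one rectangular passband from the angular spectrum of a toy hologram and takes the amplitude of the filtered field. The function name, grid size, and random-phase test hologram are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def extract_ortho_view(hologram, cx, cy, bw):
    """Amplitude orthographic view via angular-spectrum bandpass filtering.

    cx, cy: passband centre (pixel indices on the shifted FFT grid),
    bw:     passband width in pixels (e.g. a quarter of the full band).
    """
    N, M = hologram.shape
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    mask = np.zeros((N, M))
    mask[cy - bw // 2:cy + bw // 2, cx - bw // 2:cx + bw // 2] = 1.0
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.abs(filtered)   # amplitude view, as the non-hogel CGH requires

# Toy random-phase hologram; the passband centre selects the projection angle
rng = np.random.default_rng(0)
H = np.exp(1j * 2 * np.pi * rng.random((256, 256)))
view = extract_ortho_view(H, cx=128, cy=128, bw=64)   # central view, 1/4 band
```

Sliding (cx, cy) over the spectrum yields the array of views used in the later sections.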

The orthographic view images extracted from a hologram contain speckle noise. In the previous work [9], this speckle noise was suppressed by applying a deep learning technique to the extracted orthographic views. The de-noised orthographic views were then used to synthesize a new hologram with interleaved plane carrier waves for speckle-less optical reconstruction. In this paper, however, the new hologram synthesis is performed with a random phase carrier wave to achieve a shallow depth of field of the reconstruction. The random phase carrier wave inevitably accompanies speckle in the final optical reconstruction of the hologram, which makes speckle suppression in the intermediate step, i.e. in the extracted orthographic view images, not meaningful. In addition, image noise removal techniques like speckle suppression tend to blur the images while suppressing the noise, which reduces the resolution of the final optical reconstruction of the hologram. Therefore, in this paper, no speckle suppression technique is applied to the extracted orthographic views.

The final step of the conversion framework is to synthesize a new hologram from the orthographic views. Although many CGH techniques exist that synthesize a hologram from the light field, the non-hogel based CGH technique is used in the proposed method [7,8]. Unlike hogel-based CGH techniques, the non-hogel based CGH technique maintains the spatial resolution of the orthographic views while keeping the same pixel count as the original hologram before the conversion. Using this framework based on the bandpass filtering and the non-hogel based CGH, the mutual conversion between the hologram and the light field domains can be conducted. The proposed scaling of the 3D scene is performed in the light field domain during this conversion, as explained in the following sub-section.

3.2 Proposed scaling of 3D scene in light field domain

The light field data extracted from the hologram is the amplitude distribution of the views in the orthographic projection geometry. Figure 4 shows the geometry of the orthographic view and the surface of the 3D object. For an orthographic projection angle (θx, θy), an arbitrary point P(xp, yp, zp) on the surface of the 3D object is projected onto the position (xp − zpθx, yp − zpθy) in the orthographic view image, which is located at the z=0 plane. Suppose that the amplitude of the 3D object surface at a transverse position (x, y) is represented by f(x,y). The orthographic amplitude view $I_{\theta_x,\theta_y}^{ortho}(x,y)$ is related to f(x,y) by

$$f(x,y) = I_{\theta_x,\theta_y}^{ortho}\left(x - h(x,y)\theta_x,\ y - h(x,y)\theta_y\right),$$
where h(x,y) represents the depth of the 3D object surface at the transverse position (x, y), i.e. z = h(x,y).

Fig. 4. Geometry of the orthographic view and the surface of the 3D object.

Now suppose that the 3D object is scaled by m times with respect to the origin as shown in Fig. 5. The amplitude f′(x,y) and the depth h′(x,y) of the scaled 3D object are given by

$$f'(x,y) = f\left(\frac{x}{m},\frac{y}{m}\right), \qquad h'(x,y) = m\,h\left(\frac{x}{m},\frac{y}{m}\right).$$
From Eqs. (4) and (5), the relationship between the new amplitude orthographic view $I_{\theta_x,\theta_y}^{ortho\prime}$ for the scaled 3D object and the original orthographic view $I_{\theta_x,\theta_y}^{ortho}$ is obtained as
$$\begin{aligned} I_{\theta_x,\theta_y}^{ortho\prime}\left(x - h'(x,y)\theta_x,\ y - h'(x,y)\theta_y\right) &= f'(x,y) = f\left(\frac{x}{m},\frac{y}{m}\right)\\ &= I_{\theta_x,\theta_y}^{ortho}\left(\frac{x}{m} - h\left(\frac{x}{m},\frac{y}{m}\right)\theta_x,\ \frac{y}{m} - h\left(\frac{x}{m},\frac{y}{m}\right)\theta_y\right)\\ &= I_{\theta_x,\theta_y}^{ortho}\left(\frac{x - h'(x,y)\theta_x}{m},\ \frac{y - h'(x,y)\theta_y}{m}\right), \end{aligned}$$
which reduces to
$$I_{\theta_x,\theta_y}^{ortho\prime}(x,y) = I_{\theta_x,\theta_y}^{ortho}\left(\frac{x}{m},\frac{y}{m}\right).$$
Therefore the orthographic view images for the scaled 3D object are simply given by scaling the corresponding orthographic views of the original 3D object with the scaling factor m. Note that the linear relationship between the 3D object and the amplitude orthographic view enables the scaling of the 3D object with the same scaling ratio m both in the transverse and axial directions as revealed in Eqs. (5)-(7). In the proposed method, the scaled orthographic view images are finally used to synthesize the new hologram containing the scaled 3D object by using the non-hogel-based CGH technique as explained in the previous section.

Fig. 5. Geometry of the orthographic view and the surface of the scaled 3D object.

4. Numerical verification

For the verification of the proposed method, two kinds of holograms, i.e. one with multiple objects at discrete depths and the other with a single object of continuous depth, are synthesized and used as the original holograms before the scaling. The hologram of the discrete depth objects contains two plane objects at +4 mm and −4 mm distances from the hologram plane, respectively. Each plane object consists of two concentric circles at the same depth. In the case of the hologram of the continuous depth object, the concentric circles are inclined to cover a 2 mm to 7 mm depth range. In both cases, the hologram resolution is 1200×1200 and the pixel pitch is 6.0 µm. Figure 6 shows the 3D models of the objects and the amplitude and phase of the corresponding holograms.

Fig. 6. 3D models of (a) the discrete depth objects, and (b) the continuous depth objects. Amplitude and phase of the holograms of (c) the discrete depth objects and (d) continuous depth objects.

The first step of the proposed method is to extract the light field, or the array of orthographic views. The passband width in the Fourier plane of the hologram is set to a quarter of that of the original hologram, and the passband is sequentially moved to extract the orthographic views of different projection angles. In our work, 30×30 orthographic views are extracted in the horizontal and vertical directions. Figure 7 shows a few examples of the orthographic views extracted from each hologram. The horizontal and vertical view index is indicated at the upper left corner of each orthographic view in Fig. 7.

Fig. 7. Orthographic views extracted from the hologram (1200×1200). (a) Objects with discrete depths, (b) objects with continuous depth.

The next step is the orthographic view scaling. As described in section 3.2, for hologram scaling, each orthographic view is scaled by using the linear relationship between the orthographic view and the 3D object of the hologram. Each orthographic view is enlarged or reduced using bicubic interpolation. In the case of enlargement, the scale factor is set to m=2, resulting in 2400×2400 pixel resolution orthographic views. In the case of reduction, zero-padding is applied to maintain the pixel resolution of the orthographic view at 1200×1200 as in the previous step. Figure 8 shows the reduction case, where the scale factor is set to m=0.5.
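A minimal sketch of this view-scaling step is given below, assuming `scipy.ndimage.zoom` for the bicubic interpolation; the function name and the centred zero-padding layout are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def scale_view(view, m):
    """Scale one amplitude view by m; zero-pad reductions back to size."""
    scaled = zoom(view, m, order=3)                # order=3 -> bicubic
    if m < 1:                                      # reduction: pad to original
        out = np.zeros(view.shape, dtype=scaled.dtype)
        r0 = (view.shape[0] - scaled.shape[0]) // 2
        c0 = (view.shape[1] - scaled.shape[1]) // 2
        out[r0:r0 + scaled.shape[0], c0:c0 + scaled.shape[1]] = scaled
        return out
    return scaled                                  # enlargement: larger array

rng = np.random.default_rng(0)
view = rng.random((1200, 1200))
down = scale_view(view, 0.5)   # 1200 x 1200 with the 600 x 600 view centred
up = scale_view(view, 2.0)     # 2400 x 2400
```

Applying `scale_view` to each of the 30×30 views produces the scaled light field used for the hologram re-synthesis.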

Fig. 8. Down-scaled orthographic views by m=0.5 scaling factor. (a) Objects with discrete depths, (b) objects with continuous depth.

The final step of the proposed method is to re-synthesize the hologram from the scaled orthographic views using the non-hogel based CGH method. A random phase carrier wave is used in the hologram synthesis for the shallow depth of field, which makes the identification of the focused distance clearer. For the up-scaling case, the central 1200×1200 part of each 2400×2400 resolution orthographic view is used for the hologram synthesis due to the memory limitation in our implementation. Figure 9 shows the amplitude and the phase of the resynthesized holograms.

Fig. 9. Amplitude and phase of synthesized holograms with random carrier wave. (a) 0.5× down-scaled hologram of objects with discrete depths, (b) 2× up-scaled hologram of objects with discrete depths, (c) 0.5× down-scaled hologram of objects with continuous depth, (d) 2× up-scaled hologram of objects with continuous depth.

Numerical reconstruction is performed for the verification of the distortion-free scaling of the 3D objects. Figure 10 is the result of numerical propagations of the discrete depth hologram at different distances from −8 mm to +8 mm using the angular spectrum method. The objects in the original hologram have depths of +4 mm for the upper circle and −4 mm for the lower circle; Fig. 10(a) shows that each circle is focused at the corresponding distance, +4 mm and −4 mm. In the numerical reconstruction of the down-scaled hologram shown in Fig. 10(b), each circle is focused at −2 mm and +2 mm, which is half the focal distance of the original hologram objects. In the up-scaled hologram case shown in Fig. 10(c), it can be confirmed that each circle is focused at −8 mm and +8 mm, respectively.
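The angular spectrum method used for these reconstructions is standard; a minimal sketch, with the grid parameters taken from the simulation conditions above and an evanescent-wave cutoff as a common implementation choice, is:

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, pitch):
    """Propagate a sampled complex field by distance z (angular spectrum)."""
    N, M = field.shape
    fx = np.fft.fftfreq(M, d=pitch)            # spatial frequencies [1/m]
    fy = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # cut evanescent waves
    H = np.exp(1j * kz * z)                    # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a 1200 x 1200 hologram (6 um pitch) to z = +4 mm
holo = np.ones((1200, 1200), dtype=complex)    # trivial plane-wave hologram
rec = angular_spectrum_propagate(holo, 4e-3, 520e-9, 6e-6)
```

For the trivial plane-wave input the output amplitude stays uniform, as expected from a pure propagation phase.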

Fig. 10. Numerical reconstructions at different distances. (a) Original hologram of discrete depth objects, (b) 0.5× down-scaled hologram of discrete depth objects, (c) 2× up-scaled hologram of discrete depth objects. The focused reconstructions are indicated by red arrows.

Figure 11 shows the transverse size of the focused reconstructions. The double circle diameter of the original hologram reconstruction at 4 mm is 361 pixels horizontally and 358 pixels vertically as shown in Fig. 11(a). The double circle diameter of the 1/2× down-scaled hologram reconstruction at 2 mm is measured to be 184 pixels horizontally and 183 pixels vertically as shown in Fig. 11(b), which is approximately 1/2 of the original diameter. In the 2× up-scaled case shown in Fig. 11(c), the horizontal diameter at the focused 8 mm distance is measured to be 738 pixels, which again agrees well with the theoretical value of 722 = 361 × 2 pixels. From the results shown in Figs. 10 and 11, it is confirmed that the transverse direction and the axial direction have the same magnification ratio without distortion.

Fig. 11. Transverse size of the reconstructed objects. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.

Figure 12 shows the result of the numerical reconstructions of the holograms of the inclined object, which has a continuous depth range of 2 mm to 7 mm in the original hologram. For clear verification, the inner circle is selected as a comparative target and Fig. 13 shows magnified views around its focused distances. From Figs. 12(a) and 13(a), it can be seen that the upper part of the inner circle is focused at 4 mm, and the lower part is focused at 6 mm in the original hologram, as indicated by the yellow dashed box. The corresponding parts are focused at 2 mm and 3 mm for the 1/2× scaled hologram as shown in Figs. 12(b) and 13(b), and at 8 mm and 12 mm for the 2× scaled hologram by the proposed method as shown in Figs. 12(c) and 13(c). These results confirm that the proposed method applies not only to holograms with discrete depths but also to those with continuous depth.

Fig. 12. Numerical reconstructions of the hologram with continuous depth objects. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.

Fig. 13. Numerical reconstructions around the focus of the inner circle object. Yellow dashed line indicates the focused part. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.

In order to test the proposed technique with a 3D object of high detail, a hologram of a ‘hippo’ object is synthesized. The hippo object has a depth range of −2 mm to 2 mm, and the hologram has a resolution of 2160×2160 and a pixel pitch of 3.6 µm. The scaled holograms with scale factors 0.5 and 2 are resynthesized. Figure 14 shows the amplitude and phase of the original hologram and the scaled holograms.

Fig. 14. Amplitude and phase of synthesized holograms with random carrier wave. (a) Original hologram with a resolution of 2160×2160, (b) 0.5× down-scaled hologram with a resolution of 2160×2160, (c) 2× up-scaled hologram with a resolution of 4320×4320.

Figure 15 shows the numerical reconstructions of the holograms at different distances. In order to compare the details of the reconstructed 3D objects clearly, the reconstruction results focused on the head and the body of the hippo of each hologram are selected and shown in Fig. 16 with magnification. From Figs. 15(a) and 16(a), it can be seen that the head part of the hippo is focused at 1 mm, and the body part is focused at −1 mm in the original hologram. The corresponding parts are focused at 0.5 mm and −0.5 mm for the 1/2× scaled hologram as shown in Figs. 15(b) and 16(b), and at 2 mm and −2 mm for the 2× scaled hologram as shown in Figs. 15(c) and 16(c). From the results in Figs. 15 and 16, it can be confirmed that the proposed method applies well to holograms of objects with fine details.

Fig. 15. Numerical reconstructions of the holograms of the hippo object. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.

Fig. 16. Numerical reconstructions around the focus of the head and body of the hippo. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.

5. Optical experiment

An optical experiment was also conducted to verify the proposed method. The hologram data for the verification consists of two concentric circles, respectively located at −4 mm and +4 mm from the hologram plane as in the numerical simulation. The resolution of the original hologram used in the experiment is 2400×1400. From this original hologram, up-scaled and down-scaled holograms were synthesized using the proposed method with scale factors of m=2 and m=0.5, respectively. The amplitude and phase of each hologram are shown in Fig. 17.

Fig. 17. Amplitude and phase of the original and scaled holograms for the optical experiment. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.

The optical experiment setup is shown in Fig. 18. A laser of λ=520 nm wavelength illuminates a spatial light modulator (SLM) through a beam splitter. The SLM’s pixel pitch is 3.6 µm and its resolution is 3840×2160, of which only the central 2400×1400 part was used in the experiment. In order to avoid unwanted noise, an aperture was set in the Fourier plane of the 4-f system so that only about a 0.5(H)×1(V) portion of the vertical single sideband area passes through the aperture and contributes to the reconstruction. Each hologram was also filtered by limiting the angular spectrum range in the Fourier plane to the same size as the physical aperture of the 4-f system. For the 4-f system, two lenses with the same focal length of 15 cm were used to prevent optical imaging magnification. A neutral-density (ND) filter was used to adjust the intensity of the laser. The optical reconstruction was directly captured by a camera image sensor without any camera imaging lens. The camera was mounted on a linear stage to capture the reconstruction at various distances including −8 mm, −4 mm, −2 mm, 2 mm, 4 mm and 8 mm from the image plane of the SLM.
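The numerical counterpart of the 4-f aperture, i.e. limiting the angular spectrum of each hologram to the region passed by the physical aperture, can be sketched as below. The exact passband geometry (upper-half sideband, fractional widths) is an assumption for illustration only.

```python
import numpy as np

def limit_angular_spectrum(holo, frac_h=0.5, frac_v=0.5):
    """Keep only a sub-region of the angular spectrum (assumed geometry:
    an upper vertical sideband of fractional size frac_h x frac_v)."""
    N, M = holo.shape
    S = np.fft.fftshift(np.fft.fft2(holo))
    mask = np.zeros((N, M))
    h, w = int(N * frac_v), int(M * frac_h)
    mask[N // 2 - h:N // 2, M // 2 - w // 2:M // 2 + w // 2] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(S * mask))

rng = np.random.default_rng(1)
H = np.exp(1j * 2 * np.pi * rng.random((256, 256)))   # toy hologram
H_f = limit_angular_spectrum(H)                        # band-limited copy
```

The band-limited hologram carries less energy and, as noted below, exhibits an enlarged depth of focus.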

Fig. 18. Optical experiment setup.

Figures 19 and 20 show the results of numerical reconstructions and optical reconstructions of the original, 1/2× down-scaled, and 2× up-scaled holograms with the limited angular spectrum range. As the angular spectrum range is limited, the depth of focus is enlarged and the focusing effect is reduced in the experiment. Nevertheless, both in the numerical reconstructions and the optical reconstructions, it can be confirmed that the down-scaled hologram with a scale factor of 1/2 is focused at −2 mm and 2 mm, and the up-scaled hologram with the scale factor 2 is focused at −8 mm and 8 mm, showing the expected 1/2× and 2× axial magnification of the reconstruction. The transverse size of the reconstruction is also measured to be 1/2× and 2×, respectively, as shown in Fig. 20. Therefore, the optical experiment successfully demonstrates that the transverse direction and the axial direction have the same magnification ratio in the hologram scaled by the proposed method.

Fig. 19. Numerical reconstructions of the holograms in Fig. 17 (λ=520 nm, angular spectrum range limited). (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram. Red arrows indicate the focused parts.

Fig. 20. Optical reconstructions of the holograms in Fig. 17 (λ=520 nm, angular spectrum range limited). (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram. Red arrows indicate the focused parts.

6. Limitations and discussion

Although the distortion-free scaling of the hologram by the proposed method has been successfully verified by numerical simulations and optical experiments, the spatial resolution of the final reconstruction of the scaled hologram can be reduced from that of the original hologram. This spatial resolution loss is caused by the orthographic view extraction step. In the proposed method, the orthographic views are extracted by applying bandpass filtering to the hologram. As illustrated in Fig. 21, the bandpass filtering limits the angular range of the reconstructed 3D points around a specific angle. The limited angular range increases the depth of field, making the entire 3D object in-focus in the amplitude part of the filtered hologram. The central angle of the angular range determines the central projection angle of the 3D object onto the hologram plane. Therefore, by the bandpass filtering, the amplitude part of the filtered hologram can be approximated as the corresponding orthographic view image. However, since the angular spectrum is limited by the bandpass filter, the maximum spatial resolution of the filtered hologram, or the orthographic view image, is also reduced by the same factor. Moreover, even though the depth of field is enlarged by the limited angular spectrum, 3D image points with large depth are still blurred. These two factors reduce the spatial resolution of the extracted orthographic views, which eventually affects the final reconstruction of the scaled hologram. Note that the maximum spatial resolution and the depth of field have a trade-off relationship, and thus a compromise should be made in the application. In all simulations and experiments presented in this paper, the bandwidth of the bandpass filter was set to a quarter of the full bandwidth of the hologram, which was found to give plausible results under the simulation and experiment conditions used.

Fig. 21. Angular spectrum range of the hologram and the angular range of the reconstruction. (a) Full angular spectrum range is used, (b) bandpass filtering is applied.

In order to analyze the resolution loss quantitatively, we performed additional simulations with point objects. The 3D scene has 8 horizontal point arrays. Each horizontal point array consists of 16 random-phase single-pixel point objects at the same depth. The 8 horizontal point arrays are located from −3.5 mm to +3.5 mm with 1 mm depth spacing from the hologram plane. The hologram was synthesized with 1200×1200 resolution and 6 µm pixel pitch. Figure 22 shows the orthographic view images extracted from the hologram by using the bandpass filtering with the quarter bandwidth, as in all other simulations and experiments. The top row of Fig. 22 shows 3 examples of the extracted views. In the middle and the bottom rows of Fig. 22, magnified portions of the point objects at different depths in the extracted views are shown. As shown in Fig. 22, the point objects are blurred over multiple pixels in the extracted views due to the maximum spatial resolution loss caused by the bandpass filtering. It is also confirmed that the amount of blur is larger for the ±3.5 mm points than for the ±0.5 mm points because of the non-ideal depth of field, as explained above.

Fig. 22. Extracted orthographic views from the hologram of the 8 horizontal point arrays. Top row shows 3 examples of the extracted views and the middle and bottom rows are magnified portions for the point objects at different depths.

Figure 23 shows the reconstruction of the hologram synthesized from the extracted orthographic views when the orthographic views are used as they are without any scaling, i.e. m=1. The depth-dependent blur in the extracted views is reflected in the final reconstruction, making the single-pixel object point reconstructions spread over multiple pixels according to their depths. Figure 24 shows the reconstructions of the holograms synthesized with different scaling factors, i.e. m=0.5, m=1, and m=2. The blur in the extracted orthographic views is simply scaled by m and again reflected in the final reconstructions.

Fig. 23. Numerical reconstructions of the hologram synthesized with m=1. Each column is the reconstruction at a different depth.

Fig. 24. Numerical reconstructions of the holograms synthesized with m=0.5 (left column), m=1 (middle column), and m=2 (right column). In the upper two rows, the reconstructions are focused on the horizontal point array located at 5-th row from the top in each scaling case. In the lower two rows, the reconstructions are focused on the bottom-row horizontal point array.
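Depth-by-depth numerical reconstructions like those in Figs. 23 and 24 can be obtained with standard angular-spectrum propagation. A minimal sketch of such a propagator (our own illustrative implementation, not the authors' code):

```python
import numpy as np

def angular_spectrum_propagate(field, pitch, wavelength, z):
    """Numerically propagate a complex field over distance z using the
    angular spectrum method; reconstructing at z brings objects at that
    depth into focus."""
    N, M = field.shape
    fx = np.fft.fftfreq(M, d=pitch)
    fy = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Sweeping z over the object depth range (e.g. -3.5 mm to +3.5 mm for the point arrays) and taking the intensity of the propagated field gives the focal stacks shown in the figures.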

Figure 25 quantifies the spatial resolution decrease caused by the non-ideal depth of field in the view extraction step. It plots the peak signal-to-noise ratio (PSNR) of the reconstructions of holograms of a 2D ‘pepper’ image object placed at different depths. For each depth, the original hologram is synthesized and the orthographic views are extracted from it. A new hologram is then synthesized from the extracted views with m=1 and reconstructed at the object depth. Each reconstructed image is compared with the z=0 reconstruction for the PSNR measurement. As expected, Fig. 25 shows that the PSNR decreases as the object depth increases, owing to the depth-dependent blur in the view extraction.

Fig. 25. PSNR measurement of the hologram reconstructions of the 2D object when the object depth varies from 0 mm to 7 mm. The PSNR of each object reconstruction is measured with respect to the z=0 case reconstruction.
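The PSNR metric used in Fig. 25 can be sketched as follows. This is a generic implementation; the paper does not specify its exact peak normalization, so the reference image maximum is assumed here.

```python
import numpy as np

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB between two amplitude images,
    taking the reference image maximum as the signal peak."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64))**2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(ref.max()**2 / mse)
```

For the measurement in Fig. 25, `ref` would be the z=0 reconstruction and `test` the reconstruction of the object placed at each depth.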

From the analysis above, we can conclude that the major factors causing the resolution loss in the proposed method are the maximum spatial resolution limitation and the non-ideal depth of field of the bandpass-filtering-based orthographic view extraction step. Although this limits the performance of the current implementation of the proposed technique, it can be alleviated by enhancing the view extraction step. We believe possible approaches include applying image super-resolution techniques [10,11] and extended depth-of-field reconstruction techniques [12,13], both active fields of research, to the extracted view images or the filtered hologram.

7. Summary

A distortion-free hologram scaling method is proposed. Conventional hologram scaling techniques distort the reconstructed 3D images due to different transverse and axial magnification ratios, i.e., m in the transverse and m² in the axial direction. The proposed method achieves a constant magnification ratio m in both the transverse and axial directions by exploiting the linear scaling relationship between the orthographic views and the 3D object. The proposed method first extracts the orthographic amplitude views by applying bandpass filtering to the original hologram while sliding the passband over the angular spectrum range. Next, the extracted orthographic amplitude views are up- or down-scaled by a desired scale factor m to magnify the 3D object of the hologram. Finally, a new hologram is synthesized from the scaled orthographic amplitude views using a non-hogel-based CGH method. The proposed method is verified using holograms of discrete-depth objects, continuous-depth objects, and continuous-depth objects with high detail. Numerical simulations and optical experiments successfully demonstrate distortion-free scaling with the same magnification ratio in the transverse and axial directions.
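The per-view scaling step of the pipeline summarized above follows Eq. (7), I'(x, y) = I(x/m, y/m). A minimal nearest-neighbour sketch (illustrative only; any standard image resampling method would serve equally well):

```python
import numpy as np

def scale_view(view, m):
    """Scale an orthographic view by factor m, implementing
    I'(x, y) = I(x/m, y/m) with nearest-neighbour resampling."""
    N, M = view.shape
    out_N, out_M = int(round(N * m)), int(round(M * m))
    # Map each output pixel back to its source pixel at (x/m, y/m)
    ys = np.clip(np.round(np.arange(out_N) / m).astype(int), 0, N - 1)
    xs = np.clip(np.round(np.arange(out_M) / m).astype(int), 0, M - 1)
    return view[np.ix_(ys, xs)]
```

Applying the same factor m to every extracted view before hologram synthesis is what yields the constant transverse and axial magnification, since each view is a simple 2D image free of the quadratic depth dependence of the hologram itself.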

Funding

National Research Foundation of Korea (2017R1A2B2011084); Institute for Information and Communications Technology Promotion (GK20D0100).

Disclosures

The authors declare no conflicts of interest.

References

1. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, “Shifted Fresnel diffraction for computational holography,” Opt. Express 15(9), 5631–5640 (2007). [CrossRef]  

2. H. Zhang, L. Cao, and G. Jin, “Scaling of three-dimensional computer-generated holograms with layer-based shifted Fresnel diffraction,” Appl. Sci. 9(10), 2118 (2019). [CrossRef]  

3. T. Shimobaba, T. Kakue, N. Okada, M. Oikawa, Y. Yamaguchi, and T. Ito, “Aliasing-reduced Fresnel diffraction with scale and shift operations,” J. Opt. 15(7), 075405 (2013). [CrossRef]  

4. T. Shimobaba, M. Makowski, T. Kakue, M. Oikawa, N. Okada, Y. Endo, R. Hirayama, and T. Ito, “Lensless zoomable holographic projection using scaled Fresnel diffraction,” Opt. Express 21(21), 25285–25290 (2013). [CrossRef]  

5. S. Trejos, J. F. Barrera, A. Velez, M. Tebaldi, and R. Torroba, “Optical approach for the efficient data volume handling in experimentally encrypted data,” J. Opt. 18(6), 065702 (2016). [CrossRef]  

6. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005).

7. J.-H. Park and M. Askari, “Non-hogel-based computer generated hologram from light field using complex field recovery technique from Wigner distribution function,” Opt. Express 27(3), 2562–2574 (2019). [CrossRef]  

8. J.-H. Park, “Efficient calculation scheme for high pixel resolution non-hogel-based computer generated hologram from light field,” Opt. Express 28(5), 6663–6683 (2020). [CrossRef]  

9. D.-Y. Park and J.-H. Park, “Hologram conversion for speckle free reconstruction using light field extraction and deep learning,” Opt. Express 28(4), 5393–5409 (2020). [CrossRef]  

10. J. Yang, J. Wright, and T. S. Huang, “Image super-resolution via sparse representation,” IEEE Trans. on Image Process. 19(11), 2861–2873 (2010). [CrossRef]  

11. C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016). [CrossRef]  

12. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

13. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  


Figures (25)

Fig. 1.
Fig. 1. Subsampling method and 3D image distortion.
Fig. 2.
Fig. 2. Numerical imaging method and 3D image distortion.
Fig. 3.
Fig. 3. Overall process of the proposed method.
Fig. 4.
Fig. 4. Geometry of the orthographic view and the surface of the 3D object.
Fig. 5.
Fig. 5. Geometry of the orthographic view and the surface of the scaled 3D object.
Fig. 6.
Fig. 6. 3D models of (a) the discrete depth objects, and (b) the continuous depth objects. Amplitude and phase of the holograms of (c) the discrete depth objects and (d) continuous depth objects.
Fig. 7.
Fig. 7. Orthographic views extracted from the hologram (1200×1200). (a) Objects with discrete depths, (b) objects with continuous depth.
Fig. 8.
Fig. 8. Down-scaled orthographic views by m=0.5 scaling factor. (a) Objects with discrete depths, (b) objects with continuous depth.
Fig. 9.
Fig. 9. Amplitude and phase of synthesized holograms with random carrier wave. (a) 0.5× down-scaled hologram of objects with discrete depths, (b) 2× up-scaled hologram of objects with discrete depths, (c) 0.5× down-scaled hologram of objects with continuous depth, (d) 2× up-scaled hologram of objects with continuous depth.
Fig. 10.
Fig. 10. Numerical reconstructions at different distances. (a) Original hologram of discrete depth objects, (b) 0.5× down-scaled hologram of discrete depth objects, (c) 2× up-scaled hologram of discrete depth objects. The focused reconstructions are indicated by red arrows.
Fig. 11.
Fig. 11. Transverse size of the reconstructed objects (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.
Fig. 12.
Fig. 12. Numerical reconstructions of the hologram with continuous depth objects. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.
Fig. 13.
Fig. 13. Numerical reconstructions around the focus of the inner circle object. Yellow dashed line indicates the focused part. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.
Fig. 14.
Fig. 14. Amplitude and phase of synthesized holograms with random carrier wave. (a) original hologram with a resolution of 2160×2160, (b) 0.5× down-scaled hologram with resolution of 2160×2160, (c) 2× up-scaled hologram with resolution of 4320×4320.
Fig. 15.
Fig. 15. Numerical reconstructions of the holograms of hippo object. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.
Fig. 16.
Fig. 16. Numerical reconstructions around the focus of the head and body of the hippo. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.
Fig. 17.
Fig. 17. Amplitude and phase of the original and scaled holograms for optical experiment. (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram.
Fig. 18.
Fig. 18. Optical experiment setup.
Fig. 19.
Fig. 19. Numerical reconstructions of the holograms in Fig. 17 (λ=520 nm, angular spectrum range limited). (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram. Red arrays indicate the focused parts.
Fig. 20.
Fig. 20. Optical reconstructions of the holograms in Fig. 17 (λ=520 nm, angular spectrum range limited). (a) Original hologram, (b) 0.5× down-scaled hologram, (c) 2× up-scaled hologram. Red arrays indicate the focused parts.
Fig. 21.
Fig. 21. Angular spectrum range of the hologram and the angular range of the reconstruction. (a) Full angular spectrum range is used, (b) bandpass filtering is applied.

Equations (7)

$$H(x,y)=\exp\left[\frac{jk}{2z_p}\left\{(x-x_p)^2+(y-y_p)^2\right\}\right],$$

$$H'(x,y)=H\!\left(\frac{x}{m},\frac{y}{m}\right)=\exp\left[\frac{jk}{2m^2 z_p}\left\{(x-mx_p)^2+(y-my_p)^2\right\}\right].$$

$$M_{trans}=\frac{d_2}{d_1}=m,\quad M_{axial}=\frac{\Delta d_2}{\Delta d_1}=\left(\frac{d_2}{d_1}\right)^2=m^2.$$

$$f(x,y)=I^{ortho}_{\theta_x,\theta_y}\!\left(x-h(x,y)\theta_x,\; y-h(x,y)\theta_y\right),$$

$$f'(x,y)=f\!\left(\frac{x}{m},\frac{y}{m}\right),\quad h'(x,y)=m\,h\!\left(\frac{x}{m},\frac{y}{m}\right).$$

$$I'^{ortho}_{\theta_x,\theta_y}\!\left(x-h'(x,y)\theta_x,\; y-h'(x,y)\theta_y\right)=f'(x,y)=f\!\left(\frac{x}{m},\frac{y}{m}\right)=I^{ortho}_{\theta_x,\theta_y}\!\left(\frac{x}{m}-h\!\left(\frac{x}{m},\frac{y}{m}\right)\theta_x,\; \frac{y}{m}-h\!\left(\frac{x}{m},\frac{y}{m}\right)\theta_y\right)=I^{ortho}_{\theta_x,\theta_y}\!\left(\frac{x-h'(x,y)\theta_x}{m},\; \frac{y-h'(x,y)\theta_y}{m}\right),$$

$$I'^{ortho}_{\theta_x,\theta_y}(x,y)=I^{ortho}_{\theta_x,\theta_y}\!\left(\frac{x}{m},\frac{y}{m}\right).$$
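The m² axial magnification of direct hologram scaling can be verified numerically from the first two equations: scaling a Fresnel point hologram transversely by m yields exactly the hologram of a point at (m·x_p, m·y_p) and depth m²·z_p. A NumPy check with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def point_hologram(x, y, xp, yp, zp, k):
    """Fresnel point-source hologram, following Eq. (1)."""
    return np.exp(1j * k / (2 * zp) * ((x - xp)**2 + (y - yp)**2))

# Sampling grid and parameters (illustrative values only)
pitch, k = 6e-6, 2 * np.pi / 520e-9
n = np.arange(-64, 64)
x, y = np.meshgrid(n * pitch, n * pitch)
xp, yp, zp, m = 30e-6, -18e-6, 2e-3, 2.0

# Directly scaled hologram H'(x, y) = H(x/m, y/m) ...
H_scaled = point_hologram(x / m, y / m, xp, yp, zp, k)
# ... equals the hologram of a point at (m*xp, m*yp) and depth m^2 * zp, Eq. (2)
H_eq2 = point_hologram(x, y, m * xp, m * yp, m**2 * zp, k)
assert np.allclose(H_scaled, H_eq2)
```

The identity confirms that direct scaling moves the focus to m²·z_p while the transverse position scales only by m, which is precisely the distortion the proposed light-field-domain scaling avoids.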