
Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects


Abstract

Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Computer holography to produce large-scale computer-generated holograms (CGHs) of nonphysical objects has been developed over the last decade [1–5]. These large-scale CGHs are composed of several billion, or sometimes several tens of billions, of pixels. Such large pixel counts are needed because a gigantic space-bandwidth product (SBP) is required to produce high-quality three-dimensional (3D) imaging in computer holography. To ensure viewing angles of several tens of degrees, the pixel pitches must be less than or equal to 1 μm. Additionally, to produce holograms of several hundred square centimeters in size, the CGHs generally require large numbers of pixels. This is the SBP problem in computer holography.
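To make these numbers concrete, here is a quick back-of-the-envelope check in Python (a sketch with illustrative values; the viewing angle follows from the standard grating equation, and the pixel count follows directly from the pitch and hologram size quoted above):

```python
import math

# Illustrative values only: a 1 um pixel pitch and a red wavelength.
wavelength = 0.633e-6   # m
pitch = 1.0e-6          # m, pixel pitch of the CGH

# Full diffraction viewing angle of a hologram with pixel pitch p:
# theta = 2 * arcsin(lambda / (2 * p))
theta = 2 * math.degrees(math.asin(wavelength / (2 * pitch)))
print(f"viewing angle ~ {theta:.1f} deg")   # ~36.9 deg

# Pixel count of a 6.5 cm x 6.5 cm hologram at this pitch
side = 0.065                                 # m
pixels = (side / pitch) ** 2
print(f"pixel count ~ {pixels:.2e}")         # ~4.2e9 pixels
```

At a 1 μm pitch, even a hologram only 6.5 cm on a side already requires more than four billion pixels, which is the scale of the CGHs described below.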

Our large-scale CGHs not only have large SBPs but also reconstruct natural motion parallax because occlusions are processed via the silhouette method [4]. As a result, these large-scale CGHs, which are called high-definition CGHs [1], can reconstruct very deep 3D scenes in full parallax without any vergence–accommodation conflict. The quality of the reconstructed 3D images is comparable to that of images produced by traditional optical holography. The creation of 3D images in conventional holography consists of two steps: recording and reconstruction. In the recording step, fringes generated by the interference of an object field with a reference field are recorded on light-sensitive materials, such as silver halide films. In digital holography (DH), these light-sensitive materials are replaced with an image sensor [6,7], and the DH technique has been developed over the last few decades. Using this technique, two-dimensional (2D) images are commonly reconstructed from the captured fringe pattern. Unlike conventional photographs, these 2D images contain information about the phase of the object light, and thus DH offers many attractive possibilities for measurement applications [8].

Because the object field can be extracted from the captured fringes using techniques such as phase shifting [9], CGHs could also be created from physical objects using DH techniques. We call this idea digitized holography, because both of the processes of conventional optical holography (i.e., recording and reconstruction) are replaced by their digital counterparts in this concept [10]. Figure 1 illustrates the concept and shows an example of digitized holography. The object fields that are captured by DH are handled in a computer and can be arranged in a virtual 3D scene with other nonphysical objects, which are represented by polygon-meshed CG models.


Fig. 1. Digitized holography: (a) schematic illustration of the concept, and (b) optical reconstruction of a high-definition CGH called “Bear-II” created using digitized holography [10]. Bear-II is composed of more than four billion pixels.


Digitized holography has several definite advantages over traditional holography in addition to this capability for mixed reconstruction of physical and nonphysical objects. It also makes it possible to edit the resulting 3D scene digitally. For example, the position of a physical object can be designed freely, and objects whose fields were captured individually can be made to appear in a 3D scene multiple times. The object can also be resized using digital techniques [11]. However, several challenges must be overcome to realize digitized holography. The sensor pitches of the image sensors currently available on the market are too large to achieve viewing angles of several tens of degrees. Additionally, the SBP, i.e., the total number of sensor pixels, in these sensors is too small to allow the captured object field to be used to create large-scale CGHs. Therefore, a lensless Fourier setup [12] and synthetic aperture techniques [13] must be used to capture the object fields.

In addition to digitized holography, the reconstruction of existing physical objects using CGHs has long attracted research attention. For example, stereoscopic images have been used to create CGHs of physical objects [14–16]. In this case, a disparity map is used to provide the object depth data. Multiple-viewpoint projection [17] or multiple-viewpoint-image-based techniques commonly use more than two images. To capture multiple images, the former technique uses a single camera that is located at several different viewpoints [18,19] or a microlens array [20]. The integral photography method also uses microlens arrays to generate holograms [21–24]. Recently, the light field concept was used to synthesize holograms of physical objects [25,26]; a commercial plenoptic camera was actually used for image capture in [26]. However, the SBPs of the object fields captured by these techniques are commonly not large enough for use in large-scale digitized holography. In addition, because the vergence–accommodation conflict has not been sufficiently resolved in these techniques to date, deep 3D scenes cannot be reconstructed with high-definition CGHs. The ray-sampling plane technique, which is based on ray-wavefront conversion, has been used successfully to create CGHs that reconstruct deep 3D scenes of nonphysical objects with reduced vergence–accommodation conflict [27,28]. Although a scanning vertical camera array has been used with this technique to capture the field of a physical object [29], the resulting image quality is not sufficient for reconstruction of high-quality 3D images in large-scale computer holography.

Recently, we proposed a new technique for reconstructing high-definition CGHs in full color [30]. In this technique, red-green-blue (RGB) color filters, fabricated using the same techniques as those used for liquid crystal display panels, are attached to the fringe pattern of the high-definition CGH. The fringe pattern is divided into multiple stripes, and the striped fringe behind each color filter is calculated at the wavelength that corresponds to its primary color, so viewers see full-color 3D images. This simple technique has made it possible to display color CGHs easily in exhibitions.

Until recently, digitized holography was used only to reconstruct monochromatic 3D images. However, we have lately created several full-color CGHs of real objects, one of which uses this technique [31]. To realize full-color reconstruction in digitized holography, the large-scale object fields must be captured at three wavelengths. In this paper, we present the required technique along with experimental results of its implementation. The captured object fields are reconstructed in a mixed 3D scene containing several nonphysical objects. The technique used to capture the object fields at three wavelengths is discussed in Section 2. Section 3 then describes creation of the mixed 3D scene. Optical reconstructions of a large-scale CGH composed of four billion pixels are demonstrated in Section 4.

2. CAPTURE OF THE OBJECT FIELD AT THREE WAVELENGTHS

A. Experimental Setup

Figure 2 shows the experimental setup used to capture the object fields at three wavelengths using a lensless Fourier setup [12] and a synthetic aperture technique [13]. The outputs of three light sources with wavelengths that correspond to the three primary colors, i.e., red (R), green (G), and blue (B), are switched using electrically controlled mechanical shutters (S). The three beams are combined into a single beam using beam splitters. The combined beam is then divided into two arms to provide the reference and illumination light. We use half-wavelength plates (HWPs) to control the power distribution ratios at each of the wavelengths. Note that HWP4 must work at all three wavelengths; although its working range is 400–700 nm according to the product specifications, a certain amount of error in the distribution ratios is unavoidable.


Fig. 2. Experimental setup used to capture an object field by lensless Fourier synthetic aperture digital holography at three wavelengths. M, mirror; S, shutter; ND, neutral density filter; HWP, half-wavelength plate; BS, beam splitter; PBS, polarizing beam splitter; MO, microscope objective; SF, spatial filter.


The illumination arm is also subdivided into two beams and illuminates the subject through objective lenses. The reference beam is converted into a spherical field using the spatial filter (SF) and then irradiates the image sensor through a plate beam splitter (BS4). Here, mirror M6 is installed on a piezo translator to vary the phase of the reference field. The image sensor is installed on a mechanical X-Y translation stage to allow its position to be varied within a plane that lies perpendicular to the optical axis.

B. Procedure for Image Capture at Three Wavelengths

Because the phase-shifting technique is used to extract the object field alone from the fringe images, at least three fringe images must be captured in quick succession. We therefore first opened the shutter of a specific laser and recorded three fringe images by varying the reference phase at that wavelength. The opened shutter was then switched, and three more fringe images were recorded at another wavelength at the same position.

After completing image capture at the three wavelengths at the same position, the image sensor was moved using the X-Y mechanical stage, and the object fields were then captured again at the three wavelengths but at a new position. We repeated this procedure within a specific area in which optical fringes are generated. The sequence of operations involving the shutters, the piezo translator, the image sensor, and the X-Y stage was controlled using a computer.
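As an illustration of the extraction step, the following minimal sketch recovers the complex object field from three phase-shifted fringe images. The reference phase shifts of 0, π/2, and π are an assumption for illustration; the shift values used in the experiment are not stated above.

```python
import numpy as np

def three_step_object_field(i0, i1, i2):
    """Extract O * conj(R) from three fringe images recorded with assumed
    reference phase shifts of 0, pi/2, and pi.

    Each image is i_k = B + O R* exp(i d_k) + O* R exp(-i d_k), where
    B = |O|^2 + |R|^2; eliminating B gives
    O R* = [(i0 - i2) + 1j * (i0 + i2 - 2 * i1)] / 4.
    """
    i0, i1, i2 = (np.asarray(i, dtype=float) for i in (i0, i1, i2))
    return ((i0 - i2) + 1j * (i0 + i2 - 2.0 * i1)) / 4.0
```

For a unit-amplitude reference, the returned array is proportional to the object field at the sensor plane.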

C. Captured Fields and Fourier-Transformed Object Fields

The object fields of a physical object were captured at three wavelengths using the above procedure. A photograph of the object is shown in Fig. 3. The object is composed of ceramic materials and has several colors. The amplitude images of the captured fields and the parameters used in the capture process are shown in Fig. 4 and Table 1, respectively. The movement distance of the image sensor is less than the sensor size, which means that neighboring fringe patterns overlap with each other. A cross-correlation function of the field captured in the overlap area is used to assemble the individual captured fields exactly into a complete field. The recording process was repeated 576 (= 12 × 16 × 3) times for each wavelength, i.e., three phase-shifted exposures at each of 12 × 16 sensor positions. The object field obtained by the three-step phase-shifting technique was embedded in a 32K × 32K sampling array (where 1K = 1024).
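The tile-assembly step can be sketched as follows; the FFT-based correlation and integer-pixel peak search below are illustrative stand-ins for the cross-correlation procedure described above (sub-pixel refinement is omitted):

```python
import numpy as np

def tile_offset(ref_tile, new_tile):
    """Estimate the (dy, dx) offset of new_tile relative to ref_tile
    from the peak of their cross-correlation (integer pixels only)."""
    # FFT-based cross-correlation of the two complex fields
    xcorr = np.fft.ifft2(np.fft.fft2(ref_tile) * np.conj(np.fft.fft2(new_tile)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Map peaks beyond half the array size to negative shifts
    return tuple(int(p) if p < s // 2 else int(p) - s
                 for p, s in zip(peak, xcorr.shape))
```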


Table 1. Parameters Used for Capture of Object Fields at Three Wavelengths


Fig. 3. Photograph of the subject used for the experiment.



Fig. 4. Amplitude images of object fields captured at three wavelengths.


Suppose that the captured field is $f_s(x_s, y_s; \lambda)$ at a wavelength $\lambda$; the object wave field is then given by [32]

$$f(x, y; \lambda) = \mathcal{F}\{f_s(x_s, y_s; \lambda)\}\Big|_{u_s = x/(\lambda d_R),\, v_s = y/(\lambda d_R)} \times \exp\!\left[\frac{i\pi}{\lambda d_R}\left(x^2 + y^2\right)\right], \tag{1}$$

where $u_s$ and $v_s$ are the Fourier frequencies with respect to the sensor coordinates $x_s$ and $y_s$, respectively; these coordinates are shown in Fig. 2. $d_R$ represents the distance between the sensor plane and the center of the spherical reference wave, which is considered to be located at the pinhole of the SF in Fig. 2.
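A minimal numerical sketch of Eq. (1), assuming a square sampling array and an FFT-based evaluation of the Fourier transform:

```python
import numpy as np

def lensless_fourier_field(fs, wavelength, d_r, sensor_pitch):
    """Apply Eq. (1): Fourier-transform the captured field and multiply
    by the quadratic phase factor exp[i*pi*(x^2 + y^2)/(lambda*d_r)]."""
    n = fs.shape[0]                        # assume an n x n array
    f = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(fs)))
    # Object-plane sampling interval, Eq. (2)
    delta = wavelength * d_r / (n * sensor_pitch)
    x = (np.arange(n) - n // 2) * delta
    xx, yy = np.meshgrid(x, x)
    return f * np.exp(1j * np.pi * (xx**2 + yy**2) / (wavelength * d_r))
```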

Amplitude images of the Fourier-transformed fields are shown in Fig. 5. These amplitude images appear unfocused because the object fields were recorded over a large area; this is analogous to macrophotography performed with a shallow depth of field.


Fig. 5. Amplitude images of the Fourier transformed object fields (a) 633 nm (red), (b) 532 nm (green), and (c) 488 nm (blue).


3. CREATION OF A MIXED 3D SCENE

A. Preparation of Captured Object Fields

The object fields that are shown in Fig. 5 have different sampling intervals because they were captured using lensless Fourier DH. The sampling interval of a Fourier-transformed field in lensless Fourier DH is given as follows [32]:

$$\Delta = \frac{\lambda d_R}{N \Delta_S}, \tag{2}$$

where $N$ and $\Delta_S$ are the number of samples and the sensor pitch, respectively.

To calculate the synthetic object field of a mixed 3D scene with occlusion processing performed by the silhouette method [4], it is necessary for the three object fields to have the same sampling interval. Therefore, the object fields were extracted within the dashed rectangles shown in Fig. 5 and were resampled using a cubic interpolation method to ensure that all object fields had the same sampling interval of 1.0 μm.
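This preparation step can be sketched as follows, using cubic spline interpolation from scipy as a stand-in for the cubic interpolation method mentioned above:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_pitch(field, wavelength, d_r, n, sensor_pitch, target=1.0e-6):
    """Resample an extracted object field to a common sampling interval.

    The native pitch follows Eq. (2), delta = lambda*d_r/(N*delta_s),
    so the array is rescaled by delta/target. Real and imaginary parts
    are interpolated separately with cubic splines (order=3).
    """
    delta = wavelength * d_r / (n * sensor_pitch)
    factor = delta / target
    return (zoom(field.real, factor, order=3)
            + 1j * zoom(field.imag, factor, order=3))
```

Because Δ is proportional to λ, the red, green, and blue fields are rescaled by different factors before they share the common 1.0 μm grid.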

B. Mixed 3D Scene

Figure 6 shows the 3D scene that was created. Two nonphysical objects are included within the scene: a table set produced using polygon-meshed CG models and a digital image that is used for the background. These nonphysical objects are shown in Fig. 7. The table set is composed of 8328 polygons, while the background image is composed of 512 × 512 pixels.


Fig. 6. Mixed 3D scene composed of physical and nonphysical objects.



Fig. 7. Models of nonphysical objects: (a) polygon-meshed objects, and (b) digital image used to provide the background.


According to the principle of the silhouette method [4], the background image field is first calculated and is then propagated to the deepest position in the table set that is represented by the polygon mesh. The table set has a complex shape and thus has complex self-occlusions. The object field was therefore calculated using the switch-back technique that provides polygon-by-polygon silhouette shielding [4].

C. Synthetic Wave Field of a Mixed 3D Scene

In the mixed 3D scene, the physical objects, i.e., a teapot and a cup, are placed on the table. Occlusion processing must also be performed for the physical objects. The silhouette technique is again used for this purpose: the fields of the background and the table set are shielded by multiplying them by a binary mask whose shape agrees with the cross sections of the physical objects [10]. The binary mask generated from the captured field is shown in Fig. 8. The wave field of the nonphysical objects is calculated on a plane that intersects the pot at its center, as shown in Fig. 9, and is then multiplied by the binary mask. However, the wave field of the physical object shown in Fig. 5 is given in the plane that intersects the cup in front of the pot. Therefore, that wave field is propagated backwards by 15 mm so that it is given in the same plane as the masked wave field, and is then added to the masked field. Finally, the total wave field is propagated by 120 mm to the hologram plane.
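A compact sketch of this synthesis follows; propagate(field, distance, wavelength) is a hypothetical field propagator (e.g., an angular spectrum routine such as the one sketched in Section 4.B), not a function defined in this paper:

```python
import numpy as np

def synthesize_mixed_field(nonphysical_field, captured_field, mask,
                           propagate, wavelength):
    """Silhouette occlusion of the physical objects and field synthesis.

    nonphysical_field: background + table-set field on the plane through
    the center of the pot; mask: binary silhouette of the physical
    objects (0 inside the silhouette, 1 outside).
    """
    shielded = nonphysical_field * mask                 # silhouette shielding
    # Back-propagate the captured field 15 mm onto the masked plane
    moved = propagate(captured_field, -15e-3, wavelength)
    total = shielded + moved
    # Propagate the synthetic field 120 mm to the hologram plane
    return propagate(total, 120e-3, wavelength)
```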


Fig. 8. Binary mask used to perform occlusion processing by the silhouette method.



Fig. 9. Procedure used for silhouette shielding of the physical object.


4. RECONSTRUCTION OF FULL-COLOR CGH AND DISCUSSION

A. Fabrication of Full-Color CGH

The fringe patterns were generated through numerical interference with a reference wave, as follows:

$$I(x, y) = |O(x, y) + R(x, y)|^2 \simeq 2\,\mathrm{Re}\{O(x, y) R^*(x, y)\} + B, \tag{3}$$

where $O(x, y)$ and $R(x, y)$ are the object and reference fields, respectively. The symbol $\mathrm{Re}\{\cdot\}$ represents the real part of a complex number, and the constant $B$ approximately represents $|O(x, y)|^2 + |R(x, y)|^2$.

In practice, we calculated only the bipolar fringe intensity $\mathrm{Re}\{O(x, y) R^*(x, y)\}$ by assuming that $B = 0$, and then binarized it using a zero threshold for printing. The three binary fringes for the RGB colors were combined into a single pattern, which was printed using a DWL-66 laser writer (Heidelberg Instruments). Finally, the full-color CGH was fabricated by attaching the RGB color filters to the printed fringe [30].
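The binarization step, and one plausible way to interleave the three binary fringes into filter-aligned stripes, can be sketched as follows. The row-wise stripe layout and its default parameters are assumptions for illustration; the actual printed geometry matches the color filters described in Section 4.B (80 μm stripes with 20 μm guard gaps).

```python
import numpy as np

def binary_fringe(obj_field, ref_field):
    """Bipolar fringe intensity Re{O R*} binarized with a zero threshold."""
    bipolar = np.real(obj_field * np.conj(ref_field))
    return (bipolar >= 0.0).astype(np.uint8)

def combine_rgb_fringes(fr, fg, fb, stripe_px=80, gap_px=20):
    """Interleave three binary fringes into periodic stripes, leaving the
    guard gaps empty; widths are in pixels (80/20 at a 1 um pitch)."""
    out = np.zeros_like(fr)
    period = 3 * (stripe_px + gap_px)
    for k, fringe in enumerate((fr, fg, fb)):
        for y0 in range(k * (stripe_px + gap_px), fringe.shape[0], period):
            out[y0:y0 + stripe_px, :] = fringe[y0:y0 + stripe_px, :]
    return out
```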

B. Optical Reconstruction of Fabricated Full-Color CGHs

An optical reconstruction of the captured object field alone is shown in Fig. 10. The three pictures were taken from different viewpoints. This CGH is composed of 32,768 × 32,768 pixels and combines the three generated binary fringes, as described in the previous section. In this case, the Fourier-transformed object fields shown in Fig. 5 were resampled as described in Section 3.A and propagated individually over 10 cm using the band-limited angular spectrum method [33]; the three fringes were then generated.
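For reference, a compact sketch of band-limited angular spectrum propagation in the spirit of [33] is given below. The band limit follows from keeping the phase of the transfer function sampled below the Nyquist rate; refinements such as zero padding are omitted.

```python
import numpy as np

def blas_propagate(field, distance, wavelength, pitch):
    """Propagate a sampled complex field by the band-limited angular
    spectrum method (simplified sketch)."""
    n = field.shape[0]                     # assume a square n x n array
    u = np.fft.fftfreq(n, d=pitch)         # spatial frequencies
    uu, vv = np.meshgrid(u, u)
    w_sq = 1.0 / wavelength**2 - uu**2 - vv**2
    h = np.exp(2j * np.pi * distance * np.sqrt(np.maximum(w_sq, 0.0)))
    h[w_sq < 0] = 0.0                      # drop evanescent components
    # Band limit: u_limit = 1 / (lambda * sqrt((2 * du * d)^2 + 1))
    du = 1.0 / (n * pitch)
    u_lim = 1.0 / (wavelength * np.sqrt((2.0 * du * distance) ** 2 + 1.0))
    h[(np.abs(uu) > u_lim) | (np.abs(vv) > u_lim)] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * h)
```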


Fig. 10. Optical reconstruction of full-color CGH for the physical object only. The pictures were taken from different angles.


Figure 11 and Visualization 1 show optical reconstructions of the CGH of the mixed 3D scene. The CGH parameters are summarized in Table 2. This CGH, named “Tea Time,” is composed of 65,536 × 65,536 pixels. The actual color of the physical object is reproduced to a certain extent. The continuous motion parallax of the physical and nonphysical objects is verified in the pictures and the movie.


Table 2. Parameters of Fabricated Full-Color CGH of Mixed 3D Scene


Fig. 11. Optical reconstruction of the full-color high-definition CGH “Tea Time.” The pictures were taken from different angles (see Visualization 1).


Note here that the stripe width and the guard gap width of the RGB color filters are 80 μm and 20 μm, respectively, and a multi-chip white light-emitting diode (LED) is used to illuminate both of the CGHs [30].

C. Discussion

During the fabrication and reconstruction of the CGHs, we did not pay much attention to the color reproduction, i.e., to the calibration of the RGB colors. This is because we used a multi-chip white LED to illuminate the CGHs and the reconstructed colors can be varied by simply adjusting the drive currents of each of the LED chips.

The brightness of each fringe captured by the image sensor depends on the powers of the light sources and on the optical system used for image capture. The image sensor also has a nonuniform spectral sensitivity. Therefore, a technique to adjust the color balance or white balance is likely to be necessary. However, we cannot change the reconstructed colors by simply multiplying the object fields by specific coefficients, because the bipolar fringe intensity is binarized using a zero threshold for printing; this hard quantization removes the information about the strength of the fringe. When a single-chip white LED is used as the illumination source, a technique to change the color brightness during fringe generation is also required.

5. CONCLUSION

Large-scale, full-color, and full-parallax CGHs were created by digitized holography. One of these CGHs was composed of more than four billion pixels and reconstructed a mixed 3D scene containing physical and nonphysical objects. The wave fields of the physical object were captured by lensless Fourier synthetic aperture DH at three wavelengths that corresponded to the three primary colors.

The three captured object fields were then incorporated into a 3D scene that included nonphysical objects produced using polygon-meshed CG models and a digital image. The occlusion of the physical object was processed successfully using a silhouette-shaped mask. Full-color reconstruction of the CGH was realized via a technique using RGB color filters. The optical reconstruction of the fabricated CGH shows reasonably accurate color reproduction along with correct motion parallax. However, a technique is likely to be required for precise color reproduction under illumination sources other than multi-chip white LEDs.

Funding

Japan Society for the Promotion of Science (JSPS) (KAKENHI 15K00512); Ministry of Education, Culture, Sports, Science and Technology (MEXT); Strategic Research Foundation at Private Universities (2013–2017).

REFERENCES

1. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54–H63 (2009).

2. H. Nishi, K. Matsushima, and S. Nakahara, “Rendering of specular surfaces in polygon-based computer-generated holograms,” Appl. Opt. 50, H245–H252 (2011).

3. K. Matsushima, H. Nishi, and S. Nakahara, “Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography,” J. Electron. Imaging 21, 023002 (2012).

4. K. Matsushima, M. Nakamura, and S. Nakahara, “Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique,” Opt. Express 22, 24450–24465 (2014).

5. H. Nishi and K. Matsushima, “Rendering of specular curved objects in polygon-based computer holography,” Appl. Opt. 56, F37–F44 (2017).

6. A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms, generated by computer,” Appl. Opt. 6, 1739–1748 (1967).

7. U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33, 179–181 (1994).

8. P. Picart, ed., New Techniques in Digital Holography (Wiley, 2015).

9. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997).

10. K. Matsushima, Y. Arima, and S. Nakahara, “Digitized holography: modern holography for 3D imaging of virtual and real objects,” Appl. Opt. 50, H278–H284 (2011).

11. D. Fujita, K. Matsushima, and S. Nakahara, “Digital resizing of reconstructed object images in digitized holography,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2013), paper DW2A.7.

12. C. Wagner, S. Seebacher, W. Osten, and W. Jüptner, “Digital recording and numerical reconstruction of lensless Fourier holograms in optical metrology,” Appl. Opt. 38, 4812–4820 (1999).

13. R. Binet, J. Colineau, and J.-C. Lehureau, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41, 4775–4782 (2002).

14. S.-C. Kim, D.-C. Hwang, D.-H. Lee, and E.-S. Kim, “Computer-generated holograms of a real three-dimensional object based on stereoscopic video images,” Appl. Opt. 45, 5669–5676 (2006).

15. S. Ding, S. Cao, Y. F. Zheng, and R. L. Ewing, “From image pair to a computer generated hologram for a real-world scene,” Appl. Opt. 55, 7583–7592 (2016).

16. H. Zheng, T. Wang, L. Dai, and Y. Yu, “Holographic imaging of full-color real-existing three-dimensional objects with computer-generated sequential kinoforms,” Chin. Opt. Lett. 9, 040901 (2011).

17. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48, H120–H136 (2009).

18. Y. Li, D. Abookasis, and J. Rosen, “Computer-generated holograms of three-dimensional realistic objects recorded without wave interference,” Appl. Opt. 40, 2864–2870 (2001).

19. D. Abookasis and J. Rosen, “Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints,” J. Opt. Soc. Am. A 20, 1537–1545 (2003).

20. N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express 15, 5754–5760 (2007).

21. T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45, 4026–4036 (2006).

22. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express 20, 21645–21655 (2012).

23. J. Kim, J.-H. Jung, C. Jang, and B. Lee, “Real-time capturing and 3D visualization method based on integral imaging,” Opt. Express 21, 18742–18753 (2013).

24. S.-K. Lee, S.-I. Hong, Y.-S. Kim, H.-G. Lim, N.-Y. Jo, and J.-H. Park, “Hologram synthesis of three-dimensional real objects using portable integral imaging camera,” Opt. Express 21, 23662–23670 (2013).

25. N. Chen, Z. Ren, and E. Y. Lam, “High-resolution Fourier hologram synthesis from photographic images through computing the light field,” Appl. Opt. 55, 1751–1756 (2016).

26. Y. Endo, K. Wakunami, T. Shimobaba, T. Kakue, D. Arai, Y. Ichihashi, K. Yamamoto, and T. Ito, “Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera,” Opt. Commun. 356, 468–471 (2015).

27. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19, 9086–9101 (2011).

28. S. Igarashi, T. Nakamura, and M. Yamaguchi, “Fast method of calculating a photorealistic hologram based on orthographic ray-wavefront conversion,” Opt. Lett. 41, 1396–1399 (2016).

29. M. Yamaguchi, K. Wakunami, and M. Inaniwa, “Computer generated hologram from full-parallax 3D image data captured by scanning vertical camera array,” Chin. Opt. Lett. 12, 060018 (2014).

30. Y. Tsuchiyama and K. Matsushima, “Full-color large-scaled computer-generated holograms using RGB color filters,” Opt. Express 25, 2016–2030 (2017).

31. K. Matsushima, Y. Tsuchiyama, N. Sonobe, S. Masuji, M. Yamaguchi, and Y. Sakamoto, “Full-color large-scaled computer-generated holograms for physical and non-physical objects,” Proc. SPIE 10233, 1023318 (2017).

32. T. Nakatsuji and K. Matsushima, “Free-viewpoint images captured using phase-shifting synthetic aperture digital holography,” Appl. Opt. 47, D136–D143 (2008).

33. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662–19673 (2009).

Supplementary Material (1)

Visualization 1: Optical reconstruction of the fabricated large-scale full-color CGH.
