Optica Publishing Group

Single-shot multiple-depth macroscopic imaging by spatial frequency multiplexing

Open Access

Abstract

We present a low-coherence interferometric imaging system designed for 3-dimensional (3-D) imaging of a macroscopic object through a narrow passage. Our system is equipped with a probe-type port composed of a bundle fiber for imaging and a separate multimode optical fiber for illumination. To eliminate the need for mechanical depth scanning, we employ a spatial frequency multiplexing method by installing a 2-D diffraction grating and an echelon in the reference arm. This configuration generates multiple reference beams with different path lengths and propagation directions, which enables the encoding of different depth information in a single interferogram. We demonstrate the acquisition of 9 depth images at intervals of 250 μm for a custom-made cone and a plaster teeth model. The proposed system minimizes the need for mechanical scanning and achieves a wide depth coverage, significantly increasing the speed of 3-D imaging of macroscopic objects.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical imaging is used in diverse fields such as biology, medicine, materials science, and industrial inspection. Thus far, most technological developments have been driven toward improving the resolution and image quality of microscopy to reveal the detailed structures of microscopic objects. Optical methods have, in turn, also been adopted as promising candidates for imaging large-scale macroscopic objects owing to their high-resolution, high-precision, and non-contact measurement capabilities. In particular, the capability of depth sectioning, which enables the identification of 3-D structures of target objects, makes optical imaging even more promising for wide-ranging applications to macroscopic objects. Examples include 3-D vision in autonomous navigation systems [1–3], 3-D identification of macroscopic objects [4], and profilometry of object surfaces [5,6]. Techniques such as structured-light illumination [7,8], Fourier-transform profilometry [9,10], laser triangulation [11,12], single-pixel detection [13], and time-of-flight [14] have been used for large-scale measurements and, thus, have often been implemented in commercial products by combining them with well-established light sources and imaging elements.

Although these approaches have the advantage of simple optical configurations, their depth resolution is limited by either the absence or the weakness of a gating mechanism for optical sectioning. For high-resolution and high-precision macroscopic imaging, many advances have been made using a variety of techniques, such as structured-light illumination [15], Fourier-transform profilometry [16], laser triangulation [17], and Fourier multiplexing [18,19]. For higher depth selectivity, methods based on interferometric measurements have become an alternative for imaging macroscopic objects. In this approach, depth information is acquired through the temporal gating provided by the intrinsic coherence length of a light source. Optical coherence tomography (OCT) has been straightforwardly extended to the imaging of large-scale objects by combining it with wide-angle scanners and has demonstrated high-resolution and high-precision profiling of large-scale objects [20–22]. Holographic imaging is another method for measuring the 3-D shapes of real-world objects in a wide-field imaging mode. With the advantage of not requiring beam scanners, digital holography has been employed for imaging the 3-D shapes of large-scale objects [23]. An infrared wave was used to image human-sized objects through flames [24], and speckle illumination through a multimode fiber was used to image a macroscopic object behind a turbid layer [25,26].

For a variety of applications, particularly for medical or industrial purposes, these methods have been implemented in compact optical configurations so that they can be brought close to the target location in a limited space. Usually, thin imaging probes with small dimensions, such as bundle fibers or graded-index lenses, are used to improve the accessibility of the system through narrow passages. Although thin imaging probes are essential for implementing such a small system, using the same probe for both illumination and detection results in strong reflection from the end surface of the probe, which easily obscures object information. The back-reflection from the probe tip becomes even more pronounced when imaging macroscopic objects through a fine imaging probe because of the low light-detection efficiency caused by the small numerical aperture (NA). Since this unwanted back-reflection is much stronger than the weak signal from an object, it is difficult to distinguish the fine details of a sample under investigation. Furthermore, to obtain 3-D information, either the sample or the imaging probe must be moved to scan the image depth. Usually, mechanical stages supporting the sample or the imaging optics are translated in the axial direction. Although systems have been developed for macroscopic imaging [20], most OCT systems are designed for acquiring depth images of small microscopic objects, and thus their achievable depth ranges are generally insufficient for 3-D imaging of macroscopic objects. The moving parts involved in mechanical scanning are, therefore, the main limitation on imaging speed for 3-D acquisition.

In this study, we demonstrate a probe-type interferometric imaging system capable of acquiring 3-D information of a macroscopic object. A bundle fiber attached to a collimation lens of a small diameter was used as an acquisition probe to access a target object through a narrow passage such as an oral cavity. To reject the surface reflection from the probe tip, an illuminating light was introduced through a separate pathway using a multimode fiber. To accelerate the acquisition speed for the 3-D imaging, an array of 3×3 reference beams was generated by a 2-D grating, and a series of optical path length differences with equal spacing was introduced to those 9 reference beams by using an echelon. By combining the sample with all the reference beams at once, multiple depth images were recorded with different spatial frequencies in a single interferogram. By separating and isolating all the acquired spatial frequency components, 9 images showing different depths were attained simultaneously without mechanical scanning. With this imaging system, multi-depth imaging and 3-D reconstructions of a simple test object, as well as a complex macroscopic object, were successfully demonstrated with minimal mechanical scanning.

2. Method and experimental setup

A schematic of our experimental setup is presented in Fig. 1. A laser diode (Thorlabs, LP637-SF70) with a center wavelength of λ₀ = 637 nm was used as the light source. The coherence length of the laser was measured to be approximately 400 μm. The laser output was split into a sample beam and a reference beam at the beam splitter (BS1). The beam in the sample arm was reflected off a 2-axis galvanometer mirror (GM, Thorlabs, GVS011) for beam steering and subsequently relayed by L1 and L2 configured as a 4-f telescope. The GM was placed in the front focal plane of L1, and a 100-cm-long multimode optical fiber (MMF, Thorlabs, M59L01) was placed at the back focal plane of L2. The MMF serves to separate the illumination path from the detection path. At the distal side of the MMF, a collimating lens (CL1, Thorlabs, F220SMA-B) was used for sample illumination. Since the GM and the MMF were positioned at conjugate planes, the launching angle of the beam into the MMF was set by the rotation angle of the GM. While traveling through the MMF, the sample beam excited multiple fiber modes depending on its launching angle; thus, steering the GM produced different speckle patterns on the sample.


Fig. 1. Schematic of the experimental setup for multiple-depth imaging. LD: 637-nm laser diode, BS1&BS2: beam splitters, GM: 2-axis galvanometer mirror, L1–L7: lenses, MMF: multi-mode fiber, CL1&CL2: collimating lenses, M1–M3: mirrors, BF: bundle fiber, OL: objective lens, TS: translation stage, G: 2-D grating, I: iris. In the sample arm, the illumination and detection pathways are separated using the MMF and the BF to reject the back-reflection noise. In the reference arm, the G generates 9 different reference beams with different propagation angles. The echelon provides each of the reference beams with an appropriate optical delay to select different imaging depths. Inset: a photo of the echelon.


The light scattered by the sample was collected by a collimating lens (CL2, Thorlabs, TC12FC-633) and delivered by a 10-cm-long bundle fiber (BF, Fujikura, FIGH-40-920G). The BF has 40,000 cores with an average core diameter of 3 μm and a core-to-core spacing of 4–5 μm. The proximal side of the BF was located at the focal plane of the objective lens (OL, Olympus, 10X, 0.25 NA), and the image obtained by the BF was delivered to the camera (Lumenera, LM135, 1392 × 1040 pixels, 4.6-μm pixel pitch) via a tube lens labeled L3 in Fig. 1. An iris (I) was placed at the Fourier plane of the OL to adjust the size of the angular spectrum of the object. The iris was set such that the effective NA in our measurements was typically fixed at 0.0032. The 10-mm field of view (FOV) in the sample plane was demagnified 17.2 times at the distal end of the BF, and the object image at the proximal end of the BF was magnified 8.3 times at the camera. The diameter of each collimating lens is about 12 mm including its housing, the outer diameter of the BF is 1.2 mm, and that of the MMF is 1.1 mm with the jacket. Therefore, the overall size of the imaging probe is about 24 mm at the distal end and 2.3 mm along the body. The maximum diameter of the probe can be reduced to 14 mm simply by using the collimators without their housings.
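As a sanity check on the sampling, the magnification chain above fixes how large one camera pixel appears at the sample plane. A minimal sketch using the numbers quoted in the text (the helper name is ours, not from the paper):

```python
def sample_plane_pixel_um(cam_pitch_um=4.6, demag_probe=17.2, mag_tube=8.3):
    """Camera pixel size referred back to the sample plane.

    The sample is demagnified 17.2x onto the bundle-fiber facet and the facet
    image is magnified 8.3x onto the camera, so one camera pixel maps to
    cam_pitch * demag / mag at the sample.
    """
    return cam_pitch_um * demag_probe / mag_tube

px = sample_plane_pixel_um()  # ~9.5 um per pixel at the sample plane
```

At roughly 9.5 μm per pixel, the ~100-μm lateral resolution quoted later is sampled well above the Nyquist limit.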

In the reference arm, the beam was first reflected off a mirror (M1) attached to a translation stage (TS) to accurately adjust the path-length difference between the two arms. The beam was then expanded by a beam expander consisting of L4 and L5 and delivered by a 4-f system composed of L6 and L7. At the front focal plane of L6, a custom-made 2-D grating (G) was placed. The 2-D grating was constructed from a pair of identical gratings (Edmund, #46-069, 80 grooves/mm) assembled orthogonally. A 2-D array of multiple diffraction orders was thereby created at the Fourier plane of L6. Since the grating distributes power nearly equally among the 0th and ±1st diffraction orders, an array of 3 × 3 beams with similar power was generated at the Fourier plane. To introduce the appropriate path-length differences among the diffraction orders of the reference beam, an echelon was placed at the Fourier plane. The echelon was made by stacking multiple slide glasses. To eliminate multiple reflections among the glass layers, we attached the glass plates using a UV-curing optical adhesive (Norland, NOA61) with a refractive index of 1.56, which is fairly close to that of the glass; thus, unwanted multiple reflections from the echelon were avoided. The glass layers have appropriate lateral shifts in the vertical and horizontal directions such that the echelon has 9 different thicknesses with equal spacing. The lateral shift was set equal to the separation of the diffraction orders in the Fourier plane so that the echelon imposes a different path-length difference on each diffraction order.
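The delay added by each echelon step can be estimated from the layer thickness: a glass layer of thickness t adds an optical path of (n − 1)t relative to air. A small sketch, assuming a nominal slide-glass index of n ≈ 1.5 (an assumption on our part; the paper quotes only the adhesive's index, 1.56):

```python
def echelon_layer_thickness(opd_um, n_glass=1.5):
    """Glass thickness whose single-pass optical path exceeds air by opd_um.

    The excess optical path through a layer of thickness t is (n_glass - 1) * t,
    so t = opd / (n_glass - 1). n_glass = 1.5 is an assumed nominal index.
    """
    return opd_um / (n_glass - 1.0)

step_t = echelon_layer_thickness(250.0)   # ~500 um of glass per 250-um delay step
delays = [k * 250.0 for k in range(9)]    # delays of the 3 x 3 reference beams
span = delays[-1] - delays[0]             # 2000 um spanned by the 9 discrete depths
```

Nine beams at 250-μm spacing span 2 mm of delay, consistent with the ~2.2-mm single-shot depth range reported in Section 3.2.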

Since the imaging system was configured in reflection geometry, the axial resolution was expected to be ∼200 μm, approximately half the coherence length of the light source. However, the actual axial resolution was measured to be approximately 240 μm. This discrepancy can be attributed to the dispersion caused by the MMF, the BF, and other optical elements in the beam paths. To make the depth imaging more efficient, the thickness of the glass layers was chosen such that the echelon introduced a path-length difference of approximately 250 μm between neighboring beams. The 3 × 3 reference beams with different path lengths were combined with the sample beam at BS2 in the off-axis configuration and formed a single interferogram at the camera. Since the reference beams arrive at different angles with respect to the sample beam at the camera, each reference beam created an interferogram with a different spatial frequency. Thus, the information from 9 different depths was separated in the Fourier space of the interferogram.

The interferogram was processed into a complex-field image using the standard method based on the Hilbert transform. Since the illumination delivered through the MMF formed irregular speckles, the retrieved images also contained speckle patterns. To suppress the speckle in the final image, multiple object images were acquired at different GM angles. Typically, 200 complex-field images were acquired, and their intensity images were accumulated; the irregular speckle patterns were thereby averaged out while the object structure remained.
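The demultiplexing described above, Fourier transform the interferogram, isolate one sideband, shift it to the origin, and inverse transform, can be sketched as follows. This is a generic off-axis holography routine under our own naming, not the authors' code:

```python
import numpy as np

def demultiplex_depth(interferogram, center, radius):
    """Recover one complex-field depth image from an off-axis interferogram.

    center: (row, col) of one sideband carrier in the centered 2-D FFT;
    radius: pixel radius of the circular crop, playing the role of the iris.
    """
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    rows, cols = np.indices(F.shape)
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    # Shift the selected sideband to the spectrum origin to remove the carrier.
    shifted = np.roll(F * mask,
                      (F.shape[0] // 2 - center[0], F.shape[1] // 2 - center[1]),
                      axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(shifted))

# Usage on a synthetic interferogram: a constant object field of amplitude 0.5
# interfered with a tilted reference beam (carrier offset of +30 pixels).
N = 128
X = np.tile(np.arange(N), (N, 1))
interf = np.abs(0.5 + np.exp(1j * 2 * np.pi * 30 * X / N)) ** 2
field = demultiplex_depth(interf, (N // 2, N // 2 + 30), radius=10)
```

For the 3 × 3 multiplexed interferogram, the same routine would simply be applied at each of the 9 sideband centers, and speckle suppressed by averaging the intensities |field|² over acquisitions at different GM angles, as in the text.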

3. Results

3.1 Imaging of a flat sample with a tilt

We performed an experiment with a flat sample to verify the depth-resolving ability of our imaging system. A flat sheet of paper was placed on the sample plane with its plane tilted by 20 degrees with respect to the vertical plane (Fig. 2(a)). A raw interference image at the camera is shown in Fig. 2(b). The single interferogram includes information on 9 different depths encoded in interference fringes with 9 different spatial frequencies. As an example, the regions denoted by the red and blue boxes in Fig. 2(b) carry different depth information due to the tilt of the sheet of paper. Since the sample wave scattered at each depth can interfere only with the reference beam having the same optical path length, the corresponding fringe pattern has a different direction, as shown in Fig. 2(c), where the direction of the interference fringes is indicated by the yellow arrows. Due to the different fringe directions, the depth information could be separated into different spatial frequency domains, as denoted by the red and blue circles in Fig. 2(d). Therefore, by separately applying the Hilbert transform to each of the object spectra, multiple-depth images could be obtained at once. Figures 2(e) and 2(f) are intensity images of the sheet of paper obtained with the spectra indicated by the red and blue circles in Fig. 2(d), respectively. The imaging FOV is 10 × 10 mm² with a spatial resolution of about 100 μm, determined by a separate measurement. In the current configuration, the 9 angular spectra are all shifted along the same direction in the spatial frequency domain, as shown in Fig. 2(d). Since empty space remains along the other directions, the spectra can be packed more densely by utilizing this vacant space, and either the FOV or the resolution can be further enhanced by such optimization [27,28].


Fig. 2. Multi-depth imaging for a sheet of paper tilted 20 degrees. (a) Sample configuration. Illumination (red) and detection (gray) paths are separate. (b) Interferogram recorded by a camera. The areas denoted by the red and blue squares have different depth information. (c) Zoomed-in images for the red and blue squares in (b). (d) Intensity map of the Fourier transform of the interferogram in (b). Each of the 9 circular areas has different depth information. (e) Intensity image of the paper retrieved using the spectrum denoted by the red circle in (d). (f) The same as (e), but using the spectrum denoted by the blue circle in (d). (g) Line profile along the yellow arrow in (e). Scale bar: 2 mm in (b) and (e), color bars: log-scaled amplitude in arbitrary units in (d) and normalized intensity in (b), (e), and (f).


The line profile obtained along the yellow arrow in Fig. 2(e) is shown by the blue line in Fig. 2(g), and the dashed red line indicates its Gaussian fit. The full width at half maximum (FWHM) of the line profile was measured as 780 ± 110 μm. Since the tilt angle of the sheet of paper was 20 degrees, the axial resolution was calculated as Δz = 780 × tan 20° = 284 μm. This is slightly larger than the system's axial resolution of 240 μm mentioned above. This discrepancy is thought to be caused by multiple light scattering inside the paper layer.
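The conversion from the measured lateral FWHM to an axial resolution uses only the tilt geometry, Δz = FWHM × tan θ. A one-line check with the paper's numbers (the function name is ours):

```python
import math

def axial_resolution_from_tilt(fwhm_um, tilt_deg):
    """Axial resolution inferred from the lateral FWHM of a line profile
    across a plane tilted by tilt_deg: dz = FWHM * tan(tilt)."""
    return fwhm_um * math.tan(math.radians(tilt_deg))

dz = axial_resolution_from_tilt(780.0, 20.0)  # ~284 um, matching the text
```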

3.2 3-D imaging of a cone-shaped sample

In this section, we demonstrate 3-D imaging of a sample with a simple, well-known shape. A cone-shaped test object was fabricated from a silicone substrate using a 3-D printer, as shown in Fig. 3(a). The flat base of the cone is 10 mm in diameter, and its height is 5 mm. The test cone was loaded on the sample plane with the base orthogonal to the optical axis so that its vertex faced the system. The top-view image captured with LED illumination is shown in Fig. 3(b). An interferogram generated by the setup was Fourier transformed, and each object spectrum was isolated with a circular mask, as shown in Fig. 3(c). By applying the Hilbert transform to each spectrum, complex-field images at 9 different depths were retrieved. Figure 3(d) shows the depth images taken with a single-shot recording. As seen in the profiles, the images have irregular speckle patterns due to the illumination through the MMF. To suppress the speckle noise remaining in the images, we averaged 200 intensity distributions measured at various GM angles. The final images are presented in depth order in Fig. 3(e). Since the sample has a cone shape, the image obtained at each depth shows a circular contour, whose radius increases as the imaging depth moves from the vertex to the base of the cone. The total imaging depth range that could be covered in a single measurement was 2.2 mm, set by the optical path-length differences of the echelon. Since the height of the cone was 5 mm, we moved the TS in the reference arm by 2.25 mm and measured an additional set of depth images to cover the full range.


Fig. 3. 3-D reconstruction of a cone-shaped object. (a) A conical sample made from a silicone substrate by 3-D printing. (b) Top-view image of the cone taken with LED illumination. (c) Object spectra containing multiple depth information recorded in a single interferogram. (d) Intensity profiles of the depth images obtained from the spectra in (c) taken with a single shot. (e) The same as (d) but with speckle averaging. (f) A 3-D reconstruction of the cone using the depth images in (d). (g) The same as (f) but using the depth images in (e). (h) The radii of the circular contours in (e) as a function of the imaging depth. Scale bar: 2 mm.


From the coherence length of the system and the slope of the conical surface, the theoretical contour thickness can be determined. Given that the axial resolution of the setup was 240 μm and the slope of the cone was 45°, the contour thickness was estimated as 240 μm. In the depth images in Fig. 3(e), the contour thickness was measured as 288 ± 90 μm, which agrees with the expectation within the error range. Figures 3(f) and 3(g) show the 3-D shapes reconstructed using the experimental data in Figs. 3(d) and 3(e), respectively. For the 3-D reconstruction, point-cloud data were extracted from each depth image, and a surface mesh was created with the open-source software MeshLab. In Fig. 3(h), we plotted the radii of the circular contours as a function of the imaging depth. The experimental result shows good agreement with the expectation. The small deviation from the linear trend may have been caused by the diffusion of light into the silicone substrate.
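The point-cloud extraction step can be sketched as follows: each depth-gated intensity image is thresholded, and the bright pixels are emitted as 3-D points at that image's depth. This is a generic sketch (the threshold value and function name are our assumptions, not from the paper); the resulting cloud can then be meshed, e.g., in MeshLab:

```python
import numpy as np

def depth_stack_to_point_cloud(stack, z_spacing_um, pixel_um, threshold=0.5):
    """Turn a stack of depth-gated intensity images into an (N, 3) point cloud.

    stack: array of shape (n_depths, H, W), intensities normalized to [0, 1].
    Pixels brighter than `threshold` at depth index k become points at
    z = k * z_spacing_um; x and y are scaled by the sample-plane pixel size.
    """
    points = []
    for k, img in enumerate(stack):
        ys, xs = np.nonzero(img > threshold)
        zs = np.full(xs.shape, k * z_spacing_um)
        points.append(np.column_stack((xs * pixel_um, ys * pixel_um, zs)))
    return np.concatenate(points, axis=0)
```

For the cone, the z-spacing would be the 250-μm echelon step, and the x–y scale the effective sample-plane pixel size.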

3.3 3-D reconstruction of a complex object with surface morphology

Finally, we demonstrate 3-D imaging and reconstruction of an object with a more complex shape and morphology: a real-scale plaster teeth model. Figure 4(a) shows an image of a target tooth under LED illumination. The intensity depth images from the top to the bottom of the tooth are shown in Figs. 4(c)–4(k). In each depth image, the intensity is concentrated along the edge of the object, similar to a contour line, because the backscattering occurs mostly near the surface of the plaster model. Since the depth range that can interfere with the reference beam depends on the local slope of the object, the thickness of the contour varies with the object's morphology: the thinner the contour, the steeper the object's surface. In Figs. 4(c)–4(e), the contours are thick because the imaging depth lay near the top of the tooth, whereas the contours in Figs. 4(i)–4(k) become thinner as the imaging depth moves into the middle of the tooth. We reconstructed the 3-D shape of the tooth, shown in Fig. 4(b), using the same reconstruction algorithm used for the cone-shaped object. The morphological shape of the 3-D reconstruction agreed well with the target tooth. The reconstructed image contained granular structures mainly due to the pixelation of the bundle fiber; there is room to improve the 3-D reconstruction algorithm beyond what is achievable with the open-source program, which would potentially attenuate such artifacts.


Fig. 4. 3-D reconstruction of the plaster teeth model. (a) Image of the target tooth recorded with LED illumination. (b) 3-D reconstruction of the target tooth. (c)–(k) Nine intensity images of the target tooth taken by the setup. Scale bar: 2 mm, color bar: normalized intensity.


4. Discussion and conclusion

Recently, different types of multiplexing techniques have been demonstrated. In Ref. [29], a reference beam was temporally dispersed through an MMF, and each speckle pattern within a specific time segment encoded the information at the corresponding depth of a sample. The method demonstrated single-shot multi-depth profiling, using pre-recorded speckle patterns for all the time segments, with an axial resolution of about 13 μm and a depth range of more than 10 mm. It offers a higher axial resolution and a longer working range than our proposed method. However, to obtain the depth profiles, it requires the pre-recorded speckle patterns as prior knowledge. Since the speckles are sensitive to changes in the fiber shape, the pre-calibration is difficult to maintain. Moreover, this method requires lateral scanning of the sample to obtain 2-D depth images because of its point-detection configuration. The need for object scanning reduces the imaging speed, especially for macroscopic objects, and thus it is not straightforward to apply this method to macroscopic imaging.

In Ref. [30], multi-focus microscopy was demonstrated based on space multiplexing. A single FOV was duplicated into multiple copies with different path lengths by a stack of multiple prisms and beam splitters, so that each image has a different focus on the same FOV. The blurry images with foci far from the center focus were corrected by a 3-D deconvolution algorithm. Owing to its simple optical arrangement, this method can be integrated with various existing imaging modalities. However, the heavy computational load of the 3-D deconvolution limits the refresh rate of the processed images to the order of 0.1 Hz. In addition, replacing the beam-splitter stack to change the depth spacing would cost far more than changing the echelon in our method. In Ref. [31], high-speed 3-D imaging was demonstrated using a side-viewing configuration with dual cameras. The method demonstrated a wide FOV of up to 180 mm × 130 mm and a depth range of 40 mm, and an imaging speed of up to 1 kfps was achieved using high-speed cameras. However, due to the pattern-projection scheme used for illumination, images were acquired with a 4-step measurement.

Lately, a spatial frequency multiplexing technique similar to ours has been used to improve image quality. Six images multiplexed at once were synthesized to improve the spatial resolution [27] as well as to enhance the depth selectivity [28], but depth imaging was outside the scope of those works. In Ref. [32], four depth images were taken simultaneously with a lateral FOV of about 1 mm² and an axial resolution of 25 μm. In comparison with the present work, the number of multiplexed images was smaller, and thus the depth range was limited.

In this study, we have demonstrated a probe-type interferometric imaging system that can determine the 3-D morphology of macroscopic objects. Our system is designed to access an object located in a narrow space, such as an oral cavity, by employing a thin bundle fiber attached to a small collimation lens as an imaging probe. By routing the illumination through a separate multimode fiber, the back-reflection from the probe tip is physically rejected. We also used off-axis holography to obtain the depth information of the volumetric sample. By combining a pair of gratings with a custom-made echelon, 9 different reference beams were generated, each with a different optical path length and propagation angle. All the reference beams were combined with the sample beam reflected from a volumetric surface to generate a single interferogram, each beam contributing its own spatial frequency. Thus, depth images from 9 different locations could be acquired simultaneously in a single interferogram. Because of the discrete delays introduced into the reference beam path by the echelon, the axial measurement was also discrete. In order not to lose information that falls between adjacent echelon layers, the echelon delay was set to match the depth selectivity given by the coherence length of the light source. This enabled our system to capture the depth profiles within the imaging range with minimal loss of information. The accuracy of our measurements and the 3-D reconstruction algorithm were verified using a simple test object with a conical shape. As a potential practical application, we also presented multi-depth imaging and 3-D mapping of a teeth model. By employing spatial frequency multiplexing, 9 depth images are acquired at once. With the current configuration, the depth range achieved by a single acquisition of 9 images is limited to 2.25 mm. However, the number of multiplexed images can be increased, and the covered depth range can be extended, with a proper design of the echelon. In the present study, we employed a single shift of the reference mirror to generate another set of 9 depth images, so that time-consuming mechanical scanning can be minimized while acquiring volumetric information over a longer range. Although the system employs GM scanning, which involves the mechanical rotation of mirrors, it is not essential for image acquisition; rather, it can be considered optional for better image quality, since our method works with a single-shot acquisition without GM scanning. The other mechanical motion, translation of the reference mirror, is required only when the imaging range needs to extend beyond 2.25 mm. Overall, the requirements for mechanical scanning are minimized by the spatial multiplexing of multiple depths. Considering this performance, one possible application of our method is an oral scanner that can obtain the 3-D morphology of a single tooth. Together with the single-shot capability, which will allow our method to avoid patients' motion artifacts during interference imaging, the 10-mm-scale FOV with a resolution of about 100 μm is well suited for acquiring the whole 3-D shape of a tooth. The relatively shallow imaging range achievable in a single shot can be extended by increasing the delay of the echelon. The depth within the layer-to-layer spacing can be determined from the interference visibility [33], which can potentially extend the depth range with no loss of information. In addition, the acquisition time can be further reduced simply by using a high-frame-rate camera, after which real-time 3-D imaging for practical use will be possible.

Funding

Institute for Basic Science (IBS-R023-D1); National Research Foundation of Korea (2021R1A2C2012069); Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korea government (MSIT-2020-0-00864, Development of Hologram-based Deformation/Defect Detection Technology for Nondestructive Products, 30%)

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Sooyeong and N. Ahuja, “An Omnidirectional Stereo Vision System Using a Single Camera,” in 18th International Conference on Pattern Recognition (ICPR'06, 2006), pp. 861–865.

2. D. Lee, G. Kim, D. Kim, H. Myung, and H.-T. Choi, “Vision-based object detection and tracking for autonomous navigation of underwater robots,” Ocean Eng. 48, 59–68 (2012). [CrossRef]  

3. S. Mattoccia, P. Macrí, G. Parmigiani, and G. Rizza, “A compact, lightweight and energy efficient system for autonomous navigation based on 3D vision,” in 2014 IEEE/ASME 10th International Conference on Mechatronic and Embedded Systems and Applications (MESA, 2014), pp. 1–6.

4. J. Busck, “Underwater 3-D optical imaging with a gated viewing laser radar,” Opt. Eng. 44(11), 116001 (2005). [CrossRef]  

5. S. Thibault and E. F. Borra, “Telecentric three-dimensional sensor with a liquid mirror for large-object inspection,” Appl. Opt. 38(28), 5962–5967 (1999). [CrossRef]  

6. R. K. Ula, Y. Noguchi, and K. Iiyama, “Three-Dimensional Object Profiling Using Highly Accurate FMCW Optical Ranging System,” J. Lightwave Technol. 37(15), 3826–3833 (2019). [CrossRef]  

7. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

8. X.-Y. Su, W.-S. Zhou, G. von Bally, and D. Vukicevic, “Automated phase-measuring profilometry using defocused projection of a Ronchi grating,” Opt. Commun. 94(6), 561–573 (1992). [CrossRef]  

9. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]  

10. J. Li, X. Su, and L. Guo, “Improved Fourier transform profilometry for the automatic measurement of three-dimensional object shapes,” Opt. Eng. 29(12), 1439 (1990). [CrossRef]  

11. J. L. Posdamer and M. D. Altschuler, “Surface measurement by space-encoded projected beam systems,” Computer Graphics and Image Processing 18(1), 1–17 (1982). [CrossRef]  

12. M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk, “The digital Michelangelo project: 3D scanning of large statues,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH, 2000), pp. 131–144.

13. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

14. M.-C. Amann, T. Bosch, M. Lescure, R. Myllylae, and M. Rioux, “Laser ranging: a critical review of unusual techniques for distance measurement,” Opt. Eng. 40, 10–19 (2001). [CrossRef]  

15. A. Forbes, M. de Oliveira, and M. R. Dennis, “Structured light,” Nat. Photonics 15(4), 253–262 (2021). [CrossRef]  

16. C. Zuo, T. Y. Tao, S. J. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier Transform Profilometry (mu FTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018). [CrossRef]  

17. J. Y. Liang, P. Wang, L. R. Zhu, and L. H. V. Wang, “Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution,” Nat. Commun. 11, 5252 (2020). [CrossRef]  

18. Q. Y. Yue, Z. J. Cheng, L. Han, Y. Yang, and C. S. Guo, “One-shot time-resolved holographic polarization microscopy for imaging laserinduced ultrafast phenomena,” Opt. Express 25(13), 14182–14191 (2017). [CrossRef]  

19. A. Ehn, J. Bood, Z. M. Li, E. Berrocal, M. Alden, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci Appl 6, e17045 (2017). [CrossRef]  

20. Z. Wang, B. Potsaid, L. Chen, C. Doerr, H.-C. Lee, T. Nielson, V. Jayaraman, A. E. Cable, E. Swanson, and J. G. Fujimoto, “Cubic meter volume optical coherence tomography,” Optica 3(12), 1496–1503 (2016). [CrossRef]  

21. S. Song, J. Xu, and R. K. Wang, “Long-range and wide field of view optical coherence tomography for in vivo 3D imaging of large volume object based on akinetic programmable swept source,” Biomed. Opt. Express 7(11), 4734–4748 (2016). [CrossRef]  

22. T. Callewaert, J. Guo, G. Harteveld, A. Vandivere, E. Eisemann, J. Dik, and J. Kalkman, “Multi-scale optical coherence tomography imaging and visualization of Vermeer's Girl with a Pearl Earring,” Opt. Express 28(18), 26239–26256 (2020). [CrossRef]  

23. C. P. McElhinney, B. M. Hennelly, and T. J. Naughton, “Extended focused imaging for digital holograms of macroscopic three-dimensional objects,” Appl. Opt. 47(19), D71–D79 (2008). [CrossRef]  

24. M. Locatelli, E. Pugliese, M. Paturzo, V. Bianco, A. Finizio, A. Pelagotti, P. Poggi, L. Miccio, R. Meucci, and P. Ferraro, “Imaging live humans through smoke and flames using far-infrared digital holography,” Opt. Express 21(5), 5379–5390 (2013). [CrossRef]  

25. S. Woo, S. Kang, C. Yoon, H. Ko, and W. Choi, “Depth-selective imaging of macroscopic objects hidden behind a scattering layer using low-coherence and wide-field interferometry,” Opt. Commun. 372, 210–214 (2016). [CrossRef]  

26. S. Woo, M. Kang, C. Yoon, T. D. Yang, Y. Choi, and W. Choi, “Three-dimensional imaging of macroscopic objects hidden behind scattering media using time-gated aperture synthesis,” Opt. Express 25(26), 32722–32731 (2017). [CrossRef]  

27. S. K. Mirsky and N. T. Shaked, “First experimental realization of six-pack holography and its application to dynamic synthetic aperture superresolution,” Opt. Express 27(19), 26708–26720 (2019). [CrossRef]  

28. S. K. Mirsky and N. T. Shaked, “Six-pack holographic imaging for dynamic rejection of out-of-focus objects,” Opt. Express 29(2), 632–646 (2021). [CrossRef]  

29. S. Y. Lee, P. C. Hui, B. Bouma, and M. Villiger, “Single-shot depth profiling by spatio-temporal encoding with a multimode fiber,” Opt. Express 28(2), 1124–1138 (2020). [CrossRef]  

30. S. Xiao, H. Gritton, H. A. Tseng, D. Zemel, X. Han, and J. Mertz, “High-contrast multifocus microscopy with a single camera and z-splitter prism,” Optica 7(11), 1477–1486 (2020). [CrossRef]  

31. C. Jiang, P. Kilcullen, Y. M. Lai, T. Ozaki, and J. Y. Liang, “High-speed dual-view band-limited illumination profilometry using temporally interlaced acquisition,” Photonics Res. 8(11), 1808–1817 (2020). [CrossRef]  

32. L. Wolbromsky, N. A. Turko, and N. T. Shaked, “Single-exposure full-field multi-depth imaging using low-coherence holographic multiplexing,” Opt. Lett. 43(9), 2046–2049 (2018). [CrossRef]  

33. J. Xu, R. Cao, M. Cua, and C. Yang, “Single-shot surface 3D imaging by optical coherence factor,” Opt. Lett. 45(7), 1734–1737 (2020). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not available at this time but may be obtained from the authors upon reasonable request.


Figures

Fig. 1. Schematic of the experimental setup for multiple-depth imaging. LD: 637-nm laser diode, BS1&BS2: beam splitters, GM: 2-axis galvanometer mirror, L1–L7: lenses, MMF: multi-mode fiber, CL1&CL2: collimating lenses, M1–M3: mirrors, BF: bundle fiber, OL: objective lens, TS: translation stage, G: 2-D grating, I: iris. In the sample arm, the illumination and detection pathways are separated using the MMF and the BF to reject the back-reflection noise. In the reference arm, the G generates 9 different reference beams with different propagation angles. The echelon provides each of the reference beams with an appropriate optical delay to select different imaging depths. Inset: a photo of the echelon.
Fig. 2. Multi-depth imaging for a sheet of paper tilted 20 degrees. (a) Sample configuration. Illumination (red) and detection (gray) paths are separate. (b) Interferogram recorded by a camera. The areas denoted by the red and blue squares have different depth information. (c) Zoomed-in images for the red and blue squares in (b). (d) Intensity map of the Fourier transform of the interferogram in (b). Each of the 9 circular areas has different depth information. (e) Intensity image of the paper retrieved using the spectrum denoted by the red circle in (d). (f) The same as (e), but using the spectrum denoted by the blue circle in (d). (g) Line profile along the yellow arrow in (e). Scale bar: 2 mm in (b) and (e), color bars: log-scaled amplitude in arbitrary units in (d) and normalized intensity in (b), (e), and (f).
Fig. 3. 3-D reconstruction of a cone-shaped object. (a) A conical sample made from a silicon substrate by 3-D printing. (b) Top view image of the cone taken with LED illumination. (c) Object spectra containing multiple depth information recorded in a single interferogram. (d) Intensity profiles of the depth images obtained from the spectra in (c) taken with a single-shot. (e) The same as (d) but with speckle averaging. (f) A 3-D reconstruction of the cone using the depth images in (d). (g) The same as (f) but using the depth images in (e). (h) The radii of the circular contours in (e) as a function of the imaging depth. Scale bar: 2 mm.
Fig. 4. 3-D reconstruction of the plaster teeth model. (a) Image of the target tooth recorded with LED illumination. (b) 3-D reconstruction of the target tooth. (c)–(k) Nine intensity images of the target tooth taken by the setup. Scale bar: 2 mm, color bar: normalized intensity.