
Large DOF microscopic fringe projection profilometry with a coaxial light-field structure

Abstract

Fringe projection profilometry (FPP) has been widely researched for three-dimensional (3D) microscopic measurement during recent decades. Nevertheless, some disadvantages arising from the limited depth of field (DOF) and occlusion still exist and need to be further addressed. In this paper, light field imaging is introduced into microscopic fringe projection profilometry (MFPP) to obtain a larger depth of field. Meanwhile, the system is built with a coaxial structure to reduce occlusion, where the principle of triangulation is no longer applicable. In this situation, the depth information is estimated from the epipolar plane image (EPI) of the light field. To make quantitative measurements, a metric calibration method that establishes the mapping between the slope of the line feature in the EPI and the depth information is proposed for this system. Finally, a group of experiments demonstrates that the proposed LF-MFPP system works well for depth estimation with a large DOF and reduced occlusion.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP), with the advantages of low cost, full-field acquisition and high point density of the obtained surface topography, has been widely employed in fields such as industrial inspection, reverse engineering, cultural heritage assessment and entertainment [1,2]. In recent decades, industrial design and manufacturing have become more precise and miniaturized, which has driven the development of FPP techniques for microscopic shape measurement of small objects at the centimeter or millimeter scale.

Compared with traditional FPP, microscopic fringe projection profilometry (MFPP) has a much smaller field of view (FOV) and a shorter depth of field (DOF) [3]. Up to now, MFPP has normally been built within two different frameworks to reduce the FOV of projection and imaging. One is based on the modification of stereo microscopy [4–11], and the other employs long working distance (LWD) lenses [12–23]. The development of miniaturized digital projectors and optical design has made it relatively simple to build compact MFPP systems; as a result, research on MFPP systems has increased significantly in the last ten years. Most of these systems employ telecentric lenses, whose DOF is better than that of other LWD lenses, and focus on calibration methods to ensure good measurement accuracy [14–21]. However, the DOF is still not sufficient for large measurement volumes, and work on extending the DOF of MFPP systems remains lacking. Besides that, based on the principle of triangulation, the projection and imaging branches in MFPP normally form a certain angle. Therefore, any occlusion in the FOV, either for the camera or the projector, leads to a corresponding shadow area lacking meaningful data. This further reduces the effective information that can be obtained by the system, especially for objects with sharp surface variations such as grooves, steps and deep holes.

To extend the measurement volume of an MFPP system, one approach is to improve its tolerance of image blurring, which can be achieved by modifying the projected fringe patterns, for instance, employing binary fringe patterns [24,25] or fringe patterns of different frequencies [26]. Another approach is to extend the DOF through hardware changes, such as moving the camera with a precisely controlled translation platform [27], inserting a coded aperture into the imaging lens [28] or introducing an electrically tunable lens [29,30]. In our previous work, a multi-view MFPP system was built in which the Scheimpflug condition was applied to the imaging branches to obtain a larger common focus area of projection and imaging; multiple cameras captured light rays reflected from objects in different directions, and the resulting redundant information was used to extend the DOF of the system and reduce occlusion [31]. Following this idea, light field imaging (LFI) is introduced into our MFPP system (LF-MFPP), which can record not only the spatial information but also the angular information of light rays. With this characteristic, LFI has the capability to realize multi-view imaging, digital refocusing, synthetic aperture imaging, DOF extension, etc. In the normal FOV, the combination of FPP with LFI has been researched in recent years: LFI can help FPP systems with phase unwrapping [32,33] and multi-view 3D measurement [34,35], and FPP can assist LFI with precise calibration and metric depth estimation [36–40]. In our LF-MFPP system, LFI is employed to extend the DOF of the imaging branch. Meanwhile, the binary fringe projection technique [25] is used to enlarge the effective depth detection range of the projection branch.

As described above, the angle between the imaging and projection branches increases occlusion for the MFPP system. Besides that, this angle also reduces the common focus area of imaging and projection, especially when their DOF is limited [31]. Therefore, the LF-MFPP system in this paper is built with a coaxial structure to reduce occlusion and to make the focus areas of projection and imaging overlap. Nevertheless, FPP techniques based on the principle of triangulation are almost inapplicable in this situation, where drastic errors are amplified near the optical axis of the system [41]. Consequently, most coaxial methods obtain the depth information by analyzing the defocus degree of the projected fringes [42–46]. These intensity-based methods share a problem: one defocus degree corresponds to two depth locations, one before and one after the focusing plane, which cannot be distinguished by the defocus cue alone. This means the measurement range of the system is reduced by half, which is unacceptable when the DOF of the system is limited.

In the LF-MFPP system, as LFI can record not only the intensity but also the direction of light rays, we reformulate the problem of detecting the defocus degree as a line feature extraction problem in the epipolar plane image (EPI), which is a two-dimensional (2D) spatial-angular slice of the light field. Depth estimation based on the EPI is one of the most studied approaches in light field depth estimation, and it is known that the slope of a line feature in the EPI corresponds to the depth of an object point. Reported EPI-based depth estimation methods generally use the texture information of the object to extract line features [47–51]. Therefore, these methods are less effective in cases of weak texture, repeated texture and noise, making the reconstruction results sparse and noisy. In the LF-MFPP system, the FPP technique provides phase information for every pixel in the EPI [40], with which the line features can be extracted by simply searching for corresponding points with the same phase value, instead of using complicated image processing methods as before. Meanwhile, such methods generally perform depth estimation in image space, and only a few works have transformed the depth information from image space to object space through calibration [52]. In this paper, a simple metric calibration method for the LF-MFPP system is proposed to obtain depth information. Finally, a group of experiments is described to prove that the proposed LF-MFPP system works well for depth estimation with a large DOF and reduced occlusion.

2. System configuration and principle

2.1 System configuration

Figure 1 shows a schematic diagram of the proposed LF-MFPP system. A projector and a light field camera (LFC) are placed in a coaxial structure by using a beam splitter. Thus, they have almost the same angle of view, which reduces occlusion and maximizes the common focus area. In this situation, the baseline between the projector and the camera is extremely short, and approximately undeformed fringe patterns are observed by the camera, even though they have been modulated by the shape of the objects. Therefore, traditional FPP techniques based on binocular stereo vision or phase-height mapping are no longer applicable. In our LF-MFPP system, an unfocused LFC is employed: the locations of the micro lenses correspond to the spatial information of the incident light rays, and the pixels under each micro lens collect their angular information. As shown in Fig. 1, the projector projects phase-shifting fringe patterns to provide phase encoding for the objects, so the light rays reflected from the same object point are encoded with the same phase value. After the LFC captures these light rays, one phase value encodes a group of pixels. The distribution of these corresponding pixels on the sensor plane contains the depth information of the object point. This distribution is easier to analyze and quantify in the EPI of the light field, where the corresponding pixels are intuitively distributed along straight lines. The slope of such a line is related to the depth of the object point, and their relationship is analyzed in the next subsection.

Fig. 1. A schematic diagram of the LF-MFPP system.

2.2 Depth estimation based on EPI

Figure 2 shows the relationship between the depth of an object point and the slope of its line feature in the EPI. In Fig. 2(a), the light field imaging process of an object point is illustrated. P is an object point at distance ${z_c}$ from the main lens, and light rays with different directions from P are captured by the camera. As P is not on the focusing plane, these light rays do not intersect at one point on the imaging plane. By placing a micro lens array (MLA) at the original imaging plane, these light rays can be recorded separately by the LFC instead of merging into a blurred spot. To represent these light rays, the main lens plane and the MLA plane are set as the angular plane $({u,v} )$ and the spatial plane $({s,t} )$, respectively. Without loss of generality, Fig. 2(a) shows a 2D spatial-angular slice of the light field obtained by fixing an angular coordinate $v$ and a spatial coordinate $t$. Here, each light ray can be represented as a point with coordinates $({u,s} )$ in the EPI, as shown in Fig. 2(b). Therefore, all of the imaging light rays between the main lens and the MLA in Fig. 2(a) can be described by the line marked in green in Fig. 2(b), and it is easy to observe that the slope of this line is related to the defocus degree of the point P.
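As a concrete illustration of this slicing, the following minimal Python sketch shows how an EPI could be cut from a 4D light field array indexed as L[u, v, s, t]; the array sizes and the fixed coordinates are placeholders for illustration, not values prescribed by the paper.

```python
import numpy as np

# Placeholder 4D light field L[u, v, s, t]; the sizes mirror the prototype's
# Lytro Illum sampling (15 x 15 angular, 434 x 625 spatial), but the data here
# is only a dummy array for illustration.
L = np.zeros((15, 15, 434, 625))

v_fixed, t_fixed = 7, 300          # fix one angular and one spatial coordinate (arbitrary choices)
epi = L[:, v_fixed, :, t_fixed]    # 2D spatial-angular slice: rows indexed by u, columns by s
```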

Fig. 2. A schematic diagram of light field depth estimation based on EPI, (a) a schematic diagram of the light field imaging process, (b) the EPI corresponding to Fig. 2(a).

Since the resolution of our LF-MFPP system is not high enough for diffraction to be considered, we use geometric optics to analyze the relationship between the depth of the object point and the slope of the line in the EPI. As shown in Fig. 2(a), the light rays from point P intersect the focusing plane between the points ${x_1}$ and ${x_2}$, and the following relationship can be derived from similar triangles.

$$\frac{{\Delta x}}{d} = \frac{{{z_c} - {l_1}}}{{{z_c}}}$$
where $\Delta x$ is the distance between the points ${x_1}$ and ${x_2}$, $d$ is the aperture of the main lens, and ${l_1}$ is the distance between the focusing plane and the main lens plane.

As shown in Fig. 2(a), considering the main lens as an ideal thin lens, ${s_1}$ and ${s_2}$ are the images of the points ${x_1}$ and ${x_2}$, respectively. The distance between ${s_1}$ and ${s_2}$ is defined as $\Delta s$; the relationship between $\Delta x$ and $\Delta s$ is illustrated with red lines in Fig. 2(a) and satisfies the transverse magnification of the main lens, as shown in Eq. (2).

$$\frac{{\Delta x}}{{\Delta s}} = \frac{{{l_1}}}{{{l_2}}}$$
where ${l_2}$ is the distance between the main lens plane and the MLA plane. Combining Eq. (1) and Eq. (2), the relationship between ${z_c}$ and $\Delta s$ is expressed as Eq. (3).
$${z_c} = \frac{{d{l_1}{l_2}}}{{d{l_2} - {l_1}\Delta s}}$$
On the sensor plane of the LFC, ${N_u}$ and ${N_v}$ are the numbers of pixels behind each micro lens in the horizontal and vertical directions, respectively (normally, ${N_u} = {N_v}$). The indexes of these ${N_u} \times {N_v}$ pixels represent the angular coordinate, and the spatial coordinate of these pixels corresponds to the index of the micro lens to which they belong. An EPI can be extracted from the sensor plane by fixing one angular coordinate and one spatial coordinate, and the slope of the line feature in the EPI corresponding to the point P can be expressed as Eq. (4).
$$k = \frac{{{N_u}}}{{\Delta {s_m}}}$$
where $\Delta {s_m}$ is the number of micro lenses included in the length $\Delta s$. Supposing the diameter of a micro lens is ${d_m}$, the relationship between the depth ${z_c}$ and the slope $k$ of the corresponding line feature can be expressed by Eq. (5).
$${z_c} = \frac{{d{l_1}{l_2}}}{{d{l_2} - \frac{{{l_1}{N_u}{d_m}}}{k}}}$$
where $d$, ${d_m}$, ${l_1}$, ${l_2}$ and ${N_u}$ are constants determined by the physical configuration of the LFC. To simplify Eq. (5), let $k^{\prime}$ be the reciprocal of $k$; then Eq. (5) can be rewritten as follows:
$${z_c} = \frac{1}{{{a_1} - {a_2}k^{\prime}}}$$
where ${a_1}$ and ${a_2}$ represent $\frac{1}{{{l_1}}}$ and $\frac{{{N_u}{d_m}}}{{{l_2}d}}$, respectively. Considering the other situation, when the point P is on the right side of the focusing plane, Eq. (6) is still valid with $k^{\prime}$ taking a negative value. When the point P is on the focusing plane, ${z_c} = {l_1}$ is exactly recovered from Eq. (6), since $k^{\prime}$ is then equal to 0. In summary, Eq. (6) fully describes the relationship between the depth of the object and the slope of the line feature in the EPI.
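To make the mapping concrete, the sketch below evaluates Eq. (6) for a given EPI slope. The geometric parameters (l1, l2, d, d_m, N_u) are illustrative values chosen here, not the parameters of the actual prototype.

```python
import numpy as np

def depth_from_epi_slope(k, a1, a2):
    """Evaluate Eq. (6): z_c = 1 / (a1 - a2 * k'), with k' = 1/k.

    a1 = 1/l1 and a2 = (N_u * d_m) / (l2 * d) are fixed by the camera geometry;
    a point on the focusing plane has k' -> 0 and therefore z_c -> l1.
    """
    k_prime = 1.0 / k
    return 1.0 / (a1 - a2 * k_prime)

# Hypothetical geometry (millimetres): focusing distance l1, lens-to-MLA distance l2,
# aperture d, micro-lens pitch d_m, and N_u pixels behind each micro lens.
l1, l2, d, d_m, N_u = 100.0, 50.0, 25.0, 0.02, 15
a1 = 1.0 / l1
a2 = (N_u * d_m) / (l2 * d)

print(depth_from_epi_slope(k=50.0, a1=a1, a2=a2))   # point slightly beyond the focusing plane
print(depth_from_epi_slope(k=-50.0, a1=a1, a2=a2))  # negative k': point on the other side
```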

2.3 Extracting line features in EPI with phase information

From the analysis above, we know that precisely extracting line features in the EPI is very important for depth estimation. However, relying only on the texture information of objects to extract line features in the EPI lacks robustness and accuracy in cases of weak texture, repeated texture and noise. As previously mentioned, in our LF-MFPP system, phase-shifting fringe patterns are projected onto the surface of objects by the projector. These fringe patterns are modulated by the shape of the object and then captured by the LFC. Thus, every pixel of the LFC obtains a set of fringe gray values, from which the modulated wrapped phase is calculated with a phase-shifting algorithm. After that, with the help of gray code, an unwrapped phase value can be calculated to encode each pixel. To illustrate this, a row of sub-aperture images of the light field encoded with phase information is shown here, and the EPI is constituted by extracting the same row from each of these images. The phase information allows corresponding points on different rows of the EPI to be located with sub-pixel accuracy. After that, the line feature can be extracted by fitting these corresponding points, which can be observed when the EPI is illustrated by the pseudo-color map in Fig. 3.
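The following sketch illustrates the two steps described above under simplifying assumptions: a standard N-step phase-shifting calculation of the wrapped phase, and a phase-guided sub-pixel search for corresponding points in one (already unwrapped) EPI. The array shapes, the sign convention of the phase shift and the interpolation scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase-shifting on a stack of shape (N, H, W) with shifts 2*pi*n/N.

    Assumes the fringe model I_n = A + B*cos(phi + 2*pi*n/N); the sign convention
    flips if the patterns are shifted in the opposite direction.
    """
    N = images.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(-num, den)                    # wrapped phase in (-pi, pi]

def correspondences_in_epi(epi_phase, u_ref, s_ref):
    """Collect (u, s) points sharing the phase of the reference pixel.

    epi_phase is an unwrapped-phase EPI (rows = angular index u, columns = spatial
    index s) whose phase is assumed to vary monotonically along s in each row.
    """
    target = epi_phase[u_ref, s_ref]
    points = []
    for u in range(epi_phase.shape[0]):
        row = epi_phase[u]
        crossings = np.where(np.diff(np.sign(row - target)) != 0)[0]
        if crossings.size == 0:
            continue
        i = crossings[0]
        frac = (target - row[i]) / (row[i + 1] - row[i])   # linear sub-pixel interpolation
        points.append((u, i + frac))
    return np.array(points)
```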

Fig. 3. Sub-viewpoint images and EPI encoded by phase information.

2.4 Improving phase quality with extended DOF

From the above description, it is known that the quality of the phase information affects the accuracy of line feature extraction in the EPI. For MFPP systems, the extremely limited DOF usually leads to fringe patterns captured with low modulation, which increases the error of the calculated phase. In our LF-MFPP system, the binary fringe projection technique and LFI are employed to extend the DOF. For the projection branch, binary phase-shifting fringe patterns yield phase information with lower noise than the traditional sinusoidal method when the projector is properly defocused. Its theoretical basis is that defocus not only filters out the higher-order harmonics of the square wave (binary fringe) so that the first harmonic (sinusoidal fringe) becomes dominant, but also suppresses noise [25]. In addition, in the focus area of the projector, a low-pass Gaussian kernel can be used to filter the captured binary fringe patterns to approximate the defocus effect. In conclusion, with binary fringe patterns projected, the effective depth detection range of the projector is enlarged. For the imaging branch, the aperture size of the camera is a major factor affecting the DOF. Thanks to the MLA, the aperture of the LFC is divided into many sub-apertures; every pixel below a micro lens corresponds to a different sub-aperture and has a larger DOF than that of a traditional camera [53].
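The idea of approximating projector defocus with a Gaussian low-pass filter can be sketched as below: a binary (square-wave) fringe is blurred into a quasi-sinusoid. The period matches the one used later in the experiments, but the blur width sigma is an arbitrary illustrative value, since the real value depends on the amount of defocus.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

period = 27                                          # fringe period in pixels, as used in the experiments
x = np.arange(1080)
binary_fringe = (np.sin(2 * np.pi * x / period) >= 0).astype(float)   # 0/1 square wave

# Defocus acts roughly as a Gaussian low-pass filter that removes the higher-order
# harmonics of the square wave; sigma here is an illustrative choice.
quasi_sinusoid = gaussian_filter1d(binary_fringe, sigma=period / 6.0)
```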

2.5 Calibration method for depth estimation

As the relationship between the slope of the line feature in the EPI and the depth of the object follows Eq. (6), a calibration approach for depth estimation was developed, as shown in Fig. 4. A standard planar target is placed perpendicular to the optical axis of the LFC and translated parallel to itself along the optical axis N times $(N \ge 10)$ within the detection range. At each location of the planar target, fringe patterns are projected to provide phase information that assists the line feature extraction in the EPI. For any spatial coordinate $({{s_m},{t_m}} )$ on the $(s,t)$ plane of the LFC, one slope of the corresponding line feature in the EPI is obtained after each movement of the planar target, so a group of slope-depth (k-Z) value pairs is collected for the spatial coordinate $({{s_m},{t_m}} )$. When the world coordinate system (WCS) is set on the planar target instead of on the main lens plane, the relationship between $Z$ and $k$ ($k = \frac{1}{{k^{\prime}}}$) is modified as Eq. (7):

$$Z = \frac{1}{{{a_1} - {a_2}k^{\prime}}} + {a_3}$$
The parameters ${a_1}$, ${a_2}$ and ${a_3}$ can be estimated from the obtained k-Z data. After this calibration process, each spatial coordinate on the $({s,t} )$ plane of the LFC has its own mapping between the slope $k$ and the depth $Z$. During calibration, it is difficult to guarantee that the planar target is placed exactly perpendicular to the optical axis of the camera. That is why we estimate the mapping coefficients for each spatial coordinate separately rather than assigning the same coefficients to all spatial coordinates.
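A minimal sketch of this per-pixel fit is given below, using a standard non-linear least-squares routine; the initial guess is an assumption, and in practice any reasonable starting point for Eq. (7) can be used.

```python
import numpy as np
from scipy.optimize import curve_fit

def depth_model(k_prime, a1, a2, a3):
    """Eq. (7): Z = 1 / (a1 - a2 * k') + a3, with k' the reciprocal of the EPI slope k."""
    return 1.0 / (a1 - a2 * k_prime) + a3

def calibrate_pixel(k_values, z_values):
    """Fit (a1, a2, a3) for one spatial coordinate (s_m, t_m) from its N slope-depth pairs."""
    k_prime = 1.0 / np.asarray(k_values, dtype=float)
    z = np.asarray(z_values, dtype=float)
    p0 = (1e-2, 1e-4, 0.0)                     # illustrative initial guess
    params, _ = curve_fit(depth_model, k_prime, z, p0=p0)
    return params                              # (a1, a2, a3) for this spatial coordinate
```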

Fig. 4. Schematic diagram of calibration for depth estimation.

Fig. 5. A prototype of the LF-MFPP system.

3. Experimental results

To verify the proposed method, a prototype of the LF-MFPP system was established with an unfocused LFC (Lytro Illum) and a DMD projector, as shown in Fig. 5. The spatial resolution of the LFC is 434${\times}$625, and its angular resolution is 15${\times}$15. The projector is specially customized so that it can be fitted with a C-mount lens; its DMD chip is a DLP4710 with a resolution of 1080${\times}$1920. The field of view (FOV) of the system is 0.8${\times}$1.2 cm, achieved by mounting a reversed single-lens-reflex (SLR) lens on the LFC and a telecentric lens on the projector. The F-number of the SLR lens is 1.8 and its focal length is 85 mm; the magnification of the telecentric lens is 0.5${\times}$ and its numerical aperture (NA) is 0.026. The working distances of the modified LFC and the projector are about 40 mm and 165 mm, respectively. The LFC and the projector are mounted perpendicular to each other, and a planar beam splitter is introduced to arrange them in a coaxial structure.

As mentioned before, one purpose of introducing the LFC into our MFPP system is to extend the DOF of the imaging branch. A characteristic of the LFC is that every pixel corresponds to a sub-aperture, while the integral of all the pixels under one micro lens corresponds to the whole aperture, which is equivalent to a traditional camera. To evaluate the measurable range of the LFC, an OLED chip displaying a binary fringe pattern is placed perpendicular to the optical axis of the LFC and translated along the optical axis within ${\pm}$5 mm of the focusing plane. The contrast of the binary fringe pattern captured at different depths is calculated. Figure 6(a) shows the contrast curves calculated from the central sub-aperture images (red line) and from the integral of all sub-aperture images (blue line); the two curves indicate the different DOFs of the LFC and a traditional camera. The green line marks half of the maximum contrast. If the full width at half maximum (FWHM) of the contrast curve is taken as the DOF, the DOF of the traditional camera is about 0.7 mm, while that of the LFC is slightly larger than 2.5 mm. Figures 6(b) and 6(c) compare the images captured in the two situations when the OLED is placed 1.5 mm away from the focusing plane. Moreover, the period of the binary fringe detected in this experiment is 10 pixels, which is relatively small; in practice, the period of the fringe pattern used in the following experiments is 27 pixels, which extends the measurable range of the LFC to about 5 mm. This is because the smaller the fringe period in the spatial domain, the higher the corresponding frequency in the frequency domain. As is known, defocus of the imaging system can be approximated by a low-pass Gaussian blurring effect, so at the same degree of defocus a fringe pattern with lower frequency is less affected by this blurring. In other words, using a fringe pattern with a larger period can appropriately extend the DOF of the LF-MFPP system. However, fringe patterns with too large a period are not recommended either, as they are more sensitive to noise. Besides that, for the projection branch, the DOF of the telecentric lens is about 2.1 mm; with the binary fringe technique, the measurable range of the projector can also reach 5 mm.
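One simple way to obtain such contrast curves is to compute the Michelson contrast of the captured fringe over whole periods and plot it against the stage position; the snippet below is an assumed implementation of that metric, not necessarily the one used for Fig. 6(a).

```python
import numpy as np

def fringe_contrast(image_row, period):
    """Average Michelson contrast (I_max - I_min) / (I_max + I_min) over whole fringe periods."""
    n_periods = len(image_row) // period
    contrasts = []
    for p in range(n_periods):
        seg = image_row[p * period:(p + 1) * period]
        contrasts.append((seg.max() - seg.min()) / (seg.max() + seg.min() + 1e-12))
    return float(np.mean(contrasts))
```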

Fig. 6. (a) Contrast curves of the traditional camera and light field camera. (b) and (c) are captured images by the traditional camera and light field camera, respectively, when the OLED chip is placed 1.5mm away from the focusing plane.

To calibrate the relationship between the slope of the line features in the EPI and the depth information, a standard planar target is placed perpendicular to the optical axis of the LFC and translated parallel to itself along the optical axis to 21 positions around the focusing plane, with a translation distance of 0.2 mm between adjacent positions. At each location of the planar target, binary phase-shifting fringe patterns and gray code are projected to calculate the phase information that assists the line feature extraction in the EPI. For every spatial coordinate of the LFC, after a group of line slopes k and depth locations Z of the planar target is obtained, the relationship between k and Z is calibrated based on Eq. (7).

The extraction of line features based on phase information in the EPI at different depth locations is shown in Fig. 7, where the red and blue points are corresponding points searched with the same phase value in different EPIs, obtained with the planar target located at 2 mm and 1 mm, respectively. It is noted that the points corresponding to the edge sub-apertures do not fit straight lines well. One reason is that the limited aperture of the built LFC causes vignetting in the edge sub-aperture images, which seriously affects their quality, as shown in Fig. 8. Another reason is that the distortion of the LFC also affects the imaging position of the object point, which is relatively obvious in the edge sub-aperture images. Considering this, each EPI is constituted without the two most marginal sub-aperture images. Meanwhile, phase values corresponding to low fringe contrast are filtered out by a contrast template. Finally, a robust regression method instead of least squares is used for line fitting to eliminate the influence of outliers. These measures minimize the impact of the two effects above.
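The paper does not specify which robust estimator is used; the sketch below shows one common choice, a soft-L1 (pseudo-Huber) loss fitted with scipy, applied to the (u, s) corresponding points of a single object point.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_epi_line_robust(points):
    """Fit the line s = s0 + u / k to (u, s) points and return the EPI slope k = du/ds.

    The soft-L1 loss down-weights outliers such as points from vignetted or
    distorted edge sub-aperture views.
    """
    u, s = points[:, 0], points[:, 1]

    def residuals(p):
        s0, inv_k = p
        return (s0 + inv_k * u) - s

    p0 = np.polyfit(u, s, 1)[::-1]            # ordinary least-squares start: [s0, 1/k]
    sol = least_squares(residuals, p0, loss='soft_l1', f_scale=0.1)
    s0, inv_k = sol.x
    return 1.0 / inv_k                        # diverges for an exactly in-focus point (inv_k -> 0)
```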

Fig. 7. Corresponding points and its fitted lines in the EPI.

Fig. 8. Vignetting in the edge sub-aperture images (left and right) of the LFC.

After the calibration, a ceramic plate is measured to demonstrate the effectiveness of the proposed method. The ceramic plate is moved parallel to itself to the locations of -1.9 mm, -1.1 mm, -0.5 mm, 0.1 mm, 0.9 mm and 1.7 mm in sequence by a motorized translation stage. The resolution of the motorized translation stage is 29 nm, its positioning accuracy is better than 1 $\mu m$, and its bidirectional repeatability is 1.5 $\mu m$. At each location, nine-step phase-shifting fringe patterns and the corresponding gray code are projected onto the ceramic plate to calculate the phase information, and the depth is then estimated based on the EPI. The reconstructed depth maps of the ceramic plate are shown in Fig. 9(a). To quantitatively evaluate the performance of the proposed system, the depth value of each of the six planes is calculated after plane fitting. The measured depth values of the six planes are shown in Table 1; all of the errors are within 10 $\mu m$. The standard deviations (Std) of the fitting errors of the reconstructed depth maps are also shown in Table 1, and most of them are less than 5 $\mu m$. The fitting-error distribution of the reconstructed depth map corresponding to the plane located at -1.9 mm is shown in Fig. 9(b); it can be observed that, affected by the vignetting and distortion mentioned above, the fitting error in the edge area of the depth map is indeed larger than in the central area.
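The plane-fitting evaluation can be reproduced along the lines of the assumed sketch below: a least-squares plane is fitted to each reconstructed depth map, and the residual standard deviation gives a fitting-error statistic of the kind reported in Table 1.

```python
import numpy as np

def plane_fit_stats(depth_map):
    """Fit Z = c0 + c1*X + c2*Y to a depth map (NaNs ignored); return the mean fitted
    depth and the standard deviation of the fit residuals."""
    ys, xs = np.nonzero(~np.isnan(depth_map))
    z = depth_map[ys, xs]
    A = np.column_stack([np.ones(xs.size), xs.astype(float), ys.astype(float)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return float(np.mean(A @ coeffs)), float(np.std(residuals))
```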

Fig. 9. (a) The depth map of a ceramic plate at different depth location, (b) Fitting error distributions of one of the reconstructed planes.

Table 1. The depth value and standard deviation of the reconstructed ceramic plate at different locations

An iron fastener with steps, grooves and a deep hole is selected for depth estimation based on the proposed method. Like most MFPP systems, the LF-MFPP system is not good at measuring objects with shiny surfaces, so the iron fastener is sprayed with colorant, as shown in Fig. 10(d). Its height is about 5 mm, and the part of the fastener marked by the red frame is captured by the LF-MFPP system. The results of its depth estimation are shown in Figs. 10(a) and (b), and the reconstruction details of different areas of the iron fastener are shown in Fig. 10(c). This dense depth map demonstrates that the proposed LF-MFPP system with a coaxial structure works well for depth measurement of objects with sharp surface variations, and that its measurable depth range can reach 5 mm.

Fig. 10. (a) and (b) are the depth map of an iron fastener shown in different perspectives, (c) shows the reconstruction details at different areas of the iron fastener, (d) the picture of the iron fastener.

4. Conclusion

In this paper, an LF-MFPP system with a coaxial structure is proposed. Binary fringe projection and light field imaging are employed to extend the DOF of the projection branch and the imaging branch, respectively. The LF-MFPP system estimates the depth information of objects based on the EPI, where the relationship between the slope of the line feature and the depth information has been derived. The fringe projection technique provides phase encoding to assist the line feature extraction in the EPI by directly searching for corresponding points with the same phase value. A group of experiments proves that the proposed LF-MFPP system has an extended measurable depth range and works well for estimating the depth of objects with sharp surface variations. As the phase information provides sub-pixel positioning accuracy, the LF-MFPP system can achieve micron-scale lateral resolution; specifically, its FOV is 0.8${\times}$1.2 cm, and its spatial resolution is about 2 $\mu m$. For our LF-MFPP system, the depth estimation is sensitive to error and noise in the slope of the line feature in the EPI. Therefore, removing distortion and performing global optimization in the EPI to reduce the depth estimation error will be the focus of future work.

Funding

National Natural Science Foundation of China (61875137); Fundamental Research Project of Shenzhen Municipality (JCYJ20190808153201654); Sino-German Cooperation Group (GZ 1391); Key Laboratory of Intelligent Optical Metrology and Sensing of Shenzhen (ZDSYS20200107103001793).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  .

2. Y. Yin, D. He, Z. Liu, X. Liu, and X. Peng, “Phase aided 3D imaging and modeling: dedicated systems and case studies,” Proc. SPIE 9132, 1 (2014).

3. Y. Hu, Q. Chen, S. Feng, and C. Zuo, “Microscopic fringe projection profilometry: A review,” Opt. Lasers Eng. 135, 106192 (2020). [CrossRef]  .

4. R. Windecker, M. Fleischer, and H. J. Tiziani, “Three-dimensional topometry with stereo microscopes,” Opt. Eng. 36(12), 3372–3377 (1997). [CrossRef]  .

5. C. Zhang, P. S. Huang, and F.-P. Chiang, “Microscopic phase-shifting profilometry based on digital micromirror device technology,” Appl. Opt. 41(28), 5896–5904 (2002). [CrossRef]  .

6. K. P. Proll, J. M. Nivet, K. Korner, and H. J. Tiziani, “Microscopic three-dimensional topometry with ferroelectric liquid-crystal-on-silicon displays,” Appl. Opt. 42(10), 1773–1778 (2003). [CrossRef]  .

7. H. Schreier, D. Garcia, and M. Sutton, “Advances in light microscope stereo vision,” Exp. Mech. 44(3), 278–288 (2004). [CrossRef]  .

8. A. Li, X. Peng, Y. Yin, X. Liu, Q. Zhao, K. Körner, and W. Osten, “Fringe projection based quantitative 3D microscopy,” Optik 124(21), 5052–5056 (2013). [CrossRef]  .

9. Y. Q. Yu, S. J. Huang, Z. H. Zhang, F. Gao, and X. Q. Jiang, “The research of 3D small-field imaging system based on fringe projection technique,” Proc. SPIE 9297, 92972D (2014). [CrossRef]  

10. Y. Hu, Q. Chen, Y. Zhang, S. Feng, T. Tao, H. Li, W. Yin, and C. Zuo, “Dynamic microscopic 3D shape measurement based on marker-embedded Fourier transform profilometry,” Appl. Opt. 57(4), 772–780 (2018). [CrossRef]  .

11. Y. Hu, Q. Chen, T. Tao, H. Li, and C. Zuo, “Absolute three-dimensional micro surface profile measurement based on a Greenough-type stereomicroscope,” Meas. Sci. Technol. 28(4), 045004 (2017). [CrossRef]  .

12. C. Quan, X. Y. He, C. F. Wang, C. J. Tay, and H. M. Shang, “Shape measurement of small objects using LCD fringe projection with phase shifting,” Opt. Commun. 189(1-3), 21–29 (2001). [CrossRef]  .

13. C. Quan, C. J. Tay, X. Y. He, X. Kang, and H. M. Shang, “Microscopic surface contouring by fringe projection method,” Opt. Laser Technol. 34(7), 547–552 (2002). [CrossRef]  .

14. D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Lasers Eng. 51(5), 538–541 (2013). [CrossRef]  .

15. Z. Chen, H. Liao, and X. Zhang, “Telecentric stereo micro-vision system: Calibration method and experiments,” Opt. Lasers Eng. 57, 82–92 (2014). [CrossRef]  .

16. D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express 22(26), 31826–31835 (2014). [CrossRef]  .

17. B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express 23(20), 25795–25803 (2015). [CrossRef]  .

18. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23(5), 6846–6857 (2015). [CrossRef]  .

19. L. Rao, F. Da, W. Kong, and H. Huang, “Flexible calibration method for telecentric fringe projection profilometry systems,” Opt. Express 24(2), 1222–1237 (2016). [CrossRef]  .

20. S. Zhang, B. Li, F. Ren, and R. Dong, “High-Precision Measurement of Binocular Telecentric Vision System With Novel Calibration and Matching Methods,” IEEE Access 7, 54682–54692 (2019). [CrossRef]  .

21. Y. Hu, Q. Chen, S. Feng, T. Tao, A. Asundi, and C. Zuo, “A new microscopic telecentric stereo vision system - Calibration, rectification, and three-dimensional reconstruction,” Opt. Lasers Eng. 113, 14–22 (2019). [CrossRef]  .

22. Y. Hu, Q. Chen, Y. Liang, S. Feng, T. Tao, and C. Zuo, “Microscopic 3D measurement of shiny surfaces based on a multi-frequency phase-shifting scheme,” Opt. Lasers Eng. 122, 1–7 (2019). [CrossRef]  .

23. Y. Hu, Y. Liang, T. Tao, S. Feng, C. Zuo, Y. Zhang, and Q. Chen, “Dynamic 3D measurement of thermal deformation based on geometric-constrained stereo-matching with a stereo microscopic system,” Meas. Sci. Technol. 30(12), 125007 (2019). [CrossRef]  

24. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012). [CrossRef]  .

25. B. Li and S. Zhang, “Microscopic structured light 3d profilometry: Binary defocusing technique vs. sinusoidal fringe projection,” Opt. Lasers Eng. 96, 117–123 (2017). [CrossRef]  .

26. G. Rao, L. Song, S. Zhang, X. Yang, K. Chen, and J. Xu, “Depth-driven variable-frequency sinusoidal fringe pattern for accuracy improvement in fringe projection profilometry,” Opt. Express 26(16), 19986–20008 (2018). [CrossRef]  .

27. Y. Z. Liu, Y. J. Fu, Y. H. Zhuan, P. X. Zhou, K. J. Zhong, and B. L. Guan, “Large depth-of-field 3D measurement with a microscopic structured-light system,” Opt. Commun. 481, 126540 (2021). [CrossRef]  

28. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70 (2007). [CrossRef]  .

29. Y. Qu, P. Zhang, and Y. Hu, “3D measurements of micro-objects based on monocular wide-field optical microscopy with extended depth of field,” Microsc. Res. Tech. 81(12), 1434–1442 (2018). [CrossRef]  .

30. X. Hu, G. Wang, Y. Zhang, H. Yang, and S. Zhang, “Large depth-of-field 3D shape measurement using an electrically tunable lens,” Opt. Express 27(21), 29697–29709 (2019). [CrossRef]  .

31. M. Wang, Y. Yin, D. Deng, X. Meng, X. Liu, and X. Peng, “Improved performance of multi-view fringe projection 3D microscopy,” Opt. Express 25(16), 19408–19421 (2017). [CrossRef]  .

32. Z. Cai, X. Liu, Z. Chen, Q. Tang, B. Z. Gao, G. Pedrini, W. Osten, and X. Peng, “Light-field-based absolute phase unwrapping,” Opt. Lett. 43(23), 5717–5720 (2018). [CrossRef]  .

33. Z. W. Cai, X. L. Liu, G. Pedrini, W. Osten, and X. Peng, “Structured-light-field 3D imaging without phase unwrapping,” Opt. Lasers Eng. 129, 106047 (2020). [CrossRef]  

34. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3D imaging,” Opt. Express 24(18), 20324–20334 (2016). [CrossRef]  .

35. W. Qingyang, H. Haotao, C. Shunzhi, L. Qifeng, C. Zefeng, and L. J. Xiaoting, “Research on 3D imaging technology of light field based on structural light marker,” Infrared Laser Eng. 49, 0303019 (2020) [CrossRef]  .

36. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Unfocused plenoptic metric modeling and calibration,” Opt. Express 27(15), 20177–20198 (2019). [CrossRef]  .

37. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Accurate depth estimation in structured light fields,” Opt. Express 27(9), 13532–13546 (2019). [CrossRef]  .

38. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Light-field depth estimation considering plenoptic imaging distortion,” Opt. Express 28(3), 4156–4168 (2020). [CrossRef]  .

39. Z. Cai, X. Liu, Q. Tang, X. Peng, and B. Z. Gao, “Light field 3D measurement using unfocused plenoptic cameras,” Opt. Lett. 43(15), 3746–3749 (2018). [CrossRef]  .

40. S. Xiang, L. Liu, H. Deng, J. Wu, Y. Yang, and L. Yu, “Fast depth estimation with cost minimization for structured light field,” Opt. Express 29(19), 30077–30093 (2021). [CrossRef]  

41. C. Liu, L. Chen, X. He, V. Duc Thang, and T. Kofidis, “Coaxial projection profilometry based on speckle and fringe projection,” Opt. Commun. 341, 228–236 (2015). [CrossRef]  .

42. L. Zhang and S. Nayar, “Projection defocus analysis for scene capture and image display,” in International Conference on Computer Graphics and Interactive Techniques, (2006), pp. 907–915.

43. M. Zhong, X. Su, W. Chen, Z. You, M. Lu, and H. Jing, “Modulation measuring profilometry with auto-synchronous phase shifting and vertical scanning,” Opt. Express 22(26), 31620–31634 (2014). [CrossRef]  .

44. H. Jing, X. Su, Z. You, and M. Lu, “Uniaxial 3D shape measurement using DMD grating and EF lens,” Optik 138, 487–493 (2017). [CrossRef]  .

45. H. L. Jing, X. Y. Su, and Z. S. You, “Uniaxial three-dimensional shape measurement with multioperation modes for different modulation algorithms,” Opt. Eng. 56(3), 034115 (2017). [CrossRef]  

46. Y. Zheng, Y. Wang, and B. Li, “Active shape from projection defocus profilometry,” Opt. Lasers Eng. 134, 106277 (2020). [CrossRef]  .

47. S. Wanner and B. Goldluecke, “Globally Consistent Depth Labeling of 4D Light Fields,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition (2012), pp. 41–48.

48. H. Lv, K. Gu, Y. Zhang, and Q. Dai, “Light field depth estimation exploiting linear structure in EPI,” in 2015 IEEE International Conference on Multimedia & Expo Workshops (2015), p. 6.

49. Y. Zhang, H. Lv, Y. Liu, H. Wang, X. Wang, Q. Huang, X. Xiang, and Q. Dai, “Light-Field Depth Estimation via Epipolar Plane Image Analysis and Locally Linear Embedding,” IEEE Trans. Circuits Syst. Video Technol. 27(4), 739–747 (2017). [CrossRef]  .

50. H. Sheng, P. Zhao, S. Zhang, J. Zhang, and D. Yang, “Occlusion-aware depth estimation for light field using multi-orientation EPIs,” Pattern Recognit. 74, 587–599 (2018). [CrossRef]  .

51. S. Zhang, H. Sheng, C. Li, J. Zhang, and Z. Xiong, “Robust depth estimation for light field via spinning parallelogram operator,” Comput. Vis. Image Understanding 145, 148–159 (2016). [CrossRef]  .

52. P. Zhou, Z. Yang, W. J. Cai, Y. L. Yu, and G. Q. Zhou, “Light field calibration and 3D shape measurement based on epipolar-space,” Opt. Express 27(7), 10171–10184 (2019). [CrossRef]  .

53. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in International Conference on Computer Graphics and Interactive Techniques, (2006), pp. 924–934.
