
State-of-the-art active optical techniques for three-dimensional surface metrology: a review [Invited]

Open Access

Abstract

This paper reviews recent developments of non-contact three-dimensional (3D) surface metrology using an active structured optical probe. We focus primarily on those active non-contact 3D surface measurement techniques that could be applicable to the manufacturing industry. We discuss principles of each technology, and its advantageous characteristics as well as limitations. Towards the end, we discuss our perspectives on the current technological challenges in designing and implementing these methods in practical applications.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

With the increasing availability of computational power on computing devices and cloud sources, and options of affordable optical three-dimensional (3D) surface metrology tools, there is a rapidly growing interest in employing those tools for practical solutions. Yet there is one question that has not been fully addressed: What would be the “best” option for a given application? This seemingly easy question is difficult to answer without understanding the state-of-the-art technologies and their advantageous features as well as the limitations. With this review, we intend to provide valuable information for a decision-maker to select a technology that is more likely to be successful for a given application.

The concept of structured light (SL) has been used by different scientific communities with different interpretations [1]. For example, in computer science, SL often refers to 3D imaging techniques using binary-coded structured patterns projected by a digital video projector, while in the optics community, SL is broadened to include techniques using sinusoidal structured patterns and later defocused binary patterns being projected by a digital video projector or even a mechanical projector. Essentially, SL, as optical non-contact 3D imaging methods, produces a 3D representation by “probing” the visible surfaces of an object by projecting illuminations with predefined spatiotemporal structures. As such, SL could broadly include interferometry or time of flight (ToF). In general, the output of a 3D imaging system is usually a set of points (i.e., point cloud) with $(x,y,z)$ coordinates for each measurement point in the Cartesian coordinate system.

In this paper, we will classify the 3D optical imaging methods according to the “probing” structured patterns. The patterns can be broadly classified into three categories: discrete, continuous, or hybrid—meaning a combination of the previous two. The discrete methods refer to systems using structured patterns, including, for example, in space, a dot, a line, an area of dot patterns, a group of lines, and binary area patterns; or in time, a pulse or train of pulses. The continuous methods refer to systems using structured patterns continuously in space or time, including color spectral encoding, interference using coherent or incoherent light, continuous-wave (CW) ToF, digital fringe projection (DFP), and binary defocusing. The hybrid methods use both discrete and continuous patterns to accomplish a measurement. This paper focuses primarily on 3D surface measurement using triangulation-based SL methods; however, as there are numerous similarities between “conventional” SL methods and interferometry or ToF, this paper will also overview those methods to provide a larger picture of the state-of-the-art 3D optical surface measurement techniques.

For each technology, we will briefly overview the principle and present the advantageous features and limitations from our perspective. We focus primarily on recent developments that have shown to improve the performance or capability of 3D surface measurement techniques. Also, we will refer to several classical technical or review papers for the interested reader to learn more. We will include technologies that have not been included or thoroughly discussed in other review papers, along with technologies proven successful in applications.

It is important to note that when referring to 3D surface measurement, accuracy is likely to be one of the first performance parameters to consider. However, accuracy requirements vary across applications, and there are applications where accuracy is not the most critical performance metric. For example, for 3D surface measurement techniques embedded in consumer electronics devices (e.g., smartphones, tablets), power consumption, speed, and footprint take higher priority than accuracy. There are also applications, such as human-machine interfaces, autonomous vehicles, and robots, that pose challenges not only for 3D measurement accuracy but even more for instantaneous data analytics. As such, lower-accuracy yet flexible and more affordable technologies will also be covered.

Despite numerous successes in employing 3D surface measurement techniques to solve practical application problems, the application portfolio is likely to continue growing. Moreover, each application requires a certain level of customization to achieve the best performance. However, to the best of our knowledge, there is still a lack of general guidelines or tools for non-experts to easily and rapidly optimize measurement systems (software and hardware) to achieve the best performance. We will discuss some recent efforts that could pave the way for overcoming these challenges.

Large strides have been made in the field of 3D surface measurement over the past decades. However, there are still numerous remaining challenges for any of the state-of-the-art techniques to conquer. This paper will list some of the critical challenges that we believe need to be addressed. Often, interdisciplinary collaborative effort is necessary to tackle some of the challenging problems. We will cast our perspectives on how to address each of these challenges.

We aim at an integrative review, attempting to find common ideas and concepts from reviewed materials and to provide critical summaries of each subject. This paper is written as a reference for researchers, graduate students, engineers, or scientists from industry working in the field of optical metrology or developing products and applications that use these systems or principles.

Section 2 explains the general principles of recovering 3D information from structured patterns. Section 3 presents recent advances. Section 4 discusses several challenges in the field, and finally, Section 5 provides a summary of the review paper.

2. FUNDAMENTALS OF 3D OPTICAL SURFACE MEASUREMENT TECHNIQUES

Recovering the 3D shape of an object through the intensity registered on a sensor is the purpose of active optical techniques. These techniques probe the scene with a customized/tailored light beam, enabling highly precise and reliable measurement of the object's surface topography through codification methods that depend on the type of structured illumination and the system setup. There are numerous methods for optical 3D surface measurement, each with its advantages and disadvantages. They can be classified into two major categories: methods that require triangulation and methods that do not. The former derives from the human perception system (i.e., stereo vision). The latter is related to the physical nature of light (e.g., how light travels in space and time). Even though there are many methods to recover depth from other properties of light (e.g., shadowing, lens interaction), we consider primarily three major areas of 3D surface measurement methods: triangulation, ToF, and wave interference (e.g., holography, interferometry). We should emphasize that, to the best of our knowledge, there is no existing system that works best for all, and each method is most appropriate for certain metrological requirements [e.g., accuracy, uncertainty, object size, depth of field (DOF)]. Figure 1 summarizes the overall performance. This section will explain the fundamentals of each method along with state-of-the-art advancements.


Fig. 1. Performance of various optical surface measurement techniques. Image was recreated based on the image in Ref. [2].


A. Interferometry-Based Surface Metrology

Interferometry is the most accurate measurement technology at the heart of modern optical metrology. It was used for the SI definition of the meter, for the detection of gravitational waves, and generally for the most sensitive measurements in science and industry. Optical interferometry has been explored widely for surface measurement because of the advantages of non-contact and high measurement accuracy. This subsection will discuss these techniques.

1. Phase-Shifting Interferometry

To achieve high measurement resolution and accuracy, phase-shifting interferometry (PSI) is often the natural choice [3]. Various phase-shifting algorithms have been developed for phase retrieval [4]. In general, for an $N$-step phase-shifting algorithm, the phase can be recovered by

$$\phi (x,y) = - {\tan}^{- 1} \left[{\frac{{\sum\nolimits_{k = 1}^N {{I_k}(x,y)\sin (2\pi k/N\,)}}}{{\sum\nolimits_{k = 1}^N {{I_k}(x,y)\cos (2\pi k/N\,)}}}} \right],$$
where
$${I_k}(x,y) = I^\prime (x,y) + I^{\prime \prime} (x,y)\cos [\phi (x,y) + 2\pi k/N\,].$$
Here, $I^\prime (x,y)$ denotes the average intensity, $I^{\prime \prime} (x,y)$ denotes the intensity modulation, and $\phi (x,y)$ is the carrier phase. High-speed applications typically use a three-step ($N = 3$) or four-step ($N = 4$) phase-shifting algorithm because only a small number of patterns needs to be captured.
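As a concrete illustration, the following Python sketch recovers the wrapped phase from $N \ge 3$ equally shifted fringe images according to Eqs. (1) and (2); the synthetic data, array shapes, and function name are illustrative assumptions rather than part of any particular instrument.

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N (>= 3) equally shifted fringe images, Eq. (1)."""
    N = len(images)
    num = sum(Ik * np.sin(2 * np.pi * k / N) for k, Ik in enumerate(images, start=1))
    den = sum(Ik * np.cos(2 * np.pi * k / N) for k, Ik in enumerate(images, start=1))
    return -np.arctan2(num, den)  # phase wrapped modulo 2*pi

# Synthetic three-step example following Eq. (2): I_k = I' + I'' cos(phi + 2*pi*k/N)
x = np.linspace(0, 4 * np.pi, 512)
phi_true = np.angle(np.exp(1j * x))                       # ground-truth wrapped phase
imgs = [0.5 + 0.4 * np.cos(phi_true + 2 * np.pi * k / 3) for k in (1, 2, 3)]
phi = wrapped_phase(imgs)                                 # recovers phi_true
```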

Since Eq. (1) uses an inverse tangent function, the resultant phase value ranges over $[- \pi , + \pi)$, i.e., it is wrapped modulo $2\pi$. Because of this $2\pi$ ambiguity of the phase measurement, testing surfaces whose departure exceeds this range requires a null setup to obtain accurate surface test results.

The phase obtained from Eq. (1) is a wrapped phase, which usually cannot be used directly for 3D surface measurement before removing the $2\pi$ discontinuities. The process of detecting and rectifying the phase for each pixel is called phase unwrapping. Once the phase is unwrapped, the obtained phase can be used for subsequent 3D reconstruction.

Phase unwrapping can be classified as spatial phase unwrapping and temporal phase unwrapping. The spatial phase unwrapping algorithm [5,6] analyzes the wrapped phase to determine a “proper” number of $2\pi$’s (or fringe order) to be added to a point based on surface smoothness assumption. The temporal phase unwrapping algorithm (e.g., [7,8]) temporally acquires additional information to determine the unique fringe order for each point. Each of these phase unwrapping methods has its merits and limitations. The spatial phase unwrapping methods do not require additional temporal information acquisition. However, they require the surface to be smooth, or impose a limited depth range, or increase system complexity and cost. The temporal phase unwrapping algorithms are more robust for arbitrary objects, yet require longer times to acquire necessary information.
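The sketch below illustrates one common temporal strategy, two-frequency phase unwrapping, in which the phase of a coarse (low-frequency) pattern predicts the fringe order of the dense wrapped phase; the function name and the assumption that the coarse phase is unambiguous over the scene are illustrative.

```python
import numpy as np

def temporal_unwrap(phi_high, phi_low, freq_ratio):
    """Two-frequency temporal phase unwrapping (minimal sketch).

    phi_high   : wrapped phase of the dense (high-frequency) fringe pattern.
    phi_low    : phase of a coarse pattern assumed unambiguous over the scene.
    freq_ratio : ratio between the high and low fringe frequencies.
    """
    # The coarse phase predicts the dense phase; the integer number of 2*pi's
    # separating prediction and measurement is the fringe order.
    order = np.round((phi_low * freq_ratio - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * order
```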

2. Coherence Scanning Interferometry

The speckle noise caused by single-wavelength coherent laser light decreases the signal-to-noise ratio (SNR) and thus limits the achievable resolution. To eliminate the problems caused by speckle, a white light or broadband source is used to illuminate the measurement and reference surfaces [9]. Because a white light source has a very limited coherence length, interference signals can be observed only when the optical path difference (OPD) between the reference arm and the measurement arm is within the coherence length of the light source. Typically, only a few interference fringes can be observed, and the maximum-contrast fringe is located at the zero-OPD position. By vertically scanning one of the optical arms, a set of interferograms can be recorded at each image pixel of the camera. This technology is called coherence scanning interferometry (CSI). CSI is also known as coherence radar (CR), white-light scanning interferometry (WLSI), or vertical scanning interferometry (VSI) [10–13]. It is used widely in microscale 3D profilometry. A typical CSI system setup for obtaining the 3D surface of a measurand is shown in Fig. 2. The light from a broadband source is collimated and split into a reference beam and a measurement beam by the beam splitter. The two beams, incident on the reference mirror and the measurand surface, respectively, are reflected back and superposed after being recombined by the beam splitter. The resulting interference images of the measured surface topography are sampled pixel by pixel on the light detector, normally a CCD/CMOS camera. The OPD between the two beams is encoded in the phase of the interference image; by analyzing the interferograms, the zero-OPD position of each pixel can be determined, which corresponds to a mechanical scanning position of the scanner. In this way, the surface topography of the measurand can be determined accurately with subnanometer vertical resolution. The lateral resolution depends on the microscope objective lens used in the measurement and is normally submicrometer to a few micrometers.


Fig. 2. Basic principle of CSI.

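A minimal numerical sketch of the per-pixel zero-OPD detection described above is given below; it assumes the interferogram stack and scan positions are available as arrays and uses a Hilbert-transform envelope (via SciPy) purely for illustration, whereas practical CSI instruments refine the envelope peak with sub-sample fitting or phase analysis.

```python
import numpy as np
from scipy.signal import hilbert

def csi_height_map(stack, z_positions):
    """Estimate a height map from a CSI interferogram stack (minimal sketch).

    stack       : array (num_scan_steps, H, W) of intensities recorded during
                  the vertical scan.
    z_positions : scan positions corresponding to the first axis of `stack`.
    """
    ac = stack - stack.mean(axis=0, keepdims=True)   # remove the DC background
    envelope = np.abs(hilbert(ac, axis=0))           # coherence (fringe-contrast) envelope
    peak_idx = np.argmax(envelope, axis=0)           # per-pixel zero-OPD scan step
    return z_positions[peak_idx]                     # height map of the measurand
```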

CSI is sensitive to environmental disturbances and requires a controlled environment, which makes it ill suited to applications such as shop floor testing and in situ/in-line measurement. It also suffers from some unwanted measurement errors due to its interference nature and data processing algorithms [14]. CSI is normally used for micro-scale surface measurement; for large surfaces, multiple overlapping measurements and stitching algorithms are needed, which can be error prone and time consuming. It is also troublesome to use on non-standard surfaces, for instance, surfaces with variable reflectivity, multilayered materials, and additively manufactured parts.

3. Computer-Generated Holography

Computer-generated holography (CGH) is a method to generate holographic interference patterns digitally [15]. For macroscale surface measurement such as optical surface testing, PSI is used widely for high-accuracy optical inspection. However, because of the $2\pi$ ambiguity of the phase measurement, PSI has only a few hundred nanometers of vertical measurement range. For surface form errors exceeding the $2\pi$ ambiguity range, a null setup is required to obtain accurate surface test results. For near-plane or near-spherical surfaces under test, an optical null compensator can be used to set up a null measurement. For freeform and aspheric surfaces, which are used widely due to their advantages in functionality and performance, a physical null is extremely difficult to realize. In this case, CGHs can be used as the null components in the interferometric measurement, with the advantage that the null wavefront is generated entirely from a digitally synthesized hologram [16–18]. CGHs are powerful because the holograms can change a wavefront into virtually any shape that a computer can specify.

CGHs are increasingly used as null components in interferometric tests for their capability to accurately generate a freeform null wavefront [16–18]. However, CGHs can be excessively expensive and can null only a specific surface configuration.


Fig. 3. Basic principle of ToF.


B. Time-of-Flight-Based Surface Metrology

Interferometry-based techniques are often the choice for microscale 3D surface metrology, but there are applications where accuracy is not the primary concern while the field of view (FOV) and measurement range are. For such applications, ToF-based surface measurement techniques can be appealing.

ToF is essentially a ranging technique that simultaneously measures many points, as opposed to point-by-point measurement such as in scanning lidar [19]. The distance $d$ to an object is calculated by measuring the time delay $\tau$ from the round-trip of an emitted modulated light and the detected back-reflected light. The distance is determined by

$$d = \frac{{c \cdot \tau}}{2},$$
where $c$ is the speed of light. Despite the simplicity of Eq. (3), its implementation is technologically challenging because it involves the speed of light: a depth resolution of 1 mm corresponds to a round-trip timing resolution of roughly 6.7 ps. The accurate measurement of the round-trip time $\tau$ is usually addressed by two approaches: (i) direct methods that either measure the time $\tau$ using pulsed light or the phase $\varphi$ using CW operation, and (ii) indirect methods that derive $\tau$ (or $\varphi$) from time-gated measurements of the signal at the receiver. It is important to note that, in general, the emitted signal can have temporal or spatiotemporal modulation (space–time structured illumination) to probe the surface of the object and perform surface measurement [20].

The most common operation mode found in commercial devices is the CW approach, in which the source intensity is modulated at radio frequencies (tens of MHz). The detector reconstructs the phase change $\Delta \varphi$ between the reflected and emitted signals. The distance is calculated by scaling the phase by the modulation frequency, as shown in Fig. 3. This method is called amplitude modulated CW (AMCW) ToF, and it offers a suitable SNR for real-time, consumer applications [21]. In this mode of operation, when the measured distance exceeds the maximum unambiguous range, the resulting phase is wrapped, and phase unwrapping is required. Often light modulated at a second, lower frequency is used to capture another phase map, which can be used to unwrap the phase with an algorithm similar to the two-wavelength PSI algorithm [7].

The typical operation consists of emitting modulated near-infrared (NIR) light via light-emitting diodes (LEDs), which is reflected from the surface back to the sensor. As illustrated in Fig. 4, every sensor pixel samples the reflected light four times per modulation period at equal intervals, yielding samples ${m_0}, \ldots ,{m_3}$, which allows for the parallel, per-pixel measurement of the phase difference:

$$\Delta \varphi = {\tan}^{- 1} \left({\frac{{{m_3} - {m_1}}}{{{m_0} - {m_2}}}} \right).$$

Fig. 4. ToF depth measurement using phase offset. Copyright [2011] IEEE. Reprinted, with permission, from Ref. [22].


The target distance $d$ can be calculated from phase $\Delta \varphi$ by

$$d = \frac{{c \cdot \Delta \varphi}}{{4\pi \cdot {f_m}}},$$
where ${f_m}$ is the modulation frequency. Once the target distance $d$ is known and the camera lens has been calibrated (discussed in Section 2.D.3), the $(x,y,z)$ coordinates can be calculated.
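The following sketch ties Eqs. (4) and (5) together for a single AMCW pixel; the function name and inputs are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def amcw_depth(m0, m1, m2, m3, f_mod):
    """Distance from the four samples of an AMCW ToF pixel, Eqs. (4) and (5)."""
    delta_phi = np.arctan2(m3 - m1, m0 - m2)     # Eq. (4)
    delta_phi = np.mod(delta_phi, 2 * np.pi)     # keep the phase in [0, 2*pi)
    return C * delta_phi / (4 * np.pi * f_mod)   # Eq. (5)

# Example: a 20 MHz modulation frequency gives an unambiguous range of
# c / (2 * f_mod), i.e., roughly 7.5 m.
```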

Although ToF has the limitations of accuracy and depth resolution, it has been used extensively in commercial products (e.g., Microsoft Kinect Azure DK, 2020 iPad Pro) especially for long-range measurement because of its merits including compactness and relative robustness to motion error.

C. Triangulation-Based Surface Metrology

The interference-based techniques are used primarily for extremely high-accuracy, microscale measurement, and ToF-based techniques are well suited for lower-accuracy, large-scale measurements. The triangulation-based methods discussed in this section lie in between.

1. Fundamental Concepts

Triangulation-based SL techniques originated from the conventional stereo vision method that recovers 3D information by imitating the human perception system. For a given 3D point in object space ${\bf P}(x,y,z)$, ${{\bf p}^{\bf l}}(u,v)$ is the 2D image point perceived from the first view, and ${{\bf p}^{\bf r}}(u,v)$ is the 2D image point perceived from the other view. If the angles of perception (${\theta ^l},{\theta ^r}$) are known, and the two viewpoints (${{\bf o}^l},{{\bf o}^r}$) as well as the distance $b$ between them are also known, the object point in 3D space ${\bf P}(x,y,z)$ can be uniquely determined using simple triangulation. Figure 5 illustrates a special case when these three points lie on the $x - z$ plane. To precisely reconstruct a given object point ${\bf P}$, the triangulation-based approach hinges on finding the corresponding point pairs (${{\bf p}^l},{{\bf p}^r}$) and precisely determining their locations, as well as the view angles (${\theta ^l},{\theta ^r}$).


Fig. 5. Basic principle of triangulation-based SL.

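For the special planar case illustrated in Fig. 5, the triangulation can be written in a few lines; the coordinate conventions below (viewpoints on the baseline, angles measured from the baseline) are assumptions made for illustration.

```python
import numpy as np

def triangulate_xz(theta_l, theta_r, b):
    """Intersect the two rays of Fig. 5 in the x-z plane (minimal sketch).

    theta_l, theta_r : view angles (rad) measured from the baseline at the
                       viewpoints o_l = (0, 0) and o_r = (b, 0).
    b                : baseline length between the two viewpoints.
    """
    x = b * np.tan(theta_r) / (np.tan(theta_l) + np.tan(theta_r))
    z = x * np.tan(theta_l)   # depth of the object point P
    return x, z
```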

Typical triangulation-based SL systems use at least one camera and one structured pattern emitter [23]. The structured pattern emitter replaces one of the views for a stereo system described above. A 3D point can be reconstructed once the corresponding pairs are known, and the system is calibrated. Section 2.D.2 discusses the details of SL system calibration.

2. 2D Discrete Structured Light Patterns

The simplest possible system is one in which the emitter sends out a single illumination dot at a time, the camera captures an image, and software algorithms analyze the captured image to extract the illuminated point. Once the camera model is precisely determined, for each point on the camera image, its location and angle can be determined. Additional calibration can be adopted to determine the relative location between the camera coordinate system and the emitter, as well as the angle of the emitter. Once the entire system is calibrated, 3D coordinates of the object point being illuminated can be reconstructed using triangulation. Though conceptually straightforward, the single-dot-based methods require scanning in both $x$ and $y$ directions to measure a 3D surface. As a result, such a technique is not employed extensively, primarily because of its low measurement efficiency.

To speed up the measurement process, methods based on discrete dot patterns have been developed. The dot distribution is often random or pseudo random. As a result, the coded pattern is often regarded as a statistical pattern [24]. To quickly and uniquely discern the coded information from a captured image to find the corresponding point pairs, the statistical pattern encodes unique features within a small 2D window such that for any given point on the camera image $({u^c},{v^c})$, they can be differentiated from any other areas. Such coding methods have seen great commercial successes in consumer electronics because of their simplicity, small footprint, and low cost (e.g., Microsoft Kinect V1, Intel RealSense, iPhone, Orbbec Astra). However, such a method has low spatial resolution because (1) the structured pattern is discrete in both $x$ and $y$ directions; (2) it has difficulty in achieving high measurement accuracy because it is difficult to precisely locate the corresponding points from the captured image to the projected pattern; and (3) it could be sensitive to ambient light with the same spectral distributions.

3. 1D Discrete and 1D Continuous Structured Light Patterns

Another way to speed up the measurement process of the single-dot projection-based methods is to use a line pattern. This technique is employed extensively in short-range laser scanning devices. Since the structured pattern is continuous in one dimension, such a method can achieve high measurement resolution in one direction, and thus high measurement accuracy. As such, SL-based line scanning can be used for applications where the measurement accuracy requirement is high. Laser range scanning techniques see great success on manufacturing production lines because the parts to be measured move at a constant speed: the relative movement between the object and the laser line naturally allows whole-surface measurement without sweeping the laser line.

To further improve measurement efficiency, coded area patterns were designed. In such a method, all points are simultaneously illuminated with structured patterns without gaps. Depending on how the information is coded, the pattern could be continuous in both directions or only in one direction. Since an SL system requires uniquely determining the correspondence in only one direction after applying the geometric constraints of the system such as epipolar geometry [25,26], structured patterns can be unique in one direction (e.g., patterns with structured stripes). If each stripe is uniquely encoded, the stripes can be identified from the captured images. If the stripes are binary (black or white) in nature, such a method is often regarded as binary coding. For binary coding methods, a sequence of structured patterns is required to determine a unique stripe. For each pixel, the black and white sequence defines a unique code (often regarded as a codeword) that can be projected by the projector. The area structured patterns are often generated by a computer and projected by an image/video projector. The projector has to be calibrated for 3D reconstruction.

Assuming black represents 0 and white represents 1, the sequence of structured images with black and white stripes is captured to convert to 0’s and 1’s that decode the corresponding codeword for each stripe. The corresponding stripes information along with the calibrated projector and camera information allow the reconstruction of 3D information for the entire area at once. Various structured pattern codification strategies have been discussed thoroughly and evaluated by Salvi et al. [23].
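A minimal sketch of this decoding step is shown below, assuming plain binary coding with the most significant bit pattern projected first; practical systems often prefer Gray codes for robustness, and the thresholding strategy shown is only one common choice.

```python
import numpy as np

def decode_binary_stripes(images, threshold):
    """Decode per-pixel stripe codewords from a binary pattern sequence (sketch).

    images    : captured images, most significant bit pattern first.
    threshold : per-pixel (or scalar) intensity threshold separating
                white (1) from black (0), e.g., the mean of all-white and
                all-black reference images.
    """
    codeword = np.zeros(np.asarray(images[0]).shape, dtype=np.int64)
    for img in images:
        bit = (np.asarray(img) > threshold).astype(np.int64)
        codeword = (codeword << 1) | bit   # append the next bit of the codeword
    return codeword                        # stripe index for camera-projector correspondence
```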

The binary coding methods allow each point to be measured independently of its neighbors. However, unlike the triangulation-based SL methods discussed above, where a measurement can be realized from each captured structured image, the binary coding methods require multiple structured images to perform a single measurement. As a result, the binary coding method is sensitive to object motion at any measurement point, whereas the methods discussed above can measure a given point without being influenced by object motion.

To achieve high-speed measurement and reduce motion artifacts, structured patterns must be switched rapidly and captured within a short period of time. For example, Rusinkiewicz and Levoy [27] developed a real-time 3D shape measurement system using the stripe boundary code [28] that requires only four binary patterns for codification. Such a system achieved a 15 Hz 3D data acquisition speed. Digital light processing (DLP) development kits allow binary images to be switched at kilohertz (kHz) rates or above. Thus, achieving high-speed measurements using this technique is not a major concern.

However, since each stripe is wider than one camera pixel and one projector pixel, the spatial resolution is limited, and thus the achievable measurement accuracy is not high. To circumvent such a problem, 2D continuous structured patterns were proposed.

4. 2D Continuous Structured Light Patterns

Though adopted extensively, the methods based on 0D or 1D continuous structured patterns have a spatial resolution limited not only by the camera but also by the projected structured pattern. Furthermore, since these techniques use intensity information directly to establish correspondence pairs, they can be affected by surface texture. As a result, it is difficult to achieve high measurement accuracy.

Some of the approaches to generate 2D continuous structured patterns are interference with coherent light, physical gratings, or the Moiré effect [29]. This section discusses primarily the triangulation-based method using digital video projectors for structured pattern generation, and such a method is often regarded as DFP. Instead of intensity, the carrier phase information is often extracted to establish correspondence for 3D reconstruction.

In theory, a single fringe pattern is sufficient to recover the carrier phase using the Fourier transform [30]. Such a method for 3D surface measurement is often regarded as Fourier transform profilometry (FTP). Kemao [31,32] developed the windowed Fourier transform (WFT) method to increase the robustness [33] and broadly extend its applications [34]. The single-pattern FTP has the advantages of speed and simplicity, yet has the limitations of being sensitive to noise, surface texture, and geometric surface structures. By projecting another structured pattern [35,36], the modified FTP method substantially improves its capability and could be more robust to surface texture or geometry changes.
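The following sketch illustrates the basic FTP idea for a fringe image with a horizontal carrier: the spectrum is band-pass filtered around the positive carrier lobe, and the wrapped phase is taken from the filtered analytic signal. The rectangular window, known carrier frequency, and function name are simplifying assumptions; practical implementations use more careful windowing (e.g., WFT).

```python
import numpy as np

def ftp_wrapped_phase(fringe, carrier_freq, half_width):
    """Single-shot FTP with a horizontal carrier (minimal sketch).

    fringe       : fringe image (H x W) with a carrier along the rows.
    carrier_freq : carrier frequency in cycles per image width (assumed known).
    half_width   : half-width of the rectangular band-pass window.
    """
    _, W = fringe.shape
    spectrum = np.fft.fft(fringe, axis=1)
    fx = np.fft.fftfreq(W) * W                              # frequency bins in cycles per width
    window = (np.abs(fx - carrier_freq) <= half_width).astype(float)
    analytic = np.fft.ifft(spectrum * window, axis=1)       # keep only the +carrier lobe
    phase = np.angle(analytic)                              # carrier + object phase (wrapped)
    carrier = 2 * np.pi * carrier_freq * np.arange(W) / W
    return np.angle(np.exp(1j * (phase - carrier)))         # remove carrier, re-wrap
```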

Because of their speed advantages, FTP methods have been demonstrated to be successful for capturing fast events [37–39]. Due to the limitations discussed above, FTP methods are often used to measure objects that are at least locally smooth without strong texture variations. The reason is that FTP methods use local or global neighborhood information to recover the phase at each pixel. This reliance on neighboring pixels introduces phase recovery problems; therefore, truly pixel-by-pixel phase recovery is desirable, which is why phase-shifting algorithms were developed.

Phase-shifting algorithms developed in interferometry [4] have been employed directly here for phase retrieval, except the fringe patterns are computer generated. Similarly, the phase obtained also has $2\pi$ ambiguities that can be unwrapped using spatial [5,6] or temporal phase unwrapping algorithms [7,8].

Due to the flexibility of a DFP system, other phase unwrapping approaches have been developed, including variations of temporal phase unwrapping algorithms [40–42], geometric constraints-based phase unwrapping, multiview geometry-based phase unwrapping, and hybrid methods, along with others [43]. Adding a secondary camera or projector to provide additional constraints can also be used to unwrap the phase pixel by pixel [44–46]. The inherent geometric constraints of an SL system can also be used to determine the fringe order for phase unwrapping [47]. The hybrid phase unwrapping methods we developed enhance temporal phase unwrapping (e.g., improve robustness and/or speed). These methods include the use of embedded markers [48–50], ternary coded patterns [51], phase coded patterns [41,52], and others. The spatial geometric constraint-based phase unwrapping methods do not require additional information acquisition temporally; however, they either require the surface to be smooth, have a limited depth range, or increase system complexity and cost. The newly developed temporal phase unwrapping algorithms can be more robust for arbitrary objects but require a longer time to acquire the necessary information.

5. Hybrid Structured Light Patterns

Square waves become pseudo sinusoidal waves after applying a low-pass filter, and low-pass filtering can be physically realized by lens defocusing. Therefore, the binary defocusing techniques that have been developed in recent years “bridge” the continuous pattern and the discrete pattern for 3D surface measurement [53,54]. Due to hardware advancements, especially the DLP platforms, the binary defocusing method has enabled speed breakthroughs [55]. It has also overcome several limitations of standard DFP techniques that use 8-bit computer-generated patterns, such as relaxing the precise timing requirement between the projector and the camera [56], or eliminating the impact of the projector’s nonlinear response [56]. It has even allowed the achievement of higher depth resolution [57].
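The effect can be illustrated numerically: a binary square wave passed through a Gaussian low-pass filter (standing in for lens defocus) becomes dominated by its fundamental sinusoidal component. The fringe period, blur width, and use of a 1D signal below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

period = 36                                                    # fringe period in pixels (assumed)
x = np.arange(10 * period)
binary = (np.sin(2 * np.pi * x / period) >= 0).astype(float)   # binary (square-wave) pattern
defocused = gaussian_filter1d(binary, sigma=period / 6)        # Gaussian blur mimics lens defocus

# Harmonic content of the defocused pattern: the 3rd harmonic (the strongest
# residual of a square wave) is strongly suppressed relative to the fundamental.
spectrum = np.abs(np.fft.rfft(defocused - defocused.mean()))
fundamental = spectrum[10]        # 10 fringe periods fit in the window
third_harmonic = spectrum[30]
```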

Because binary patterns can be modulated freely, the recovered phase quality has been further improved by 1D modulated patterns [58,59], 2D modulated patterns [60–62], and 3D modulated patterns (2D + time) [63,64]. The 1D modulation techniques can improve phase quality for middle-range fringe periods, but fail to improve quality when fringe patterns are too wide or too narrow [65]. 2D area modulated techniques work well for wide fringe patterns but still offer limited improvements for narrow fringe patterns when the number of pixels is very small. 3D optimization can produce a higher-quality phase than 1D or 2D modulation, but at the cost of data acquisition time. Instead of digitally optimizing the binary patterns, a cylindrical lens was also found effective for improving phase quality [66]. The drawback of such an approach is that it requires additional hardware components besides a standard projector.

D. 3D Surface Metrology System Calibration

System calibration plays a key role for any metrology system, and the system measurement accuracy is largely dependent on the calibration accuracy. This section discusses the calibration approaches used in each category of surface measurement techniques.

1. Interferometry System Calibration

ISO 5436 [67,68] specifies the measurement standards for surface measurement instruments. For interferometry-based surface measurement instruments, vertical and lateral calibrations normally need to be performed before a measurement. The calibrations are normally performed by measuring calibration artifacts according to ISO 5436. Details regarding calibration and verification, including materials, calibration artifacts, filtering and data processing, and software measurement standards, can be found in Ref. [69].

2. Triangulation System Calibration

The triangulation-based SL system can be calibrated using the reference plane approach that was developed for interferometry systems. Basically, this approach measures an “ideal” planar surface as the reference plane, which is required to be parallel to the projector-camera baseline [30], together with other artifacts for spatial and depth calibration. The measured surface is the difference between the actual measurement and the reference plane. This approach is often seen in the literature, where an equation that relates object depth to the phase distribution is calibrated based on the system geometry. This calibration approach works well if both the projector and the camera use telecentric lenses. However, the macroscale SL system typically does not use telecentric lenses.

For an SL system without a telecentric lens, the camera imaging system is often mathematically modeled as a pinhole system [70]. The pinhole model represents two transformations: the transformation from the world coordinate system $({x^w},{y^w},{z^w})$ to the camera lens coordinate system $({x^c},{y^c},{z^c})$ through translation and rotation (i.e., extrinsic parameters); and the transformation from the camera lens coordinate system $({x^c},{y^c},{z^c})$ to the image coordinate system $({u^c},{v^c})$ through projection (i.e., intrinsic parameters). Under an ideal situation without considering lens distortion, the mathematical transformations can be described as matrix operations:

$$s{[{u^c},{v^c},1]^T} = {\bf A} \cdot [{\bf R},{\bf t}] \cdot {[{x^w},{y^w},{z^w},1]^T},$$
where $s$ is a scale factor, $^T$ denotes a matrix transpose, the intrinsic parameters are modeled as a $3 \times 3$ matrix ${\bf A}$ representing the focal length and the principal point of the imaging system, and the extrinsic parameters are modeled as a $3 \times 3$ rotation matrix ${\bf R}$ and a $3 \times 1$ translation vector ${\bf t}$. Camera calibration essentially estimates the intrinsic and extrinsic parameters. One of the most popular camera calibration methods requires only a flat calibration plane with some known feature points (e.g., checkerboard or circle patterns), whose images are processed with existing open-source software packages (e.g., the OpenCV camera calibration toolbox).
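For reference, the forward projection of Eq. (6) can be sketched as follows; the function name and inputs are illustrative, and lens distortion is deliberately omitted here.

```python
import numpy as np

def project_point(A, R, t, P_w):
    """Project a world point to pixel coordinates with the pinhole model, Eq. (6).

    A : 3x3 intrinsic matrix; R : 3x3 rotation; t : length-3 translation;
    P_w : world point (x_w, y_w, z_w). Lens distortion is ignored here.
    """
    P_h = np.append(np.asarray(P_w, dtype=float), 1.0)       # homogeneous coordinates
    uvw = A @ np.hstack([R, np.reshape(t, (3, 1))]) @ P_h     # s * [u_c, v_c, 1]^T
    return uvw[:2] / uvw[2]                                   # divide out the scale factor s
```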

The projector can be regarded as an inverse camera, and thus the same mathematical model can be used to describe the projector. Zhang and Huang [71] developed a method that enables the projector to capture images like a camera. As a result, the camera and the projector are calibrated following the standard stereo calibration [70]. Once the intrinsic and extrinsic parameters have been calibrated, the 3D coordinates of a point are obtained. Later, Li et al. [72] extended such a method for out-of-focus projector calibration, Bell et al. [73] developed a method to calibrate the out-of-focus camera, and An et al. [74] developed a method for large-range SL system calibration.

The above calibration procedure does not take into account lens distortions, although it can be sufficient in applications where high accuracy is not required. In reality, however, the camera and projector lenses have distortions, mostly radial and tangential. These distortions make the imaged points deviate from their ideal locations and introduce systematic errors in the 3D reconstruction [75]. For highly accurate 3D reconstruction, these distortions need to be corrected [76]: before triangulation, a lens distortion correction (also called undistortion) is carried out. There have also been many improvements and innovations to the general calibration methods; for instance, Yin et al. [77] used a bundle adjustment strategy, and Huang et al. [78] employed least-squares algorithms for calibration parameter estimation. An et al. [79] developed a method for large-scale system calibration, and Vargas et al. [80] developed a hybrid method that further improves the calibration accuracy.
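As an illustration of what such a correction involves, the sketch below applies the widely used radial/tangential (Brown-Conrady) distortion model to ideal normalized coordinates; the two-radial, two-tangential coefficient set is a common minimal choice, and inverting this mapping (undistortion) is normally done iteratively, e.g., with OpenCV's undistortPoints.

```python
import numpy as np

def distort_normalized(xn, yn, k1, k2, p1, p2):
    """Radial/tangential (Brown-Conrady) lens distortion of normalized coordinates.

    (xn, yn) are ideal normalized image coordinates (z = 1 plane); k1, k2 are
    radial and p1, p2 tangential distortion coefficients.
    """
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2                          # radial term
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn**2)  # tangential terms added
    yd = yn * radial + p1 * (r2 + 2 * yn**2) + 2 * p2 * xn * yn
    return xd, yd
```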

It is worth noting that the “standard” pinhole model may not work well for high-accuracy 3D surface measurement because it cannot precisely model lens artifacts, especially for affordable lenses. Often a more complex model such as ray tracing could be necessary. Instead of representing the camera imaging system as a smooth function, the ray-tracing method considers each pixel ray independently. Thus, the local distortions of the lens system can be considered. The challenge though is that there is no mature method to be easily adopted for non-experts.

3. Time-of-Flight System Calibration

A ToF camera requires both a standard camera calibration procedure [70] and a distance calibration procedure [81,82]. Since for each point the $(u,v)$ coordinates on the camera and the distance $d$ from the sensor to the object are known, the $(x,y,z)$ coordinates in Cartesian space can be solved with the calibrated camera parameters.

The pinhole model and intrinsic calibration parameters are needed to compute Cartesian 3D points from depth points [26]. The standard calibration follows the same pinhole camera model that we described earlier. However, the typical low resolution of the amplitude image makes it difficult to detect the board reliably. Several heuristic methods have been proposed to improve feature detection and provide a more robust and reliable camera calibration [21].

Although the distance value in ToF seems straightforward to calculate, several factors introduce errors into the estimated distance $d$. There are systematic errors, including distance-distortion errors caused by non-ideal sinusoidal waves in the modulation process and temperature-related drift in the overall depth values. These errors can be compensated for by calibration; as such, thorough distance calibration procedures are required. Typical systematic error compensation methods include using look-up tables (LUTs), B-splines, or polynomials [22]. Note that because the ToF camera measures the time of flight along the light path, the error calibration should be done with respect to the radial distance, not in Cartesian space [81].
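A minimal sketch of such a polynomial compensation is given below, assuming a set of raw ToF radial distances measured against known reference distances (e.g., from a translation stage); the polynomial degree and function name are illustrative.

```python
import numpy as np

def fit_distance_correction(measured, reference, degree=5):
    """Fit a polynomial correction for systematic ToF distance errors (sketch).

    measured  : radial distances reported by the ToF camera at known positions.
    reference : corresponding ground-truth radial distances.
    Returns a callable mapping raw radial distances to corrected ones.
    Note: the correction is applied to radial distance, not Cartesian z.
    """
    measured = np.asarray(measured, dtype=float)
    error = measured - np.asarray(reference, dtype=float)
    coeffs = np.polyfit(measured, error, degree)              # systematic error model
    return lambda d: d - np.polyval(coeffs, d)                # corrected radial distance
```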

There are also unpredictable non-systematic errors. For instance, the SNR distortion appears in scenes not uniformly illuminated. Thus, poorly illuminated areas tend to have higher noise than better illuminated ones. Another source of error is multiple path interference [83,84], in which the sensor captures multiple light reflections. These are often due to surface edges or object concavities. This is a critical problem in ToF, and there have been many attempts at dealing with it via special acquisition conditions and iterative schemes that split the acquired signal into a direct and global component [83].

3. RECENT DEVELOPMENTS

Over the past decades, large strides have been made in the field of 3D surface measurement using active optical methods. This section discusses some of the recent advancements.

A. Microscale 3D Surface Profilometry

Microscale optical interferometry is used widely to measure microscale structures accurately. However, it is extremely sensitive to environmental disturbances such as air turbulence, temperature drift, and mechanical vibration because of the different optical paths of the measurement arm and reference arm. Here, we review recent developments (experimental and theoretical/simulation) that improve optical interferometry, mainly through understanding the mechanisms of noise reduction methods that reduce measurement uncertainty and allow its use outside the lab. One approach is to acquire the measurement data quickly by using a high-speed camera and a fast phase-shifting method, or even by sampling all the measurement data simultaneously [85–88]. Another approach is to arrange the two interference arms in a completely common-path configuration, such as scatterplate interferometers, which are insensitive to such noise [89–91]. These two noise reduction approaches are usually applied to laser-based PSI, which, without employing a multi-wavelength technique, is limited to the measurement of relatively smooth surfaces due to the $2\pi$ phase ambiguity of PSI.

CSI can overcome the $2\pi$ phase ambiguity problem and enable the absolute measurement of the OPD by determining the peak position from the interferogram [11,12,92]. However, the need to perform mechanical scanning of a heavy probe head or the specimen stage limits the measurement speed, which restricts its applications to within the optical laboratory. Many efforts have been made to extend the applications of CSI to in situ measurement and to complex surfaces and measurement situations [93–95].

Wavelength scanning interferometry (WSI) uses the phase shifts caused by wavelength variations to avoid the mechanical scanning used in CSI [96–98]. The absolute OPD can still be measured without any $2\pi$ phase ambiguity. By adding an active servo control system that serves as a phase-compensating mechanism to eliminate the effects of environmental noise, the application of WSI can be extended to in situ/in-process measurement [99].

Multi-wavelength interferometry (MWI) extends the measurement range of single-wavelength PSI from a few hundred nanometers to tens of micrometers by utilizing the synthetic wavelength of MWI [100,101]. Single-shot color PSI has been explored to extend MWI to in situ surface inspection [102].

Focus detection [103–105] and confocal microscopy [106–108] are two techniques used widely for microscale surface measurement. Focus detection microscopy can be used for very rough surfaces, such as rusted metal surfaces, and for multilayer transparent film thickness measurement, but it is difficult to apply to shiny surfaces because the reflections make the focus detection difficult. Confocal microscopy can be applied to the measurement of steep surface slopes far beyond the acceptance angle of the objective lens.

A longstanding competitor in micro-scale 3D surface measurement is microscopic DFP profilometry. This technique is more versatile and less sensitive to environmental disturbances, and it can achieve the high speeds that are desirable for in situ or online measurements. Such systems either modify a standard stereo-microscopic system by replacing one stereo view with a projector [109–113], use small-FOV, non-telecentric lenses with a long working distance (LWD) [114–118], or replace the pinhole lenses of a standard triangulation-based SL system with telecentric lenses [119–121]. The fundamental difference between the interferometry-based surface measurement techniques and this technique is that the former can perform on-axis measurement, while DFP requires triangulation. The triangulation requirement limits its capability, for example, to measure deep holes or sharp edges due to shadow- or occlusion-associated problems.

B. Time-of-Flight Surface Measurement

ToF cameras have been around for over two decades. Recent innovations have substantially improved the performance of ToF cameras and pushed their limits.

The measurement error caused by multipath interference (MPI), i.e., multiple propagation paths from the light source to the receiving pixel, has been one of the most challenging issues for any ToF system. Early approaches made severe assumptions about scene characteristics or relied on placing tags in the scene, which are not practical for many applications. Recently, Whyte et al. [122] attempted to model the multiple paths as a direct return and a global return. Bhandari et al. [83] proposed increasing the sampling of the received signal with two or up to four modulation frequencies for signal separation. Gupta et al. [123] developed a concept called phasor imaging, showing that the global effects vanish for frequencies higher than a certain scene-dependent threshold. This observation allows for recovering depth in the presence of MPI. The approach has been extended, for example, to obtain depth in the presence of fog and other scattering media [124].

A related and important problem in manufacturing is measuring the shape of transparent objects and their backgrounds. The sparse deconvolution approach proposed by Kadambi et al. [125] can not only address the MPI problem, but also detect the background and the transparent object by performing two measurements and processing the inconsistent points between two observations [126,127]. The issues associated with low spatial resolution of ToF cameras have been addressed, for instance, by using an additional color camera and performing upscaling based on the high-resolution color image. General data-fusion methods have been introduced to complement the strengths and limitations of different technologies, such as ToF and stereo vision [128]. Furthermore, instead of fusing multiple sensors, a new paradigm of ToF modulation has shown promise to improve ToF technology. Recent efforts of producing spatially modulated ToF light in a single device can reduce the MPI problem [129] and resolve the $2\pi$ ambiguity without using multiple frequencies [20].

Driven by consumer electronics needs, ToF technology is rapidly evolving. New modulation techniques that do not require fast and expensive electronics are driving the costs down while maintaining good performance. Due to the problems associated with how depth is estimated, we believe that new depth correction methods are still required to ensure ToF extends to high-accuracy applications.

C. Computational Imaging with Structured Light Techniques

Computational imaging (CI) has been around since the early 1990s, but only recently has it begun merging with other forms of imaging or techniques such as machine learning (ML) [130,131]. CI systems typically start from an imperfect physical measurement (often of lower dimensionality) and prior knowledge about the scene or object being imaged, and deliver an estimate of the presented scene or object. In conventional imaging, the optics always maps the luminance at every point in object space to a point in image space. In contrast, in CI, there is no one-to-one mapping; instead, an algorithm constructs the output image or spatial map, typically from a few structured measurements. The premise is that the appearance of most objects in a scene has spatial correlations that, if discovered, could reduce the uncertainty in the recovery of the object's appearance.

Illuminating the scene with structured illumination provides the means to probe it in a precise and controlled way, and thus the CI problem becomes better posed mathematically. Probably one of the most remarkable achievements of CI was the single-pixel camera [132], in which a full-resolution image of a large number of points was recovered from a small number of spatially correlated measurements. Eventually, using similar principles, a 3D ToF single-pixel camera was proposed [133]. Another interesting development of CI was the light field camera [134,135], which allows for image refocusing. Similar to digital holography (DH), the light field camera captures many perspectives of a scene, typically using a lenslet array on top of a regular CCD sensor. The arrangement provides many low-resolution images of the scene that have sufficient redundancy to enable a computational approach to synthesize a high-resolution image focused at almost any depth. Again, this technique was boosted by the use of SL. Cai et al. [136,137] proposed SL field 3D surface measurement aimed at overcoming the limitations of conventional passive light field imaging. The use of phase encoding instead of image structure provides a more reliable mechanism for retrieving accurate depth almost independently over the entire scene [138].

D. Artificial Intelligence for Structured Light Techniques

The rapid development of ML methods in the past two decades and the recent availability of sufficient computation resources have enabled a new approach in the field: data-driven system design [139,140]. The ultimate goal is to enhance the quality of measurement procedures beyond what traditional techniques can deliver.

Although all measurement systems rely on well-understood physical principles, implementing a reliable and stable system is often challenging because the operating conditions in practical applications cannot be fully controlled. Note that operating conditions may include ambient lighting, the type of objects or materials, instrument interference, or sensor temperature, among others. The traditional approaches of cascaded processing stages, such as modulated exposure, denoising, phase unwrapping, and 3D coordinate mapping, provide accurate deterministic outputs if the operating conditions are similar to those of the calibration conditions. However, in general, the operating conditions often change, and accounting for all possible conditions may lead to extremely complex calibration procedures that are too challenging to handle with a single algorithm. In contrast, artificial intelligence (AI) techniques realize an intelligent data treatment that can often capture the behavior of a system without necessarily requiring a priori knowledge. The desired solution of a problem is “learned” through examples instead of being defined by means of algorithmic statements [139].

The idea of AI is straightforward and can be described broadly in two stages. The first stage consists of gathering enough experimental input–output data under different experimental conditions (the input being, for example, raw sensor data, and the output being 3D surface coordinates [141]). In the second stage, an ML architecture [typically a convolutional neural network (CNN)] is trained to obtain a mapping from the input domain to the output domain. The training stage attempts to reduce a global objective function, which could be declared in terms of 3D reconstruction error, phase error, noise reduction, or other quality metrics. While there is still some skepticism about how well these methods can generalize and produce reliable outputs on input data they have not previously “seen” [142], there are many successes to date that give confidence in their use.

Among many successes, AI has been demonstrated in 3D surface measurement for robust phase unwrapping [143,144], high-speed profilometry [145], residual lens distortion correction [146], single-shot profilometry [147,148], robust ToF 3D imaging [141,149], sensor fusion [128], and others. Furthermore, AI techniques shine through exceptional pattern recognition in the most challenging conditions including the identification of projected patterns [150,151] or in 3D recognition [152,153].

Despite these successes, it is still challenging to use AI techniques as the sole processing method for a 3D surface measurement system. Jiao et al. [154] demonstrated that the conventional linear-regression-based methods can outperform deep learning methods, especially when the number of training samples is low. Wang et al. [155] proposed a new middle-ground approach in which a physical model was incorporated in a deep neural network for phase imaging to avoid the training with tens of thousands of labeled data. This approach is already taking place, for instance, using deep neural networks to correct for residual lens distortions [146] that the conventional pinhole method does not account for. We believe that this novel hybrid approach may provide the best flexibility and performance in the design and operation of modern systems for practical applications.

E. Automation

The quality of acquired data depends largely on how it was acquired. Perhaps surprisingly, even for static scenes, determining the optimal exposure to capture objects and scenes that have not been previously characterized is nontrivial. Currently, for most high-end 3D surface measurement instruments, optimal acquisition still requires the intervention of skillful/trained personnel.

Automatically adjusting the camera exposure based on the scene within the FOV has been used extensively in 2D imaging. Yet, the level of automation for advanced 3D surface measurement techniques is far lower than for its 2D counterparts because of the involvement of a projection device. Ekstrand and Zhang [156] developed a method to determine a single optimal exposure time by analyzing a sequence of images with different exposure times. Though successful, such a method is very slow. Similarly, various 3D high dynamic range (HDR) techniques were also developed [157–170]. To determine the desired optimal exposure(s) rapidly without human intervention, Zhang [171] developed a method that can determine the single global optimal exposure time by capturing image(s) with a single exposure for an arbitrary object, and also the HDR exposure times by capturing image(s) with that optimal exposure time.

The state-of-the-art optical surface measurement techniques are designed to work within a fixed focal depth range, and thus adaptively changing the focal plane of the system remains difficult. The recently developed electrically tunable lens (ETL) can control the focal plane of an imaging system in a “known” manner, and thus offers the promise of achieving autofocus for 3D shape measurement systems. Hu et al. [172] developed a single-camera, single-projector system with an ETL attached to the camera; the lens was mathematically modeled as a continuous function of the electric input. Zhong et al. [173] developed a system with a single camera with an ETL and two projectors with standard pinhole lenses. The camera always captures in-focus fringe images to establish the correspondence between points of the two projectors, and triangulation is realized by the two calibrated projectors without involving the camera.

F. Towards Large-Range Measurement

The effective depth measurement range of most systems is limited by several factors, including the DOF, the power of the light source, geometrical arrangement of components, speed of electronics, and type of projected pattern(s), among others. Moreover, most standard calibration techniques are best suited for a limited depth range. As a result, the accuracy quickly degrades when measurements go outside the calibrated range. We discuss here the recent developments attempting to overcome such limitations.

1. Microscopic Systems

There is an inherent trade-off between magnification and DOF when using interferometry-based optical microscopes for 3D surface metrology. Although CSI allows a unique identification of the zeroth order of the fringe pattern regardless of magnification, the need for mechanical scanning renders the technique very limited in terms of the measurement range. In the past two decades, DH has emerged as one of the most promising ways to overcome several of the limitations of conventional optical 3D imaging systems. The main advantage of DH, with respect to classical holography, is the direct access to phase maps by the numerical solution to the diffraction problem. As a result, it offers focus flexibility and 3D imaging properties, among others [174,175].

DH allows tackling the limited DOF for 3D surface reconstruction in the following way. Starting from a single digital hologram, through the reconstruction of numerical images at different image planes (i.e., at different depths $z$), it is possible to obtain an extended focus image with all surface details, without changing the physical distance between the object and the microscope [176]. One of the advantages of DH is the ability to recover the 3D shape of a surface by changing a parameter between recorded states in a known way. This procedure can be done by changing illumination direction, refractive index, or wavelength [177].

DH is a versatile metrological tool for quantitative analysis and inspection of a variety of materials, ranging from surfaces of industrial interest to biological samples. However, current DH still suffers from certain limitations such as the trade-off between the FOV and image resolution.

2. Triangulation-Based Techniques

One of the fundamental principles underlying the calibration of a triangulation-based system is the pinhole camera model. However, this model has a major limitation: a fixed optical center of an optical system does not generally exist, even for an ideal optical system; in general, the optical center can be defined unambiguously only for an ideal lens at one specific object distance [178]. If we add to this limitation the geometric lens distortions and optical aberrations, then it follows that extending the measurement range requires active optical devices, more flexible calibration models, or a combination of both.

The inability to manufacture large-scale calibration artifacts, or the high cost of doing so, poses one of the calibration problems for a triangulation-based system with an extended depth range: if a system is calibrated at a near distance, measuring an object at a far distance yields large errors. To tackle this issue, An et al. [79] proposed a two-step approach in which, first, the intrinsic parameters of the camera and projector are calibrated at a near distance while focused at a far distance, and second, the extrinsic parameters are calibrated with the aid of an additional 3D sensor. This calibration strategy could be promising for tackling this challenging problem.

The type of projected pattern also plays a role in extended-range measurement. Salvi et al. [23] proposed extending the DOF of the projector by using fringe patterns with more than one frequency. This approach reduces the projector’s defocusing effect, but precisely selecting the optimal frequency is often not feasible for a digital video projector. Zhang et al. [179] proposed a method that continuously updates speckle patterns according to the recovered depth map to extend the DOF; unfortunately, the achievable resolution and accuracy of such a speckle-projection system are typically not high. Ultimately, due to the limited DOF of the projector optics, it is desirable to use patterns that are as invariant as possible to defocus. The use of phase shifting with defocused binary patterns has paved the way for extended-range triangulation-based systems [54,72]. Moreover, Ekstrand and Zhang [180] showed that going from perfectly defocused binary patterns to nearly focused ones has a negligible effect if sufficient phase shifts are used.
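The sketch below illustrates the binary defocusing principle: $N$ phase-shifted square binary patterns are blurred (standing in for projector defocus), which turns them into quasi-sinusoidal fringes, and the wrapped phase is then recovered with the standard $N$-step formula. The period, blur width, and number of steps are arbitrary choices for illustration; a real system would capture camera images rather than blur the patterns numerically.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_phase_shifted_patterns(width, period, n_steps):
    """Square binary patterns, the k-th shifted by k*period/n_steps (k = 1..N)."""
    x = np.arange(width, dtype=float)
    return np.array([(np.mod(x - k * period / n_steps, period) < period / 2)
                     .astype(float) for k in range(1, n_steps + 1)])

def wrapped_phase(intensities):
    """Standard N-step least-squares phase from captured intensities I_k (k = 1..N)."""
    n = len(intensities)
    k = np.arange(1, n + 1)[:, None]
    num = np.sum(intensities * np.sin(2 * np.pi * k / n), axis=0)
    den = np.sum(intensities * np.cos(2 * np.pi * k / n), axis=0)
    return np.arctan2(num, den)

period, n_steps, width = 36, 9, 720
patterns = binary_phase_shifted_patterns(width, period, n_steps)
# Projector defocus approximated by a Gaussian blur: binary -> quasi-sinusoidal fringes
captured = np.array([gaussian_filter(p, sigma=6.0) for p in patterns])
phi = wrapped_phase(captured)   # wrapped phase, one value per pixel column
```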

Often, triangulation-based systems are not considered a viable option for carrying out 3D surface measurements at a far distance due to the low power of conventional projectors. However, this limitation can be largely overcome by designing active stereo systems with mechanical projectors that use a rotating wheel coupled to projection optics and a powerful light source [181,182]. This setup enables the use of practically any light source, and due to the high radiant flux, it can measure objects within a large range with high SNR.

Recent advances in ETL have opened a new avenue for research in developing large DOF 3D measurement systems [172,183]. The idea is quite simple, but it did not come to fruition until ETLs became much more reliable in recent years [184]. The camera has an ETL that is controlled by and synchronized with the projector to capture consistently in-focus images of the projected patterns in the scene using different focal length settings. Through a special phase unwrapping method with geometric constraints, Hu et al. [183] obtained a high-quality measurement depth range on the order of 1000 mm (400–1400 mm) with an error of 0.05%. We expect this approach to continue to facilitate the design of robust 3D imaging systems.

3. Time of Flight

The simplest way to avoid the $2\pi$ ambiguity problem in a continuous wave modulation (CWM) ToF system is to reduce the modulation frequency $f$ such that the unambiguous depth range is increased. However, increasing the depth range by lowering the modulation frequency decreases the depth resolution. One of the most used methods for extending the unambiguous range while preserving a high depth resolution is the multifrequency approach; however, it requires acquisitions at multiple frequencies, which can be prone to motion artifacts and/or increase the overall complexity of the system. Recently, spatiotemporal ToF has shown promise for increasing the limited range with a reduced number of observations while simultaneously addressing the MPI problem [20,129].
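For reference, the relations behind this trade-off are simple: the unambiguous range of a CWM ToF system is $c/(2f)$, and a two-frequency scheme extends it to that of the beat (synthetic) frequency while retaining the resolution of the higher frequency. The snippet below evaluates these relations for illustrative modulation frequencies.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def unambiguous_range(f_mod_hz):
    """Maximum depth before the 2*pi phase ambiguity occurs: d_max = c / (2 f)."""
    return C / (2.0 * f_mod_hz)

def depth_from_phase(delta_phi, f_mod_hz):
    """Depth from the measured phase offset: d = c * delta_phi / (4 * pi * f)."""
    return C * delta_phi / (4.0 * math.pi * f_mod_hz)

for f in (100e6, 20e6):
    print(f"f = {f/1e6:5.0f} MHz -> unambiguous range = {unambiguous_range(f):5.2f} m")
print(f"phase pi at 100 MHz -> depth = {depth_from_phase(math.pi, 100e6):.2f} m")

# Two-frequency measurement: the beat (synthetic) frequency sets the extended range
f1, f2 = 100e6, 80e6
print(f"synthetic range for {f1/1e6:.0f}/{f2/1e6:.0f} MHz pair: "
      f"{unambiguous_range(f1 - f2):.2f} m")
```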

4. CHALLENGES

Ever-growing modern smart and flexible manufacturing drives the need for better sensing and metrology tools that are quickly reconfigurable and affordable for quality assurance. High-speed and high-accuracy 3D optical metrology based on structured optical probing has proven extremely valuable for the manufacturing industry because it does not require touching the surface, yet achieves high measurement accuracy and speed. Unfortunately, the state-of-the-art 3D optical metrology methods primarily use a one-size-fits-all approach or often require prohibitively expensive customizations. As such, challenges remain to make advanced 3D shape measurement techniques accessible and available to solve challenging problems in science, engineering, industry, and our daily lives. This section lists some of the challenging problems worth exploring to advance this field further.

A. Low Cost

Although the use of off-the-shelf components has brought down the cost of most 3D imaging devices, when high performance (e.g., accuracy, speed) is required and design customization is necessary, the cost often goes well beyond the affordable range. One possible approach is modular design and manufacturing: standard components can be mass produced at drastically reduced cost and easily “assembled” into an integrated 3D measurement system. However, mass production requires a large quantity of the same part to be manufactured. To achieve this, the metrology community has to work closely with business sectors to develop a large enough market, which is naturally more challenging because technological experts often do not speak the same language as business professionals.

B. Miniaturization

System miniaturization is one of the most important yet challenging tasks for any sensing system. It is encouraging to see miniaturization happening every day: triangulation techniques using statistical patterns and ToF have been embedded into mobile devices. However, neither the resolution nor the accuracy achieved on these devices is comparable to that achieved by advanced 3D optical metrology methods. Efforts on miniaturizing accurate 3D surface measurement methods are highly needed.

C. Repeatability/Reproducibility

Measurement uncertainty and traceability are increasingly considered and studied in the development of surface measurement devices. Generally, if an instrument is well calibrated before a measurement is carried out and the related measurement good practice guide [185] of a particular instrument is followed, the measurement results are considered repeatable and reproducible within the given measurement uncertainty of the instrument. However, such practice has not been employed extensively, or even studied, in the optical metrology field. Further studies of the repeatability and reproducibility of 3D optical imaging instruments are needed, as the operating environment of such an instrument is usually harsher, and the operation and data processing are more complex, compared to interferometry-based surface measurement instruments.

D. Complex and Difficult to Measure Surfaces

Difficult-to-measure surfaces are always a challenge for optical 3D surface measurement methods. Although there are ways to circumvent these difficulties, they typically require additional equipment such as polarizers or special arrangements [186,187]. The implementation of these additional procedures or equipment reduces the flexibility for measuring other surfaces. As discussed earlier, recent developments have opted for specialized codification approaches to avoid the use of additional hardware [188]. The most prominent technique is adaptive pattern projection [165], which works sufficiently well for the intended surfaces, albeit with slow acquisition, and is not a general-purpose codification strategy. Many efforts have been made to handle shiny, transparent, high-dynamic-range, or discontinuous parts [158–162,164–171,187–191]. For instance, SL in the UV or IR range has been used for 3D surface reconstruction of transparent objects [192,193]; however, the measurement errors are typically much higher than those obtained in the visible range.

The most troublesome aspect is that these techniques have been developed in research labs and have not been tested in industrial settings. Ultimately, they are optical methods that will face challenges when measuring optically unfriendly surfaces. Translating these developments to practice requires extensive validation to ensure optimal performance.

E. Metrology-in-the-Loop System

The need for metrology-in-the-loop systems becomes critical for an industry such as additive manufacturing, because each layer should be inspected before moving to the next layer; it is even better if the machine can adjust the next layer based on the current layer’s information. This goal may be achieved if 3D surface metrology is embedded into the manufacturing process such that in situ measurement, in situ data analytics, and in situ decision making can occur while the part is being made. In situ measurement requires robustness and ruggedness of the sensors along with a deeper understanding of the impact of noise and vibration on system performance. Inferring the state of the part at each manufacturing stage requires robust and rapid algorithms for data analytics. In situ control requires robust and efficient algorithms so the machine adapts appropriately without slowing down the manufacturing process. One of the most challenging issues is that software/hardware latency could undesirably slow down the production process.

A metrology-in-the-loop system could be even more valuable to cyber manufacturing. The current practice is that each machine makes its parts independently and relies heavily on each part meeting the desired specifications for the entire system to work. Though successful, such practice can be improved if metrology is brought into the loop. For example, if one part is made without meeting design specifications, can the following parts be adjusted such that the part can still be used? This ambitious goal requires data-driven design and manufacturing to be in the loop as well. Consequently, enormous challenges would emerge because the entire manufacturing process would have to be drastically revolutionized.

F. Interface Between Sensing and Application

The available 3D sensors, especially those designed for consumer electronics, are somewhat automated and easy to use, due largely to the tremendous effort made towards automation. However, the accuracy and resolution of those sensors are not high, which makes it easier to perform measurements without requiring the system to be operated under optimal conditions. Yet most high-accuracy optical 3D surface measurement systems are still not plug-and-play or compliant with industry standards for automation and control, and general-purpose “point-and-shoot” high-accuracy 3D surface measurement tools are rare. Interfacing 3D imaging systems requires the development of middleware that is often an insurmountable barrier for many applications. For 3D technologies to reach their full potential, 3D systems have to be as easy to use as their 2D counterparts. The interface has to be simple enough that users can develop applications without expert knowledge in 3D surface measurement system development. One way to achieve this is automation: the system is fully automated such that no training is required to capture the best quality data. The automation includes auto-exposure, auto-focus, auto-calibration, etc. Of course, achieving all of these together will be a long journey for this field, yet the community could advance it by drawing inspiration from the historical breakthroughs in 2D imaging.

G. Design Optimization

Design optimization is a complex problem without unique solutions. For example, many can follow open knowledge and build a single camera–projector system with a commercial projector and camera. However, not many can achieve the full potential that the hardware permits because (1) the hardware components are not designed or optimized for metrology purposes; (2) the driving software is not designed or optimized for a non-expert to use easily; (3) the optimization of the geometric configuration is not well studied in the literature; and (4) the calibration remains difficult for non-experts, among other reasons. As such, only experts can design and develop optimal solutions. Design optimization involves multiple stages: hardware component design, hardware system design, and software algorithm optimization. Achieving this goal is challenging because of different, and sometimes conflicting, interests from various parties.

H. Self-Calibration

Accurate calibration is difficult and requires sufficient expertise and controlled settings. There is a need to develop self-calibration approaches that require minimal user input or only a rough calibration. Ideally, the system would automatically optimize the calibration parameters to meet specific metrological criteria using affordable standard calibration artifacts.

As discussed earlier, calibrating 3D surface measurement systems is typically an elaborate and lengthy task that requires multiple acquisitions of calibration artifacts and, in some cases, independent pre-calibration of each component. Despite recent developments [194–197], system self-calibration remains very challenging because all calibration parameters need to be estimated simultaneously.

In a general sense, the self-calibration problem is often cast as a constrained optimization problem [196]. Early works realized that, by considering the projector as an inverse camera, a multiview approach with bundle adjustment could be used to carry out system self-calibration [198,199]. However, we should distinguish between fully self-calibrating methods that estimate all calibration parameters, such as those discussed by various teams [194–197,200], and methods that estimate the relative poses of the components with precisely calibrated intrinsic information [201,202]. Nonetheless, achieving a successful calibration with high 3D reconstruction accuracy depends largely on the underlying assumptions. For example, assuming a known 3D geometry [195,200,203] or a good guess of the intrinsic parameters [194] tends to produce satisfactory calibration results; however, those strong assumptions require a priori knowledge that is not too far from the conventional calibration approach. Alternatively, Li et al. [196] reduced the number of assumptions, the strongest one being a non-planar scene, and achieved a reliable 3D reconstruction with acceptable errors. However, the requirement of precise projector intrinsic parameters poses practical challenges, mainly because most of the available projectors are manufactured for purposes other than metrology, and thus the intrinsic parameters (e.g., principal point) can vary dramatically from one device to another. We believe that when the design of projectors for 3D surface measurement systems is standardized, self-calibration could become easier in practice.
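To make the “projector as an inverse camera” formulation concrete, the sketch below jointly refines the projector pose and the 3D points by minimizing the stacked camera and projector reprojection errors with a least-squares solver, assuming known intrinsics (i.e., the relative-pose flavor of self-calibration). The synthetic data, parameterization, and solver settings are illustrative assumptions and do not correspond to any specific cited method.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, rvec, t, X):
    """Pinhole projection of 3D points X (N,3) into pixel coordinates (N,2)."""
    Xc = Rotation.from_rotvec(rvec).apply(X) + t
    x = Xc[:, :2] / Xc[:, 2:3]
    return x @ K[:2, :2].T + K[:2, 2]

def residuals(params, K_cam, K_prj, uv_cam, uv_prj, X0):
    """Stacked reprojection errors in the camera and projector images. The camera
    frame is the world frame; unknowns are the projector pose and all 3D points
    except the first, which is held fixed to remove the global scale ambiguity."""
    rvec, t = params[:3], params[3:6]
    X = np.vstack([X0, params[6:].reshape(-1, 3)])
    r_cam = project(K_cam, np.zeros(3), np.zeros(3), X) - uv_cam
    r_prj = project(K_prj, rvec, t, X) - uv_prj
    return np.concatenate([r_cam.ravel(), r_prj.ravel()])

# Synthetic data with illustrative intrinsics and a known ground-truth projector pose
rng = np.random.default_rng(2)
K_cam = np.array([[1200, 0, 640], [0, 1200, 480], [0, 0, 1]], float)
K_prj = np.array([[900, 0, 512], [0, 900, 400], [0, 0, 1]], float)
X_true = rng.uniform([-0.2, -0.2, 0.8], [0.2, 0.2, 1.2], size=(40, 3))
r_true, t_true = np.array([0.0, 0.2, 0.0]), np.array([-0.15, 0.0, 0.02])
uv_cam = project(K_cam, np.zeros(3), np.zeros(3), X_true)
uv_prj = project(K_prj, r_true, t_true, X_true)

x0 = np.concatenate([np.zeros(3), [-0.1, 0.0, 0.0], (X_true[1:] + 0.01).ravel()])
sol = least_squares(residuals, x0,
                    args=(K_cam, K_prj, uv_cam, uv_prj, X_true[:1]))
print("recovered projector translation:", np.round(sol.x[3:6], 4))
```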

Nevertheless, there are cases where the system can be pre-calibrated, but due to the conditions of the operating environment (e.g., mechanical vibrations), the calibration parameters change solely because the components move relative to each other. This condition is better posed, and several of the existing methods can successfully re-calibrate the system, assuming the new calibration parameters are not too far from the initial calibration. However, full self-calibration with little to no a priori knowledge remains very challenging.

I. Data Management

With 3D surface metrology tools being integrated into mobile devices, manufacturing production lines, surveillance, and other settings, capturing 3D images becomes increasingly easier; consequently, storing and managing the historical data becomes increasingly critical. Representing 3D data in the form of standard meshes (e.g., STL, OBJ, PLY) will soon become an issue because of their large storage requirements. In fact, most standard mesh formats do not take advantage of the inherent structure of 3D surface metrology data and thus store redundant information. For example, an area 3D surface measurement system has natural connectivity information, so the mapping to color or normals can be computed on demand. Standard data structures should therefore be designed and tailored for this community.

Even more urgent is to develop methods to further compress 3D data in a lossy or lossless format (akin to the counterparts for 2D image representations). Large strides have been made on various 3D data representations and compression schemes [204–209], yet none of them has been widely accepted as a common practice. As such, the entire community needs to work together to address the challenging question: how can we come up with methods to effectively store and deliver such enormously large 3D data?
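As one illustration of exploiting the structure of area measurements for compression, the sketch below packs a floating-point depth map into a standard three-channel 8-bit image using sine/cosine phase encoding plus a fringe-order channel, broadly in the spirit of phase-encoding approaches [206,209] but not reproducing any specific published codec; the fringe period and depth values are arbitrary.

```python
import numpy as np

def encode_depth(depth, period):
    """Pack a float depth map into three 8-bit channels: sin/cos of a wrapped
    phase plus the fringe order (assumes fewer than 256 fringe orders)."""
    phase = 2 * np.pi * (depth / period)
    ch_sin = np.round((np.sin(phase) * 0.5 + 0.5) * 255).astype(np.uint8)
    ch_cos = np.round((np.cos(phase) * 0.5 + 0.5) * 255).astype(np.uint8)
    ch_ord = np.floor(depth / period).astype(np.uint8)
    return np.dstack([ch_sin, ch_cos, ch_ord])

def decode_depth(img, period):
    s = img[..., 0] / 255.0 * 2 - 1
    c = img[..., 1] / 255.0 * 2 - 1
    wrapped = np.mod(np.arctan2(s, c), 2 * np.pi)
    return period * (img[..., 2] + wrapped / (2 * np.pi))

# Round trip on a smooth synthetic depth map (values in mm)
yy, xx = np.mgrid[0:480, 0:640]
depth = 250 + 200 * np.sin(xx / 100.0) * np.cos(yy / 120.0)
img = encode_depth(depth, period=10.0)
err = np.abs(decode_depth(img, 10.0) - depth)
# Note: isolated pixels at fringe-order boundaries can show a one-period jump;
# published encoders add a consistency correction to remove such pixels.
print(f"median error: {np.median(err):.4f} mm, 3 bytes/pixel vs 12 for float xyz")
```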

5. SUMMARY

Active structured probing techniques have proven to be one of the most powerful concepts in the design of 3D optical measurement systems. This paper endeavored to review and provide a critical summary of the state-of-the-art techniques for 3D surface measurement. We have shown that probing a surface with an intentionally manipulated light beam is likely the most reliable way to perform non-contact 3D measurements today. While there are still many persisting challenges, we believe the time has come to consolidate best practices into standards and to push forward the integrated design of modern systems. We encourage readers to refer to the original work of each referenced paper and to evaluate it carefully before investing effort in adopting any technique for practical applications.

Funding

Fulbright Colombia (Cohort 2019–2020); Directorate for Computer and Information Science and Engineering (IIS-1637961, IIS-1763689); Engineering and Physical Sciences Research Council (EP/P006930/1, EP/T024844/1).

Acknowledgment

A.G. Marrugo thanks Universidad Tecnológica de Bolívar for a Research Leave Fellowship, and acknowledges support from the Fulbright Commission in Colombia and the Colombian Ministry of Education within the framework of the Fulbright Visiting Scholar Program. F. Gao thanks the EPSRC of the UK for the funding of the EPSRC Future Advanced Metrology Hub and A Multiscale Digital Twin-Driven Smart Manufacturing System for High Value-Added Products. S. Zhang thanks the NSF for its support. The views expressed in this paper are those of the authors and not necessarily those of the sponsors.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. R. Won, “Structured light spiralling up,” Nat. Photonics 11, 619–622 (2017). [CrossRef]  

2. J.-A. Beraldin, B. Carrier, D. MacKinnon, and L. Cournoyer, “Characterization of triangulation-based 3D imaging systems using certified artifacts,” NCSLI Meas. 7, 50–60 (2016). [CrossRef]  

3. K. Creath, “Phase-measurement interferometry techniques,” Prog. Opt. 26, 349–393 (1988). [CrossRef]  

4. D. Malacara, ed., Optical Shop Testing, 3rd ed. (Wiley, 2007).

5. D. C. Ghiglia and M. D. Pritt, eds., Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software (Wiley, 1998).

6. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Laser Eng. 42, 245–261 (2004). [CrossRef]  

7. Y.-Y. Cheng and J. C. Wyant, “Two-wavelength phase shifting interferometry,” Appl. Opt. 23, 4539–4543 (1984). [CrossRef]  

8. Y.-Y. Cheng and J. C. Wyant, “Multiple-wavelength phase shifting interferometry,” Appl. Opt. 24, 804–807 (1985). [CrossRef]  

9. J. Schmit, K. Creath, and J. C. Wyant, Optical Shop Testing, 3rd ed. (Wiley, 2007), Chap. Surface profilers, multiple wavelength, and white light interferometry, pp. 667–755.

10. R. Windecker, P. Haible, and H. Tiziani, “Fast coherence scanning interferometry for measuring smooth, rough and spherical surfaces,” J. Mod. Opt. 42, 2059–2069 (1995). [CrossRef]  

11. T. Dresel, G. Häusler, and H. Venzke, “Three-dimensional sensing of rough surfaces by coherence radar,” Appl. Opt. 31, 919–925 (1992). [CrossRef]  

12. L. Deck and P. De Groot, “High-speed noncontact profiler based on scanning white-light interferometry,” Appl. Opt. 33, 7334–7338 (1994). [CrossRef]  

13. A. Harasaki, J. Schmit, and J. C. Wyant, “Improved vertical-scanning interferometry,” Appl. Opt. 39, 2107–2115 (2000). [CrossRef]  

14. F. Gao, R. K. Leach, J. Petzing, and J. M. Coupland, “Surface measurement errors using commercial scanning white light interferometers,” Meas. Sci. Technol. 19, 015303 (2007). [CrossRef]  

15. J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. 9, 405–407 (1966). [CrossRef]  

16. J. Wyant and V. Bennett, “Using computer generated holograms to test aspheric wavefronts,” Appl. Opt. 11, 2833–2839 (1972). [CrossRef]  

17. J. H. Burge, “Applications of computer-generated holograms for interferometric measurement of large aspheric optics,” Proc. SPIE 2576, 258–269 (1995). [CrossRef]  

18. H. Shen, R. Zhu, Z. Gao, E. Pun, W. Wong, and X. Zhu, “Design and fabrication of computer-generated holograms for testing optical freeform surfaces,” Chin. Opt. Lett. 11, 032201 (2013). [CrossRef]  

19. P. Zanuttigh, G. Marin, C. D. Mutto, F. Minto, and G. M. Cortelazzo, Time-of-Flight and Structured Light Depth Cameras (Springer, 2016).

20. T. Kushida, K. Tanaka, T. Aoto, T. Funatomi, and Y. Mukaigawa, “Phase disambiguation using spatio-temporally modulated illumination in depth sensing,” IPSJ Trans. Comput. Vis. Appl. 12, 1 (2020). [CrossRef]  

21. M. Hansard, S. Lee, O. Choi, and R. Horaud, Time-of-Flight Cameras, Principles, Methods and Applications (Springer, 2013).

22. S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J. 11, 1917–1926 (2011). [CrossRef]  

23. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43, 2666–2680 (2010). [CrossRef]  

24. S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Laser Eng. 106, 119–131 (2018). [CrossRef]  

25. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002). [CrossRef]  

26. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).

27. S. Rusinkiewicz, O. Hall-Holt, and M. Levoy, “Real-time 3D model acquisition,” ACM Trans. Graph. 21, 438–446 (2002). [CrossRef]  

28. O. Hall-Holt and S. Rusinkiewicz, “Stripe boundary codes for real-time structured-light range scanning of moving objects,” in 8th IEEE International Conference on Computer Vision (2001), pp. 359–366.

29. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Laser Eng. (to be published). [CrossRef]  

30. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22, 3977–3982 (1983). [CrossRef]  

31. Q. Kemao, “Windowed Fourier transform for fringe pattern analysis,” Appl. Opt. 43, 2695–2702 (2004). [CrossRef]  

32. Q. Kemao, “Two-dimensional windowed Fourier transform for fringe pattern analysis: principles, applications and implementations,” Opt. Laser. Eng. 45, 304–317 (2007). [CrossRef]  

33. K. Qian, “Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Laser Eng. 48, 141–148 (2010). [CrossRef]  

34. K. Qian, “Applications of windowed Fourier fringe analysis in optical measurement: a review,” Opt. Laser Eng. 66, 67–73 (2015). [CrossRef]  

35. L. Guo, X. Su, and J. Li, “Improved Fourier transform profilometry for the automatic measurement of 3D object shapes,” Opt. Eng. 29, 1439–1444 (1990). [CrossRef]  

36. H. Guo and P. S. Huang, “Absolute phase technique for the Fourier transform method,” Opt. Eng. 48, 043609 (2009). [CrossRef]  

37. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: a review,” Opt. Laser Eng. 48, 191–204 (2010). [CrossRef]  

38. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Laser Eng. 50, 1097–1106 (2012). [CrossRef]  

39. M. Takeda, “Fourier fringe analysis and its applications to metrology of extreme physical phenomena: a review,” Appl. Opt. 52, 20–29 (2013). [CrossRef]  

40. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38, 6565–6573 (1999). [CrossRef]  

41. Y. Wang and S. Zhang, “Novel phase coding method for absolute phase retrieval,” Opt. Lett. 37, 2067–2069 (2012). [CrossRef]  

42. C. Zuo, L. Huan, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review,” Opt. Laser Eng. 85, 84–103 (2016). [CrossRef]  

43. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: a review,” Opt. Laser Eng. 107, 28–37 (2018). [CrossRef]  

44. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Laser Eng. 51, 1213–1222 (2013). [CrossRef]  

45. Z. Li, K. Zhong, Y. Li, X. Zhou, and Y. Shi, “Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38, 1389–1391 (2013). [CrossRef]  

46. Y. R. Huddart, J. D. R. Valera, N. J. Weston, and A. J. Moore, “Absolute phase measurement in fringe projection using multiple perspectives,” Opt. Express 21, 21119–21130 (2013). [CrossRef]  

47. Y. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24, 18445–18459 (2016). [CrossRef]  

48. W. Cruz-Santos and L. Lopez-Garcia, “Implicit absolute phase retrieval in digital fringe projection without reference lines,” Appl. Opt. 54, 1688–1695 (2015). [CrossRef]  

49. S. Zhang and S.-T. Yau, “High-resolution, real-time 3-D absolute coordinate measurement based on a phase-shifting method,” Opt. Express 14, 2644–2649 (2006). [CrossRef]  

50. X. Su, Q. Zhang, Y. Xiao, and L. Xiang, “Dynamic 3-D shape measurement techniques with marked fringes tracking,” in Fringe (2009), pp. 493–496.

51. D. Zheng, Q. Kemao, F. Da, and H. S. Seah, “Ternary gray code-based phase unwrapping for 3D measurement using binary patterns with projector defocusing,” Appl. Opt. 56, 3660–3665 (2017). [CrossRef]  

52. C. Zhou, T. Liu, S. Si, J. Xu, Y. Liu, and Z. Lei, “Phase coding method for absolute phase retrieval with a large number of codewords,” Opt. Express 20, 24139–24150 (2012). [CrossRef]  

53. X. Y. Su, W. S. Zhou, G. Von Bally, and D. Vukicevic, “Automated phase-measuring profilometry using defocused projection of a Ronchi grating,” Opt. Commun. 94, 561–573 (1992). [CrossRef]  

54. S. Lei and S. Zhang, “Flexible 3-D shape measurement using projector defocusing,” Opt. Lett. 34, 3080–3082 (2009). [CrossRef]  

55. S. Zhang, D. van der Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18, 9684–9689 (2010). [CrossRef]  

56. S. Lei and S. Zhang, “Digital sinusoidal fringe generation: defocusing binary patterns vs focusing sinusoidal patterns,” Opt. Laser Eng. 48, 561–569 (2010). [CrossRef]  

57. B. Li and S. Zhang, “Microscopic structured light 3D profilometry: binary defocusing technique vs sinusoidal fringe projection,” Opt. Laser Eng. 96, 117–123 (2017). [CrossRef]  

58. G. A. Ayubi, J. A. Ayubi, J. M. D. Martino, and J. A. Ferrari, “Pulse-width modulation in defocused 3-D fringe projection,” Opt. Lett. 35, 3682–3684 (2010). [CrossRef]  

59. Y. Wang and S. Zhang, “Optimal pulse width modulation for sinusoidal fringe generation with projector defocusing,” Opt. Lett. 35, 4121–4123 (2010). [CrossRef]  

60. T. Xian and X. Su, “Area modulation grating for sinusoidal structure illumination on phase-measuring profilometry,” Appl. Opt. 40, 1201–1206 (2001). [CrossRef]  

61. W. Lohry and S. Zhang, “Genetic method to optimize binary dithering technique for high-quality fringe generation,” Opt. Lett. 38, 540–542 (2013). [CrossRef]  

62. J. Dai, B. Li, and S. Zhang, “High-quality fringe patterns generation using binary pattern optimization through symmetry and periodicity,” Opt. Laser Eng. 52, 195–200 (2014). [CrossRef]  

63. J. Zhu, P. Zhou, X. Su, and Z. You, “Accurate and fast 3D surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24, 28549–28560 (2016). [CrossRef]  

64. Y. Wang, C. Jiang, and S. Zhang, “Double-pattern triangular pulse width modulation technique for high-accuracy high-speed 3D shape measurement,” Opt. Express 25, 30177–30188 (2017). [CrossRef]  

65. Y. Wang and S. Zhang, “Comparison among square binary, sinusoidal pulse width modulation, and optimal pulse width modulation methods for three-dimensional shape measurement,” Appl. Opt. 51, 861–872 (2012). [CrossRef]  

66. M.-A. Drouin, G. Godin, M. Picard, J. Boisvert, and L.-G. Dicaire, “Structured-light systems using a programmable quasi-analogue projection subsystem,” Proc. SPIE 11294, 112940O (2020). [CrossRef]  

67. “Geometrical product specifications (GPS)—surface texture: profile method; measurement standards—part 1: material measures,” Standard ISO 5436-1: 2000 (International Organization for Standardization, 2000).

68. “Geometrical product specifications (GPS)—surface texture: profile method; measurement standards—part 2: software measurement standards,” Standard ISO 5436-2:2012 (International Organization for Standardization, 2012).

69. R. K. Leach, C. Giusca, H. Haitjema, C. Evans, and X. Jiang, “Calibration and verification of areal surface texture measuring instruments,” CIRP Ann. 64, 797–813 (2015). [CrossRef]  

70. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000). [CrossRef]  

71. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45, 083601 (2006). [CrossRef]  

72. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured light system with an out-of-focus projector,” Appl. Opt. 53, 3415–3426 (2014). [CrossRef]  

73. T. Bell and S. Zhang, “Method for out-of-focus camera calibration,” Appl. Opt. 55, 2346–2352 (2016). [CrossRef]  

74. Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, “Novel method for large range structured light system calibration,” Appl. Opt. 55, 9563–9572 (2016). [CrossRef]  

75. K. Li, J. Bu, and D. Zhang, “Lens distortion elimination for improving measurement accuracy of fringe projection profilometry,” Opt. Laser Eng. 85, 53–64 (2016). [CrossRef]  

76. R. Vargas, A. G. Marrugo, J. Pineda, J. Meneses, and L. A. Romero, “Camera-projector calibration methods with compensation of geometric distortions in fringe projection profilometry: a comparative study,” Opt. Pura Appl. 51, 50305 (2018). [CrossRef]  

77. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37, 542–544 (2012). [CrossRef]  

78. L. Huang, P. S. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49, 1539–1548 (2010). [CrossRef]  

79. Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, “Method for large-range structured light system calibration,” Appl. Opt. 55, 9563–9572 (2016). [CrossRef]  

80. R. Vargas, A. G. Marrugo, S. Zhang, and L. A. Romero, “Hybrid calibration procedure for fringe projection profilometry based on stereo vision and polynomial fitting,” Appl. Opt. 59, D163–D167 (2020). [CrossRef]  

81. D. Lefloch, R. Nair, F. Lenzen, H. Schäfer, L. Streeter, M. J. Cree, R. Koch, and A. Kolb, “Technical foundation and calibration methods for time-of-flight cameras,” in Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications (Springer, 2013), pp. 3–24.

82. S. Fuchs and G. Hirzinger, “Extrinsic and depth calibration of ToF-cameras,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2008), pp. 1–6.

83. A. Bhandari, A. Kadambi, R. Whyte, C. Barsi, M. Feigin, A. Dorrington, and R. Raskar, “Resolving multipath interference in time-of-flight imaging via modulation frequency diversity and sparse regularization,” Opt. Lett. 39, 1705–1708 (2014). [CrossRef]  

84. A. Jarabo, B. Masia, J. Marco, and D. Gutierrez, “Recent advances in transient imaging: a computer graphics and vision perspective,” Vis. Inf. 1, 65–79 (2017). [CrossRef]  

85. C. L. Koliopoulos, “Simultaneous phase-shift interferometer,” Proc. SPIE 1531, 119–127 (1992). [CrossRef]  

86. B. Ngoi, K. Venkatakrishnan, and N. Sivakumar, “Phase-shifting interferometry immune to vibration,” Appl. Opt. 40, 3211–3214 (2001). [CrossRef]  

87. J. E. Millerd, N. J. Brock, J. B. Hayes, and J. C. Wyant, “Instantaneous phase-shift point-diffraction interferometer,” Proc. SPIE 5531, 264–272 (2004). [CrossRef]  

88. H. Kihm and S.-W. Kim, “Fiber-diffraction interferometer for vibration desensitization,” Opt. Lett. 30, 2059–2061 (2005). [CrossRef]  

89. J. Huang, T. Honda, N. Ohyama, and J. Tsujiuchi, “Fringe scanning scatter plate interferometer using a polarized light,” Opt. Commun. 68, 235–238 (1988). [CrossRef]  

90. M. B. North-Morris, J. VanDelden, and J. C. Wyant, “Phase-shifting birefringent scatterplate interferometer,” Appl. Opt. 41, 668–677 (2002). [CrossRef]  

91. D.-C. Su and L.-H. Shyu, “Phase shifting scatter plate interferometer using a polarization technique,” J. Mod. Opt. 38, 951–959 (1991). [CrossRef]  

92. G. S. Kino and S. S. Chim, “Mirau correlation microscope,” Appl. Opt. 29, 3775–3783 (1990). [CrossRef]  

93. C. Gomez, R. Su, P. De Groot, and R. Leach, “Noise reduction in coherence scanning interferometry for surface topography measurement,” Nanomanuf. Metrol. 3, 68–76 (2020). [CrossRef]  

94. H. Altamar-Mercado, A. Patiño-Vanegas, and A. G. Marrugo, “Robust 3D surface recovery by applying a focus criterion in white light scanning interference microscopy,” Appl. Opt. 58, A101–A111 (2019). [CrossRef]  

95. M. Thomas, R. Su, N. Nikolaev, J. Coupland, and R. K. Leach, “Modeling of interference microscopy beyond the linear regime,” Opt. Eng. 59, 034110 (2020). [CrossRef]  

96. S. Kuwamura and I. Yamaguchi, “Wavelength scanning profilometry for real-time surface shape measurement,” Appl. Opt. 36, 4473–4482 (1997). [CrossRef]  

97. D. S. Mehta, S. Saito, H. Hinosugi, M. Takeda, and T. Kurokawa, “Spectral interference Mirau microscope with an acousto-optic tunable filter for three-dimensional surface profilometry,” Appl. Opt. 42, 1296–1305 (2003). [CrossRef]  

98. K. Hibino, B. F. Oreb, P. S. Fairman, and J. Burke, “Simultaneous measurement of surface shape and variation in optical thickness of a transparent parallel plate in wavelength-scanning Fizeau interferometer,” Appl. Opt. 43, 1241–1249 (2004). [CrossRef]  

99. X. Jiang, K. Wang, F. Gao, and H. Muhamedsalih, “Fast surface measurement using wavelength scanning interferometry with compensation of environmental noise,” Appl. Opt. 49, 2903–2909 (2010). [CrossRef]  

100. G. Bourdet and A. Orszag, “Absolute distance measurements by CO2 laser multiwavelength interferometry,” Appl. Opt. 18, 225–227 (1979). [CrossRef]  

101. K.-H. Bechstein and W. Fuchs, “Absolute interferometric distance measurements applying a variable synthetic wavelength (mesures de distances absolues par interférométrie utilisant une longueur d’onde variable synthétique),” J. Opt. 29, 179 (1998). [CrossRef]  

102. H. Muhamedsalih, S. Al-Bashir, F. Gao, and X. Jiang, “Single-shot RGB polarising interferometer,” Proc. SPIE 10749, 1074909 (2018). [CrossRef]  

103. J. Kagami, T. Hatazawa, and K. Koike, “Measurement of surface profiles by the focusing method,” Wear 134, 221–229 (1989). [CrossRef]  

104. M. Visscher and K. Struik, “Optical profilometry and its application to mechanically inaccessible surfaces part I: principles of focus error detection,” Precis. Eng. 16, 192–198 (1994). [CrossRef]  

105. M. Visscher, C. Hendriks, and K. Struik, “Optical profilometry and its application to mechanically inaccessible surfaces part ii: application to elastometer/glass contacts,” Precis. Eng. 16, 199–204 (1994). [CrossRef]  

106. M. Minsky, “Memoir on inventing the confocal scanning microscope,” Scanning 10, 128–138 (1988). [CrossRef]  

107. D. Hamilton and T. Wilson, “Surface profile measurement using the confocal microscope,” J. Appl. Phys. 53, 5320–5322 (1982). [CrossRef]  

108. H.-J. Jordan, M. Wegner, and H. Tiziani, “Highly accurate non-contact characterization of engineering surfaces using confocal microscopy,” Meas. Sci. Technol. 9, 1142 (1998). [CrossRef]  

109. R. Windecker, M. Fleischer, and H. J. Tiziani, “Three-dimensional topometry with stereo microscopes,” Opt. Eng. 36, 3372–3377 (1997). [CrossRef]  

110. C. Zhang, P. S. Huang, and F.-P. Chiang, “Microscopic phase-shifting profilometry based on digital micromirror device technology,” Appl. Opt. 41, 5896–5904 (2002). [CrossRef]  

111. K.-P. Proll, J.-M. Nivet, K. Körner, and H. J. Tiziani, “Microscopic three-dimensional topometry with ferroelectric liquid-crystal-on-silicon displays,” Appl. Opt. 42, 1773–1778 (2003). [CrossRef]  

112. R. Rodriguez-Vera, K. Genovese, J. Rayas, and F. Mendoza-Santoyo, “Vibration analysis at microscale by Talbot fringe projection method,” Strain 45, 249–258 (2009). [CrossRef]  

113. A. Li, X. Peng, Y. Yin, X. Liu, Q. Zhao, K. Körner, and W. Osten, “Fringe projection based quantitative 3D microscopy,” Optik 124, 5052–5056 (2013). [CrossRef]  

114. C. Quan, X. Y. He, C. F. Wang, C. J. Tay, and H. M. Shang, “Shape measurement of small objects using LCD fringe projection with phase shifting,” Opt. Commun. 189, 21–29 (2001). [CrossRef]  

115. C. Quan, C. J. Tay, X. Y. He, X. Kang, and H. M. Shang, “Microscopic surface contouring by fringe projection method,” Opt. Laser Technol. 34, 547–552 (2002). [CrossRef]  

116. J. Chen, T. Guo, L. Wang, Z. Wu, X. Fu, and X. Hu, “Microscopic fringe projection system and measuring method,” Proc. SPIE 8759, 87594U (2013). [CrossRef]  

117. D. S. Mehta, M. Inam, J. Prakash, and A. Biradar, “Liquid-crystal phase-shifting lateral shearing interferometer with improved fringe contrast for 3D surface profilometry,” Appl. Opt. 52, 6119–6125 (2013). [CrossRef]  

118. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23, 6846–6857 (2015). [CrossRef]  

119. D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Laser Eng. 51, 538–541 (2013). [CrossRef]  

120. D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express 22, 31826–31835 (2014). [CrossRef]  

121. B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express 23, 25795–25803 (2015). [CrossRef]  

122. R. Whyte, L. Streeter, M. J. Cree, and A. A. Dorrington, “Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods,” Opt. Eng. 54, 113109 (2015). [CrossRef]  

123. M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: a generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34, 1–18 (2015). [CrossRef]  

124. T. Muraji, K. Tanaka, T. Funatomi, and Y. Mukaigawa, “Depth from phasor distortions in fog,” Opt. Express 27, 18858–18868 (2019). [CrossRef]  

125. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32, 1–10 (2013). [CrossRef]  

126. S. Lee and H. Shim, “Skewed stereo time-of-flight camera for translucent object imaging,” Image Vis. Comput. 43, 27–38 (2015). [CrossRef]  

127. K. Tanaka, Y. Mukaigawa, H. Kubo, Y. Matsushita, and Y. Yagi, “Recovering transparent shape from time-of-flight distortion,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 4387–4395.

128. M. Poggi, G. Agresti, F. Tosi, P. Zanuttigh, and S. Mattoccia, “Confidence estimation for ToF and stereo sensors and its application to depth data fusion,” IEEE Sens. J. 20, 1411–1421 (2020). [CrossRef]  

129. G. Agresti and P. Zanuttigh, “Combination of spatially-modulated ToF and structured light for MPI-free depth estimation,” in Proceedings of the European Conference on Computer Vision (ECCV) (2018).

130. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photon. 10, 409–475 (2018). [CrossRef]  

131. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019). [CrossRef]  

132. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

133. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016). [CrossRef]  

134. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992). [CrossRef]  

135. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 972–986 (2011). [CrossRef]  

136. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3D imaging,” Opt. Express 24, 20324–20334 (2016). [CrossRef]  

137. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26, 7598–7613 (2018). [CrossRef]  

138. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Accurate depth estimation in structured light fields,” Opt. Express 27, 13532–13546 (2019). [CrossRef]  

139. C. Alippi, A. Ferrero, and V. Piuri, “Artificial intelligence for instruments and measurement applications,” IEEE Instrum. Meas. Mag. 1(2), 9–17 (1998). [CrossRef]  

140. A. Halevy, P. Norvig, and F. Pereira, “The unreasonable effectiveness of data,” IEEE Intell. Syst. 24, 8–12 (2009). [CrossRef]  

141. S. Su, F. Heide, G. Wetzstein, and W. Heidrich, “Deep end-to-end time-of-flight imaging,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6383–6392.

142. D. Weichert, P. Link, A. Stoll, S. Rüping, S. Ihlenfeldt, and S. Wrobel, “A review of machine learning for the optimization of production processes,” Int. J. Adv. Manuf. Technol. 104, 1889–1902 (2019). [CrossRef]  

143. W. Yin, Q. Chen, S. Feng, T. Tao, L. Huang, M. Trusiak, A. Asundi, and C. Zuo, “Temporal phase unwrapping using deep learning,” Sci. Rep. 9, 1–12 (2019). [CrossRef]  

144. K. Wang, Y. Li, Q. Kemao, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27, 15100–15115 (2019). [CrossRef]  

145. S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Laser Eng. 121, 416–427 (2019). [CrossRef]  

146. S. Lv, Q. Sun, Y. Zhang, Y. Jiang, J. Yang, J. Liu, and J. Wang, “Projector distortion correction in 3D shape measurement using a structured-light system by deep neural networks,” Opt. Lett. 45, 204–207 (2020). [CrossRef]  

147. S. Van der Jeught and J. J. J. Dirckx, “Deep neural networks for single shot structured light profilometry,” Opt. Express 27, 17091–17101 (2019). [CrossRef]  

148. J. Qian, S. Feng, Y. Li, T. Tao, J. Han, Q. Chen, and C. Zuo, “Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry,” Opt. Lett. 45, 1842–1844 (2020). [CrossRef]  

149. J. Marco, Q. Hernandez, A. Muñoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deep ToF: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36, 1–12 (2017). [CrossRef]  

150. S. Zhan, T. Suming, G. Feifei, S. Chu, and F. Jianyang, “DOE-based structured-light method for accurate 3D sensing,” Opt. Laser Eng. 120, 21–30 (2019). [CrossRef]  

151. Budianto and D. P. K. Lun, “Robust fringe projection profilometry via sparse representation,” IEEE Trans. Image Process. 25, 1726–1739 (2016). [CrossRef]  

152. H. Guo, “Face recognition based on fringe pattern analysis,” Opt. Eng. 49, 037201 (2010). [CrossRef]  

153. F. Liu, D. Zhang, and L. Shen, “Study on novel curvature features for 3D fingerprint recognition,” Neurocomputing 168, 599–608 (2015). [CrossRef]  

154. S. Jiao, Y. Gao, J. Feng, T. Lei, and X. Yuan, “Does deep learning always outperform simple linear regression in optical imaging?” Opt. Express 28, 3717–3731 (2020). [CrossRef]  

155. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light Sci. Appl. 9, 77 (2020). [CrossRef]  

156. L. Ekstrand and S. Zhang, “Auto-exposure for three-dimensional shape measurement with a digital-light-processing projector,” Opt. Eng. 50, 123603 (2011). [CrossRef]  

157. B. Chen and S. Zhang, “High-quality 3D shape measurement using saturated fringe patterns,” Opt. Laser Eng. 87, 83–89 (2016). [CrossRef]  

158. S. Zhang and S.-T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48, 033604 (2009). [CrossRef]  

159. C. Waddington and J. Kofman, “Analysis of measurement sensitivity to illuminance and fringe-pattern gray levels for fringe-pattern projection adaptive to ambient lighting,” Opt. Laser Eng. 48, 251–256 (2010). [CrossRef]  

160. C. Jiang, T. Bell, and S. Zhang, “High dynamic range real-time 3D shape measurement,” Opt. Express 24, 7337–7346 (2016). [CrossRef]  

161. Y. Zheng, Y. Wang, V. Suresh, and B. Li, “Real-time high-dynamic-range fringe acquisition for 3D shape measurement with a RGB camera,” Meas. Sci. Technol. 30, 075202 (2019). [CrossRef]  

162. V. Suresh, Y. Wang, and B. Li, “High-dynamic-range 3D shape measurement utilizing the transitioning state of digital micromirror device,” Opt. Laser Eng. 107, 176–181 (2018). [CrossRef]  

163. B. Salahieh, Z. Chen, J. J. Rodriguez, and R. Liang, “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22, 10064–10071 (2014). [CrossRef]  

164. H. Lin, J. Gao, Q. Mei, Y. He, J. Liu, and X. Wang, “Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment,” Opt. Laser Eng. 91, 206–215 (2017). [CrossRef]  

165. D. Li and J. Kofman, “Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement,” Opt. Express 22, 9887–9901 (2014). [CrossRef]  

166. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: a novel 3-D scanning technique for high-reflective surfaces,” Opt. Laser Eng. 50, 1484–1493 (2012). [CrossRef]  

167. H. Zhao, X. Liang, X. Diao, and H. Jiang, “Rapid in-situ 3D measurement of shiny object based on fast and high dynamic range digital fringe projector,” Opt. Laser Eng. 54, 170–174 (2014). [CrossRef]  

168. C. Chen, N. Gao, X. Wang, and Z. Zhang, “Adaptive projection intensity adjustment for avoiding saturation in three-dimensional shape measurement,” Opt. Commun. 410, 694–702 (2017). [CrossRef]  

169. S. Feng, Y. Zhang, Q. Chen, C. Zuo, R. Li, and G. Shen, “General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique,” Opt. Laser Eng. 59, 56–71 (2014). [CrossRef]  

170. S. Ri, M. Fujigaki, and Y. Morimoto, “Intensity range extension method for three-dimensional shape measurement in phase- measuring profilometry using a digital micromirror device camera,” Appl. Opt. 47, 5400–5407 (2008). [CrossRef]  

171. S. Zhang, “Rapid and automatic optimal exposure control for digital fringe projection technique,” Opt. Laser Eng. 128, 106029 (2020). [CrossRef]  

172. X. Hu, G. Wang, J.-S. Hyun, Y. Zhang, H. Yang, and S. Zhang, “Autofocusing method for high-resolution three-dimensional profilometry,” Opt. Lett. 45, 375–378 (2020). [CrossRef]  

173. M. Zhong, X. Hu, F. Chen, C. Xiao, D. Peng, and S. Zhang, “Autofocusing method for digital fringe projection system with dual projectors,” Opt. Express 28, 12609–12620 (2020). [CrossRef]  

174. M. K. Kim, “Principles and techniques of digital holographic microscopy,” SPIE Rev. 1, 018005 (2010). [CrossRef]  

175. M. Paturzo, V. Pagliarulo, V. Bianco, P. Memmolo, L. Miccio, F. Merola, and P. Ferraro, “Digital holography, a metrological tool for quantitative analysis: trends and future applications,” Opt. Laser Eng. 104, 32–47 (2018). [CrossRef]  

176. P. Ferraro, S. Grilli, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, “Extended focused image in microscopy by digital Holography,” Opt. Express 13, 6738–6749 (2005). [CrossRef]  

177. T. Kreis, “Application of digital holography for nondestructive testing and metrology: a review,” IEEE Trans. Ind. Inf. 12, 240–247 (2016). [CrossRef]  

178. A. Mikš and J. Novák, “Analysis of the optical center position of an optical system of a camera lens,” Appl. Opt. 57, 4409–4414 (2018). [CrossRef]  

179. Y. Zhang, Z. Xiong, P. Cong, and F. Wu, “Robust depth sensing with adaptive structured light illumination,” J. Visual Commun. Image Represent. 25, 649–658 (2014). [CrossRef]  

180. L. Ekstrand and S. Zhang, “Three-dimensional profilometry with nearly focused binary phase-shifting algorithms,” Opt. Lett. 36, 4518–4520 (2011). [CrossRef]  

181. J.-S. Hyun, G. T. C. Chiu, and S. Zhang, “High-speed and high-accuracy 3D surface measurement using a mechanical projector,” Opt. Express 26, 1474–1487 (2018). [CrossRef]  

182. S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using GOBO projection,” Opt. Laser Eng. 87, 90–96 (2016). [CrossRef]  

183. X. Hu, G. Wang, Y. Zhang, H. Yang, and S. Zhang, “Large depth-of-field 3D shape measurement using an electrically tunable lens,” Opt. Express 27, 29697–29709 (2019). [CrossRef]  

184. W. Torres-Sepúlveda, J. Henao, J. Morales-Marín, A. Mira-Agudelo, and E. Rueda, “Hysteresis characterization of an electrically focus-tunable lens,” Opt. Eng. 59, 044103 (2020). [CrossRef]  

185. R. Leach, L. Brown, J. Jiang, R. Blunt, M. Conroy, and D. Mauger, Guide to the Measurement of Smooth Surface Topography using Coherence Scanning Interferometry (2008).

186. T. Chen, H. P. Lensch, C. Fuchs, and H.-P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

187. R. M. Kowarschik, J. Gerber, G. Notni, W. Schreiber, and P. Kuehmstedt, “Adaptive optical 3D measurement with structured light,” Opt. Eng. 39, 150–158 (2000). [CrossRef]  

188. H. Lin, J. Gao, G. Zhang, X. Chen, Y. He, and Y. Liu, “Review and comparison of high-dynamic range three-dimensional shape measurement techniques,” J. Sens. 2017, 9576850 (2017). [CrossRef]  

189. H. Lin, J. Gao, Q. Mei, Y. He, J. Liu, and X. Wang, “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Opt. Express 24, 7703–7718 (2016). [CrossRef]  

190. G.-H. Liu, X.-Y. Liu, and Q.-Y. Feng, “3D shape measurement of objects with high dynamic range of surface reflectivity,” Appl. Opt. 50, 4557–4565 (2011). [CrossRef]  

191. P. Lutzke, “Measuring error compensation on three-dimensional scans of translucent objects,” Opt. Eng. 50, 063601 (2011). [CrossRef]  

192. R. Ran, C. Stolz, D. Fofi, and F. Meriaudeau, “Non contact 3D measurement scheme for transparent objects using UV structured light,” in 20th International Conference on Pattern Recognition (ICPR) (IEEE, 2010), pp. 1646–1649.

193. A. Brahm, C. Rößler, P. Dietrich, S. Heist, P. Kühmstedt, and G. Notni, “Non-destructive 3D shape measurement of transparent and black objects with thermal fringes,” Proc. SPIE 9868, 98680C (2016). [CrossRef]  

194. S. Yamazaki, M. Mochimaru, and T. Kanade, “Simultaneous self-calibration of a projector and a camera using structured light,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops) (IEEE, 2011), pp. 60–67.

195. R. Orghidan, J. Salvi, M. Gordan, C. Florea, and J. Batlle, “Structured light self-calibration with vanishing points,” Mach. Vis. Appl. 25, 489–500 (2014). [CrossRef]  

196. F. Li, H. Sekkati, J. Deglint, C. Scharfenberger, M. Lamm, D. Clausi, J. Zelek, and A. Wong, “Simultaneous projector-camera self-calibration for three-dimensional reconstruction and projection mapping,” IEEE Trans. Comput. Imaging 3, 74–83 (2017). [CrossRef]  

197. S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, “Simultaneous reconstruction and calibration for multi-view structured light scanning,” J. Visual Commun. Image Represent. 39, 120–131 (2016). [CrossRef]  

198. W. Schreiber and G. Notni, “Theory and arrangements of self-calibrating whole-body 3-D-measurement systems using fringe projection technique,” Opt. Eng. 39, 159–169 (2000). [CrossRef]  

199. J. Tian, Y. Ding, and X. Peng, “Self-calibration of a fringe projection system using epipolar constraint,” Opt. Laser Technol. 40, 538–544 (2008). [CrossRef]  

200. C. Resch, P. Keitler, C. Menk, and G. Klinker, “Semi-automatic calibration of a projector-camera system using arbitrary objects with known geometry,” in IEEE Virtual Reality (VR) (2015), pp. 271–272.

201. H. Kawasaki, R. Sagawa, Y. Yagi, R. Furukawa, N. Asada, and P. Sturm, “One-shot scanning method using an uncalibrated projector and camera system,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Workshops (2010), pp. 104–111.

202. B. Zhang and Y. Li, “Dynamic calibration of the relative pose and error analysis in a structured light system,” J. Opt. Soc. Am. A 25, 612–622 (2008). [CrossRef]  

203. D. D. Lichti, C. Kim, and S. Jamtsho, “An integrated bundle adjustment approach to range camera geometric self-calibration,” ISPRS J. Photogramm. Remote Sens. 65, 360–368 (2010). [CrossRef]  

204. N. Karpinsky and S. Zhang, “Holovideo: real-time 3D video encoding and decoding on gpu,” Opt. Laser Eng. 50, 280–286 (2012). [CrossRef]  

205. Z. Hou, X. Su, and Q. Zhang, “Virtual structured-light coding for three-dimensional shape data compression,” Opt. Laser Eng. 50, 844–849 (2012). [CrossRef]  

206. S. Zhang, “Three-dimensional range data compression using computer graphics rendering pipeline,” Appl. Opt. 51, 4058–4064 (2012). [CrossRef]  

207. T. Bell and S. Zhang, “Multi-wavelength depth encoding method for 3D range geometry compression,” Appl. Opt. 54, 10684–10961 (2015). [CrossRef]  

208. A. Maglo, G. Lavoué, F. Dupont, and C. Hudelot, “3D mesh compression: survey, comparisons, and emerging trends,” ACM Comput. Surv. 47, 1–41 (2015). [CrossRef]  

209. T. Bell, B. Vlahov, J. P. Allebach, and S. Zhang, “Three-dimensional range geometry compression via phase encoding,” Appl. Opt. 56, 9285–9292 (2017). [CrossRef]  

Figures (5)

Fig. 1. Performance of various optical surface measurement techniques. Image was recreated based on the image in Ref. [2].
Fig. 2. Basic principle of CSI.
Fig. 3. Basic principle of ToF.
Fig. 4. ToF depth measurement using phase offset. Copyright [2011] IEEE. Reprinted, with permission, from Ref. [22].
Fig. 5. Basic principle of triangulation-based SL.

Equations (6)

$$\phi(x,y) = \tan^{-1}\left[\frac{\sum_{k=1}^{N} I_k(x,y)\sin(2\pi k/N)}{\sum_{k=1}^{N} I_k(x,y)\cos(2\pi k/N)}\right],$$

$$I_k(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y) + 2\pi k/N],$$

$$d = \frac{c\tau}{2},$$

$$\Delta\varphi = \tan^{-1}\left(\frac{m_3 - m_1}{m_0 - m_2}\right),$$

$$d = \frac{c\,\Delta\varphi}{4\pi f_m},$$

$$s\,[u_c, v_c, 1]^T = \mathbf{A}\,[\mathbf{R},\mathbf{t}]\,[x_w, y_w, z_w, 1]^T.$$