
High-speed three-dimensional profilometry for multiple objects with complex shapes

Open Access

Abstract

This paper describes an easy-to-implement three-dimensional (3-D) real-time shape measurement technique using our newly developed high-speed 3-D vision system. It employs only four projected fringe patterns to realize full-field phase unwrapping in the presence of discontinuous or isolated objects. With our self-designed pattern generation hardware and a modified low-cost DLP projector, the four designed patterns can be generated and projected at a switching speed of 360 Hz. Using a properly synchronized high-speed camera, the high-speed fringe patterns distorted by the measured objects can be acquired and processed in real-time. The resulting system can capture and display high-quality textured 3-D data at 120 frames per second with a resolution of 640 × 480 points, and the speed can be trebled if a camera with a higher frame rate is employed. We detail our shape measurement technique, including the four-pattern decoding algorithm as well as the hardware design, and present evaluation experiments that demonstrate the validity and practicability of the proposed technique.

©2012 Optical Society of America

1. Introduction

Optical non-contact three-dimensional (3-D) shape measurement techniques have been employed for many years to measure the 3-D profiles of objects [1]. 3-D profilometry based on fringe pattern projection offers the advantages of a non-scanning nature and full-field performance, and it is widely employed in research and in industrial inspection, quality control, machine vision, entertainment, and biomedicine [1, 2]. With recent advances in digital display devices such as liquid crystal displays (LCD) and digital mirror devices (DMD), 3-D shape measurement based on digital fringe projection (DFP) units has been expanding rapidly [2, 3]. However, traditional DFP profilometers are usually designed to measure static object shapes under non-time-critical conditions, so their ability to measure moving objects is limited [3]. Besides, it is difficult to retrieve the absolute phase for spatially isolated surfaces simultaneously and rapidly, since the fringe orders are ambiguous in phase unwrapping [4, 5], making the depth difference between spatially isolated surfaces indiscernible.

The goal of this paper is to introduce a novel real-time 3-D shape measurement technique that combines high speed, high quality, and low cost. More specifically, we first introduce a novel pattern strategy for real-time 3-D shape measurement that employs only four patterns to retrieve the absolute phase unambiguously in the presence of surface discontinuities and isolated objects. With this fringe pattern strategy, two phase maps can be obtained simultaneously from only four fringe images: one is the wrapped phase with 2π ambiguity; the other, called the base phase, is used to assist phase unwrapping. Then we demonstrate that the proposed pattern sequence can be generated and projected at a switching speed of 360 Hz by combining a modified single-chip DLP projector with our self-developed pattern generation circuit. By properly synchronizing a high-speed monochrome camera with the projector, the fringe patterns distorted by the measured objects can be acquired and processed in real-time. Finally, we apply the proposed four-pattern algorithm to our real-time 3-D shape measurement system and present experimental results at a speed of 120 frames per second (fps) with a resolution of 640 × 480 points.

2. Related work

In this section, we briefly review the phase calculation and absolute phase retrieval methods. We also review some of the existing high-speed 3-D measurement systems.

2.1 Phase calculation

Two traditional methods for phase calculation are Fourier transform profilometry (FTP) [6] and phase-shifting profilometry (PSP) [7]. FTP techniques extract phase information from a single fringe pattern, so they are well suited to single-snapshot 3-D shape acquisition [8]. Essentially, a band-pass filter is applied in the Fourier domain to extract the fundamental spectral component. However, due to the global character of the Fourier transform, the phase estimate at an arbitrary pixel depends on the whole recorded fringe pattern, so FTP methods suffer from frequency band overlap caused by variations in object texture and excessive object slopes [9]. The measurement precision and measurable range can be improved by π phase-shifting FTP [10]; however, an additional fringe pattern must be captured, which compromises the single-shot character of FTP. In addition, FTP techniques are not easy to fully automate, since the filtering in the Fourier domain may depend on information about the noise and the bandwidth of the modulating signals of the pattern to be analyzed.

PSP algorithms typically require multiple phase-shifted fringe patterns to calculate the wrapped phase [7]. A periodic intensity pattern is projected several times, shifted at each projection, and the phase calculated at a given pixel depends only on the intensities recorded at that same pixel. The main advantages of PSP techniques are their high spatial resolution, accuracy, and insensitivity to surface albedo and ambient illumination [11]. Generally, the more patterns PSP employs, the higher the achievable accuracy [12]. However, acquiring a large number of patterns is unacceptable for real-time measurement, since the same point in the projector pattern sequence can map to different points within the captured images as the depth changes over time. Thus, to minimize the total scan time for real-time 3-D measurement, a three-step phase-shifting algorithm is typically used to reconstruct a single 3-D shape rapidly [13]. To further alleviate the measurement errors caused by motion, Zhang and Yau [14] proposed the 2 + 1 phase-shifting algorithm, in which the phase is encoded with only two sinusoidal patterns with a phase shift of π/2; the third, uniform pattern is less sensitive to motion than an image with fringe stripes. Jia et al. [15] proposed a phase-shifted triangular-pattern strategy that employs only two patterns. However, this technique is highly sensitive to surface reflectivity, since the actual intensity modulation of the captured patterns cannot be obtained with just two pattern projections.

2.2 Absolute phase retrieval

Over the years, a host of absolute phase evaluation algorithms have been proposed and tested [4, 12, 16–22]. To fulfill the goal of high-speed 3-D capture of complex scenes, an algorithm must employ few patterns while ensuring precise calculation of the absolute 3-D coordinates of complicated objects with large depth variations and discontinuities such as steps and holes. Spatial phase unwrapping algorithms cannot resolve the phase ambiguity at discontinuous surfaces and large step-height changes where the phase changes by more than π [5]. Increasing the fringe wavelength can solve this problem; however, the measurement accuracy is reduced and the system becomes more susceptible to noise [12].

The Gray-code method uses a series of binary fringe patterns to define the fringe order [18]. However, to unwrap a high-frequency phase, the minimum number of binary patterns is INT(log₂F)+1, where F is the total number of periods in the field and INT is the 'integer part' function; for example, F = 10 periods require at least four additional binary patterns. Besides, the phase unwrapping easily falls into error when an improper Gray-code value arises at the boundary between two adjacent binary words [20]. To improve the coding efficiency and overcome the problems of the Gray-code method, pseudo-random sequences, such as De Bruijn sequences, are widely used to identify the fringe order in the unwrapped phase map [17]. In these methods, the codeword of a primitive is specified by its own value together with the values of its neighborhood, which complicates the decoding process. Temporal phase unwrapping is a well-established technique for retrieving the absolute phase [4]. It makes use of a sequence of at least two phase maps with different fringe pitches. Specifically, in two-frequency temporal phase unwrapping [12], the high-frequency phase provides the primary phase value and the low-frequency one provides the information for unwrapping. Since each phase map is obtained from phase-shifted patterns, six is the minimum number of frames required.

The common drawback of the above absolute phase retrieval methods is the increased number of required patterns, which is undesirable under dynamic conditions, where it is preferable to minimize the acquisition time to avoid the effect of motion. Recently, therefore, a number of researchers have proposed absolute phase retrieval techniques that avoid projecting extra patterns by embedding additional signal/phase information into the waveforms of conventional PSP methods [16, 21, 22]. Liu et al. [16] proposed a dual-frequency pattern scheme that combines a group of unit-frequency pattern signals with a high-frequency six-step phase-shifting sequence. Wang et al. [22] embedded a period cue, and Wissmann et al. [21] embedded a binary De Bruijn sequence, into a four-step phase-shifting sequence. With the additional information decoded from the captured patterns, the fringe order of the high-frequency phase can be obtained and the final absolute phase can be retrieved using the same procedure as in the conventional methods mentioned above.

2.3 High-speed 3-D measurement system

One major issue in developing a high-speed 3-D measurement system is to control and project the desired fringe patterns at a high frame rate. For this purpose, Huang et al. [23] modified a DLP projector to project three phase-shifted, sinusoidal gray-scale fringe patterns at a switching speed of 240 Hz. A high-speed camera synchronized with the projector captured the three patterns rapidly for real-time 3-D shape acquisition. With this system, together with a fast three-step phase-shifting algorithm and quality-guided phase unwrapping, Zhang et al. [13] realized high-resolution, real-time 3-D shape measurement at a frame rate of 40 fps. However, the spatial unwrapping method employed relies on entirely continuous and connected phase maps, so this system is unsuitable in the presence of isolated regions or phase discontinuities greater than one period. To overcome this problem, Zhang et al. [24] proposed a superfast phase-shifting technique for 3-D measurement using defocused binary patterns. Combined with multi-frequency temporal unwrapping [19] and Gray coding [20], the phase ambiguity problem can be solved, at the cost of many additional pattern projections. Furthermore, the high-speed patterns are generated and controlled by a DLP discovery kit, which is prohibitively expensive and significantly increases the cost of the whole system. Li et al. [25] developed a method for measuring spatially isolated objects using three defocused binary patterns. They also implemented their technique in a measurement system comprising a modified DLP projector development kit and a high-speed camera. Because their method is based on FTP and time-consuming pseudo-random sequence codification, they achieved a speed of 60 fps for 3-D surface acquisition only, not including reconstruction and display. Liu et al. [16] developed a real-time, 120 Hz 3-D measurement system using a dual-frequency pattern scheme along with look-up-table (LUT) based algorithms, which significantly reduce the computational cost of the phase and 3-D coordinate calculations without introducing distortion. But it still needs six pattern projections, and the DLP discovery kit it relies on is expensive.

3. Principle

3.1 Projection and reflection process of phase-shifting profilometry

A typical phase-shifting profilometry system is shown in Fig. 1. Fringe patterns composed of horizontal or vertical sinusoidal stripes are sent to a video projector. The projector projects the fringe images onto the surface of the measured object, and the deformed patterns are captured by a triangularly mounted CCD camera. The camera sends the captured fringe images to the computer, which analyzes them to obtain 3-D shape information from the deformation of the captured fringe patterns (in the form of phase values) using triangulation. The whole process mainly comprises three stages: fringe projection, fringe reflection, and fringe acquisition.

Fig. 1 Schematic experimental setup for phase-shifting profilometry.

Fringe projection: The conventional phase-shifting profilometry technique employs a set of sinusoidal fringe images $\{I_n^p : n = 0, 1, 2, \ldots, N-1\}$ in projector space with a phase shift of $2\pi/N$:

$$I_n^p(x^p, y^p) = A^p(x^p, y^p) + B^p(x^p, y^p)\cos(2\pi f_x x^p + 2\pi n/N),$$

where $A^p$ is the temporal DC value and $B^p$ is the amplitude of the temporal AC signal, both constants; $(x^p, y^p)$ are the pixel coordinates of the projector, $f_x$ is the frequency of the sine wave, $n$ is the phase-shift index, and $N$ is the total number of phase-shift steps. All the generated fringe images are sent to a projector and projected onto the object. Without loss of generality, we assume in this paper that the fringes are perpendicular to the $x^p$-axis.

Fringe reflection and acquisition: The reflected light from the object can be attributed to two sources: the projector light and the ambient light. The projected fringe pattern, combined with the ambient light $\beta_1$, is distorted and reflected by the object. The reflected light is then captured by the camera, together with some ambient light $\beta_2$ entering the camera directly. So the fringe image actually captured by the camera is [26]:

$$I_n(x,y) = \alpha(x,y)\{A^p(x,y) + B^p(x,y)\cos[\phi(x,y) + 2\pi n/N] + \beta_1(x,y)\} + \beta_2(x,y),$$

where $(x,y)$ are the pixel coordinates of the camera, $\alpha(x,y)$ is the reflectivity or albedo of the scanned object as seen by the camera, and $\phi(x,y)$ is the desired phase information. For simplicity, $I_n(x,y)$ is commonly written as

$$I_n(x,y) = A(x,y) + B(x,y)\cos[\phi(x,y) + 2\pi n/N],$$

where $A(x,y)$ is the average intensity, equal to $\alpha(x,y)[A^p(x,y) + \beta_1(x,y)] + \beta_2(x,y)$, and $B(x,y)$ is the intensity modulation, equal to $\alpha(x,y)B^p(x,y)$. The desired phase can be solved from these equations:

$$\phi(x,y) = \tan^{-1}\frac{\sum_{n=1}^{N} I_n(x,y)\sin(2\pi n/N)}{\sum_{n=1}^{N} I_n(x,y)\cos(2\pi n/N)}.$$

It can be seen that phase-shifting algorithms are insensitive to surface reflectivity and ambient illumination. The extracted phase has principal values ranging from $-\pi$ to $+\pi$, so a phase unwrapping process is needed to recover the absolute phase beyond a single period. However, the depth ambiguity in this process is a major obstacle that spatial unwrapping methods cannot tackle.
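To make Eq. (4) concrete, the following minimal sketch (our illustration, not code from the paper) computes the wrapped phase from N captured phase-shifted images with NumPy; the array names are illustrative assumptions.

```python
# Minimal sketch of N-step phase-shifting wrapped-phase extraction, Eq. (4).
# `images` is a list of N fringe images I_n (float arrays), shifted by 2*pi*n/N.
import numpy as np

def wrapped_phase(images):
    N = len(images)
    num = sum(I * np.sin(2 * np.pi * n / N) for n, I in enumerate(images, 1))
    den = sum(I * np.cos(2 * np.pi * n / N) for n, I in enumerate(images, 1))
    # arctan2 keeps the full (-pi, pi] principal range and handles den == 0
    return np.arctan2(num, den)
```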

3.2 Proposed four-pattern strategy

Our research focus is to develop a low-cost, real-time 3-D shape measurement system that produces high-speed and high-accuracy profilometry of general objects. Shortening the pattern sequence required for unwrapping phase images while maintaining key benefits such as high accuracy and data density is one crucial step toward this goal. To achieve this, we developed a four-pattern strategy that decodes a shorter pattern set while retaining high resolution and object independence. The intensities of the designed fringe images in projector space are:

$$I_1^p(x^p, y^p) = A^p(x^p, y^p) + B^p(x^p, y^p)\sin[\pi F(2x^p/X - 1)],$$
$$I_2^p(x^p, y^p) = A^p(x^p, y^p) + B^p(x^p, y^p)\cos[\pi F(2x^p/X - 1)],$$
$$I_3^p(x^p, y^p) = A^p(x^p, y^p) + B^p(x^p, y^p)(2x^p/X - 1),$$
$$I_4^p(x^p, y^p) = A^p(x^p, y^p) - B^p(x^p, y^p)(2x^p/X - 1),$$
where $F$ is the number of fringe periods and $X$ is the pattern width (the total number of pixels in each row of the pattern). Figure 2 illustrates the proposed patterns along with their profiles for $F = 10$, $A^p = 127.5$, and $B^p = 127.5$ (8-bit gray-scale intensity). The final captured patterns are
$$I_1(x,y) = \alpha(x,y)[A^p(x,y) + B^p(x,y)\sin\phi(x,y) + \beta_1(x,y)] + \beta_2(x,y),$$
$$I_2(x,y) = \alpha(x,y)[A^p(x,y) + B^p(x,y)\cos\phi(x,y) + \beta_1(x,y)] + \beta_2(x,y),$$
$$I_3(x,y) = \alpha(x,y)[A^p(x,y) + B^p(x,y)\phi'(x,y) + \beta_1(x,y)] + \beta_2(x,y),$$
$$I_4(x,y) = \alpha(x,y)[A^p(x,y) - B^p(x,y)\phi'(x,y) + \beta_1(x,y)] + \beta_2(x,y),$$
where $\phi(x,y)$ corresponds to the sinusoidal argument $\pi F(2x^p/X - 1)$ and $\phi'(x,y)$ to the ramp value $2x^p/X - 1$. Solving Eqs. (9)–(12) and omitting $(x,y)$ for simplicity, we have

Fig. 2 An example of the proposed patterns and their 8-bit (F = 10, A^p = 127.5, B^p = 127.5) gray-scale intensity distributions.

$$\phi = \tan^{-1}\frac{2I_1 - I_3 - I_4}{2I_2 - I_3 - I_4},$$
$$\alpha = \frac{\sqrt{(2I_1 - I_3 - I_4)^2 + (2I_2 - I_3 - I_4)^2}}{2B^p},$$
$$\phi' = \frac{I_3 - I_4}{2\alpha B^p}.$$

where $\phi'$ is the so-called base phase, with values in $[-1, 1)$, and $\phi$ is the high-frequency phase to be unwrapped. The absolute phase $\Phi$ without the $2\pi$ ambiguity, unwrapped from $\phi$, has a range of $[-\pi F, \pi F)$. If we scale the dynamic range of the base phase map by a factor of $\pi F$, it has the same dynamic range as the absolute phase, and the two phase maps ($\pi F\phi'$ and $\Phi$) should ideally be identical. Therefore, the absolute phase $\Phi$ and the base phase $\phi'$ satisfy the following relationship under ideal conditions:

$$\Phi = U(\phi, \phi') = \pi F\phi' = \phi + 2\pi M,$$

where $M$ is an integer indicating the fringe order. So once $\phi$ and $\phi'$ are obtained, the final absolute phase $\Phi$ can be calculated by

$$\Phi = \phi + 2\pi \times \mathrm{NINT}[(\pi F\phi' - \phi)/2\pi],$$

where NINT denotes rounding to the nearest integer. Once the absolute phase is obtained, it can be converted to the 3-D coordinates of the object surface through calibration.
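As a concrete illustration of Eqs. (13)–(17), the following sketch (our own, with illustrative names; $B^p$ and $F$ are the design constants of this section) decodes the wrapped phase, albedo, and base phase from the four captured images and unwraps the result:

```python
# Sketch of the four-pattern decoding and unwrapping of Eqs. (13)-(17).
# I1..I4 are float arrays of identical shape.
import numpy as np

def decode_four_pattern(I1, I2, I3, I4, Bp=127.5, F=10):
    s = 2.0 * I1 - I3 - I4              # = 2*alpha*Bp*sin(phi); offsets cancel
    c = 2.0 * I2 - I3 - I4              # = 2*alpha*Bp*cos(phi)
    phi = np.arctan2(s, c)              # wrapped high-frequency phase, Eq. (13)
    alpha = np.sqrt(s**2 + c**2) / (2.0 * Bp)        # albedo map, Eq. (14)
    base = (I3 - I4) / (2.0 * alpha * Bp + 1e-12)    # base phase in [-1, 1), Eq. (15)
    # Fringe order: round pi*F*base onto the nearest 2*pi branch of phi, Eq. (17).
    # In practice, low-albedo (background) pixels are masked before use.
    M = np.rint((np.pi * F * base - phi) / (2.0 * np.pi))
    return phi + 2.0 * np.pi * M, alpha  # absolute phase and albedo
```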

3.3 Absolute phase-to-coordinate conversion

To determine the 3-D coordinates of each point of the object, we adopt the system calibration method in [27]. Once the system calibration is done, the parameter matrices of the camera ($P^c$) and the projector ($P^p$) can be obtained from the calibrated intrinsic and extrinsic parameters of the camera ($A^c$ and $M^c$) and the projector ($A^p$ and $M^p$):

$$P^c = A^c M^c = \begin{pmatrix} p_{11}^c & p_{12}^c & p_{13}^c & p_{14}^c \\ p_{21}^c & p_{22}^c & p_{23}^c & p_{24}^c \\ p_{31}^c & p_{32}^c & p_{33}^c & p_{34}^c \end{pmatrix}, \quad P^p = A^p M^p = \begin{pmatrix} p_{11}^p & p_{12}^p & p_{13}^p & p_{14}^p \\ p_{21}^p & p_{22}^p & p_{23}^p & p_{24}^p \\ p_{31}^p & p_{32}^p & p_{33}^p & p_{34}^p \end{pmatrix}.$$

The relationship between the camera coordinates and the projector coordinates is established via the absolute phase: for an arbitrary coordinate $(x, y)$ on the camera plane, $\Phi(x,y) = \Phi(x^p)$, and the absolute phase $\Phi(x^p)$ can be easily converted to the projector coordinate $x^p$. Finally, the world coordinate $(x^w, y^w, z^w)$ is given by

$$\begin{pmatrix} x^w \\ y^w \\ z^w \end{pmatrix} = \begin{pmatrix} p_{11}^c - p_{31}^c x & p_{12}^c - p_{32}^c x & p_{13}^c - p_{33}^c x \\ p_{21}^c - p_{31}^c y & p_{22}^c - p_{32}^c y & p_{23}^c - p_{33}^c y \\ p_{11}^p - p_{31}^p x^p & p_{12}^p - p_{32}^p x^p & p_{13}^p - p_{33}^p x^p \end{pmatrix}^{-1} \begin{pmatrix} p_{34}^c x - p_{14}^c \\ p_{34}^c y - p_{24}^c \\ p_{34}^p x^p - p_{14}^p \end{pmatrix}.$$
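For a single camera pixel, Eq. (19) reduces to a 3 × 3 linear solve. A per-pixel sketch follows (ours, not the authors' code; $P^c$ and $P^p$ are the calibrated 3 × 4 matrices above, and $x^p$ is the projector column recovered from the absolute phase via the known phase-to-column mapping of the patterns):

```python
# Per-pixel phase-to-coordinate conversion, Eq. (19).
import numpy as np

def world_point(Pc, Pp, x, y, xp):
    """Pc, Pp: 3x4 parameter matrices; (x, y): camera pixel; xp: projector column."""
    A = np.array([
        [Pc[0, 0] - Pc[2, 0] * x,  Pc[0, 1] - Pc[2, 1] * x,  Pc[0, 2] - Pc[2, 2] * x],
        [Pc[1, 0] - Pc[2, 0] * y,  Pc[1, 1] - Pc[2, 1] * y,  Pc[1, 2] - Pc[2, 2] * y],
        [Pp[0, 0] - Pp[2, 0] * xp, Pp[0, 1] - Pp[2, 1] * xp, Pp[0, 2] - Pp[2, 2] * xp],
    ])
    b = np.array([
        Pc[2, 3] * x  - Pc[0, 3],
        Pc[2, 3] * y  - Pc[1, 3],
        Pp[2, 3] * xp - Pp[0, 3],
    ])
    return np.linalg.solve(A, b)  # (xw, yw, zw)
```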

3.4 Accuracy, speed, and other features

In this section, the proposed four-pattern strategy is compared with other algorithms from various perspectives. For a high-speed 3-D measurement system, many factors, such as the nonlinear response of the projector, random noise, system calibration errors, and image defocusing, may affect the measurement accuracy [3, 12, 15, 20]. Besides, since we aim to develop an algorithm for high-speed, real-time measurement of multiple objects, the processing speed and the error caused by object motion [13, 14, 16] should also be considered. Among these factors, the system calibration errors and the nonlinear response of the projector can be reduced by off-line calibration procedures [26]. So here we focus on the effects of random noise, image defocusing, motion error, and speed. The nonlinearity calibration method we used is presented in Sec. 5.1.

During the phase calculation process, the reconstructed phase is affected by random noise in the captured images. The random noise sources include unstable ambient light, projector illumination noise, camera noise, and quantization error. To simplify our analysis, we assume that the combined noise is an additive Gaussian random variable $n_i \sim N(0, \sigma_n^2)$; with the noise added, the captured images $I_i^n(x,y)$ can be rewritten as:

$$I_i^n = I_i + n_i.$$
Therefore, the measured phase $\phi^n$ can be calculated by substituting Eq. (20) into Eq. (13):
$$\phi^n = \tan^{-1}\left(\frac{N_1 + 2\alpha B^p \sin\phi}{N_2 + 2\alpha B^p \cos\phi}\right),$$
where $N_1 = 2n_1 - n_3 - n_4$ and $N_2 = 2n_2 - n_3 - n_4$. The auxiliary variables $N_1$ and $N_2$ are also Gaussian, i.e., $N_1 \sim N(0, 6\sigma_n^2)$ and $N_2 \sim N(0, 6\sigma_n^2)$. The measured phase can be considered as the sum of the actual phase and the phase error caused by the noise. Thus, we have
$$\Delta\phi = \phi^n - \phi = \tan^{-1}\left(\frac{N_1\cos\phi - N_2\sin\phi}{N_1\sin\phi + N_2\cos\phi + 2\alpha B^p}\right).$$
Because the noise is usually much smaller than the intensity modulation $\alpha B^p$, Eq. (22) can be further approximated as
$$\Delta\phi \approx \frac{N_1\cos\phi - N_2\sin\phi}{2\alpha B^p}.$$
According to the analysis in [12], the variance of the phase error can be represented as
$$\sigma_{\Delta\phi}^2(x,y) \approx \frac{6\sigma_n^2(\cos^2\phi + \sin^2\phi)}{(2\alpha B^p)^2} = \left(\frac{\sqrt{6}\,\sigma_n}{2\alpha B^p}\right)^2.$$
From Eq. (24), we can see that the phase error standard deviation (STD) $\sigma_{\Delta\phi}(x,y)$ increases linearly with $\sigma_n$ and decreases with $\alpha$ and $B^p$. By a similar analysis, the phase error STD of the 2 + 1 phase-shifting algorithm is $\sqrt{2}\,\sigma_n/(\alpha B^p)$, and from [12] the phase error STD of the N-step phase-shifting algorithm is $\sqrt{2/N}\,\sigma_n/(\alpha B^p)$. So the resistibility to random noise in the wrapped phase of our method is $2/\sqrt{3}$ that of the 2 + 1 phase-shifting algorithm, $2/3$ that of the three-step phase-shifting algorithm, and $1/\sqrt{3}$ that of the four-step phase-shifting algorithm. However, if one wants to obtain an unambiguous phase using a conventional N-step phase-shifting algorithm, only a unit-frequency pattern can be employed. By employing the proposed pattern strategy, the unambiguous phase range is extended from $2\pi$ to $2F\pi$. In other words, when the final absolute phase $\Phi$ is scaled into the same dynamic range $[0, 2\pi)$, its phase error STD is further reduced by a factor of $F$ (i.e., $\sqrt{6}\,\sigma_n/(2\alpha B^p F)$). On the other hand, if two-frequency temporal phase unwrapping is used to increase the phase resolution of an N-step phase-shifting algorithm, the number of patterns doubles: six patterns are needed for a two-frequency 2 + 1 or three-step phase-shifting algorithm, and eight for a two-frequency four-step phase-shifting algorithm. The extra patterns make them less suitable for moving scenes or time-critical applications. A two-frequency triangular-pattern phase-shifting algorithm can reduce the number of patterns to four, but as stated before, the triangular pattern does not eliminate the effect of surface reflectivity, so the measurement may be unreliable, especially when the object shapes are complex or the surface reflectivities vary.
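The prediction of Eq. (24) is easy to check numerically. The Monte Carlo sketch below (our illustration; alpha, Bp, and sigma_n are assumed values) simulates noisy captures of the four patterns and compares the empirical phase-error STD with $\sqrt{6}\,\sigma_n/(2\alpha B^p)$:

```python
# Monte Carlo check of the wrapped-phase noise STD predicted by Eq. (24).
import numpy as np

rng = np.random.default_rng(0)
alpha, Bp, sigma_n = 0.8, 127.5, 1.0            # assumed values
phi = rng.uniform(-np.pi, np.pi, 1_000_000)
n = rng.normal(0.0, sigma_n, (4, phi.size))
# DC terms and the ramp cancel in 2*I1 - I3 - I4, so only the AC parts matter.
s = 2.0 * (alpha * Bp * np.sin(phi) + n[0]) - n[2] - n[3]
c = 2.0 * (alpha * Bp * np.cos(phi) + n[1]) - n[2] - n[3]
err = np.angle(np.exp(1j * (np.arctan2(s, c) - phi)))   # wrapped difference
print(err.std(), np.sqrt(6.0) * sigma_n / (2.0 * alpha * Bp))
# The two printed values agree closely (about 0.012 rad for these settings).
```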

Instead of projecting additional patterns, the coded phase-shifting method proposed by Wissmann et al. [21] and the period coded phase shifting (PCPS) method proposed by Wang et al. [22] both embed additional information into conventional four-step phase-shifting patterns to assist phase unwrapping. In PCPS in particular, the period cue is designed via optimization and embedded without affecting the amplitude of the primary sinusoidal component of the original pattern, so its wrapped-phase error STD equals that of the conventional four-step phase-shifting algorithm, i.e., $1/\sqrt{3}$ of our method's. This means that, for the wrapped phase, PCPS is more robust than our method against random noise. However, the modified PCPS pattern loses the smoothness of the original sinusoidal pattern, containing several abrupt changes in its cross-section. These high-frequency components are easily attenuated by out-of-focus blurring and the inherent low-pass characteristics of the projector lens. Therefore, PCPS is more sensitive to defocus than our method.

Another important issue concerns the phase-unwrapping process. Both our method and two-frequency temporal phase unwrapping share a similar equation (Eq. (17)), from which it is easy to see that the number of periods F of the high-frequency pattern and the quality of the base phase/unit-frequency phase are crucial to the unwrapping process. On the other hand, increasing F reduces the error STD of the final absolute phase, so the base phase/unit-frequency phase is closely related to the STD of the final absolute phase as well. Comparing the phase-unwrapping performance of our solution with two-frequency temporal phase unwrapping methods may not be straightforward, but it is reasonable to expect that the two-frequency method is more robust than ours because of its extra pattern projections. Besides, compared with PCPS, our method uses two individual patterns to encode the base phase, whose dynamic range is much larger than that of the period cue in PCPS. This makes our method superior in terms of phase-unwrapping robustness. Moreover, using two full patterns to encode the base phase provides more flexibility for improving the quality of the base phase. For example, if we replace I3 and I4 by two unit-frequency triangular waveforms and use the triangular-pattern phase-shifting algorithm [15] to obtain the base phase, the base phase noise can be further reduced by half, at the cost of higher sensitivity to defocus. Besides, since the surface reflectivity can be obtained in our method, the susceptibility of the conventional triangular-pattern phase-shifting algorithm to object reflectivity can be overcome. This point, however, is beyond the scope of this work and needs more detailed investigation.

Besides the above-mentioned points, our four-pattern encoding strategy has the following advantages for real-time 3-D shape measurement:

Less error caused by motion: As in the 2 + 1 phase-shifting algorithm [14], in our method the high-frequency phase to be unwrapped is encoded with only two fringe images, and the intensities of the third and fourth patterns increase/decrease gradually, so they are less sensitive to motion than patterns with high-frequency fringe stripes. Therefore, our method suffers less motion-induced measurement error than phase-shifting algorithms in which the high-frequency phase is encoded in more than two fringe images, e.g., three- or four-step methods.

High-quality texture acquisition: For the proposed four-pattern scheme, (I3 + I4)/2 is a flat image that can be used for texture mapping. In many other phase-shifting algorithms, the average intensity or the intensity modulation is used as texture; however, the texture quality is then easily degraded by object motion and the nonlinear response of the projector. In our method, these problems have less effect on the texture.

Lossless LUT-based processing: Since we want to apply our method in a real-time 3-D shape measurement system, simple processing of the acquired patterns is preferable. Our method requires no time-consuming image processing algorithms such as edge/corner detection, identification of stripe centers, or gray-scale thresholding/matching. The time-consuming arctangent in Eq. (13) and the square root in Eq. (14) can be precomputed, and double-precision results can be obtained by LUT indexing. The matrix multiplication and inversion in Eq. (19) can also be accelerated by a LUT-based implementation [16]. Therefore, real-time implementation on a regular personal computer poses no problem for our method, as illustrated below.
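For 8-bit input images, the numerator and denominator of Eq. (13) are integers in [−510, 510], so every possible arctangent (and intensity modulation) can be tabulated once and fetched by integer indexing with no interpolation loss. A sketch in the spirit of the LUT method of [16] follows (the table layout is our assumption):

```python
# Lossless LUT-based evaluation of Eqs. (13) and (14) for 8-bit images.
import numpy as np

OFFSET = 510                                   # extreme of 2*I1 - I3 - I4 for 8-bit data
v = np.arange(-OFFSET, OFFSET + 1)
S, C = np.meshgrid(v, v, indexing="ij")
ATAN_LUT = np.arctan2(S, C)                    # ~8 MB of doubles, built once
MOD_LUT = np.hypot(S, C)                       # equals 2*alpha*Bp, Eq. (14)

def decode_lut(I1, I2, I3, I4):
    I1, I2, I3, I4 = (I.astype(np.int32) for I in (I1, I2, I3, I4))
    s = 2 * I1 - I3 - I4
    c = 2 * I2 - I3 - I4
    # exact table lookups replace per-pixel arctangent and square root
    return ATAN_LUT[s + OFFSET, c + OFFSET], MOD_LUT[s + OFFSET, c + OFFSET]
```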

4. High-speed pattern projection with a low-cost DLP projector

To realize a real-time 3-D shape measurement system, the patterns must be switched and projected at high speed. Texas Instruments has made DMD discovery kits available to facilitate experiments with DMDs. While these kits provide more flexibility in manipulating the fringe patterns, their price, together with the cost of the necessary components (e.g., light source, optical modules, and lenses), is much higher than that of an off-the-shelf DLP projector. Hence, we chose to re-engineer an off-the-shelf DLP projector (Acer X1161PA) into a flexible, programmable, high-speed one for our use. We first briefly introduce the projection mechanism of a single-chip DLP projector, and then explain our hardware design and the synchronization between the camera and the projector.

4.1 Projection mechanism of a single-chip DLP projector

Every single-chip DLP projector contains two important components: a DMD chip and a color wheel. The DMD chip modulates the projected light by reflecting it off an array of tiny mirrors; any displayed intensity is made up of pulses of light created by these mirrors switching on and off. The DLP video processing system separates color images into their RGB components for display, employing a rotating color wheel synchronized to the RGB image separations displayed by the DMD. The color wheel spins at 120 Hz (in some older models of DLP projector, the color wheel may operate at 60 or 80 Hz) and is divided into multiple sectors for the primary colors red, green, and blue. In our X1161PA, the 6-segment color wheel has both primary (red, green, and blue) and secondary (cyan, yellow, and white) color segments. The secondary color segments are designed to deliver richer color saturation and to alleviate the rainbow effect for people who are extremely sensitive to color refreshing. The synchronization between the DMD and the color wheel is controlled by a 120 Hz feedback trigger signal generated by a photodiode located inside the color wheel carrier. Once triggered, the DMD projects the RGB channels of the input color image sequentially by binary temporal pulse-width modulation.

Exploiting this projection mechanism, Huang et al. [23] modified an 80 Hz DLP projector to project three phase-shifted, sinusoidal gray-scale fringe patterns at 240 Hz. They removed the color wheel of the projector and made it display one static color pattern consisting of three phase-shifted patterns encoded in the RGB channels. The projector was connected to a personal computer, which used it as a monitor and displayed the static color pattern. By properly synchronizing a high-speed monochrome camera with this modified projector, three high-speed phase-shifted patterns, rather than a static one, could be captured sequentially and repeatedly. However, our method uses four patterns per measurement cycle, and these four patterns cannot be combined into one single color pattern because a color image has only three color channels. Therefore, we cannot directly apply Huang et al.'s method to realize high-speed projection of the proposed four patterns.

4.2 High speed pattern generation

Given the projection mechanism of the single-chip DLP, if we want to project the four patterns sequentially and repeatedly at high speed, we must create a sequence of color-encoded pattern images and send them sequentially and repeatedly at 120 Hz. When the projector receives this pattern sequence, the four patterns are decoded by the DLP video processing system and projected in the correct order. We illustrate this process in Fig. 3(a). However, to our knowledge, switching the patterns by repeatedly loading them into the projector at such high speed is difficult to achieve with an ordinary personal computer.

Fig. 3 (a) Schematic of the high-speed pattern projection mechanism. (b) Synchronization signals HSYNC and VSYNC of VGA. The values of TS, TPW, TBP, TDP, and TFP for HSYNC and VSYNC are listed in Table 1 for an 800 × 600 resolution at 120 Hz.

To overcome this problem, we use a self-developed Field-Programmable Gate Array (FPGA) board to generate and send the desired patterns to the projector rapidly. Instead of using a computer, the VGA signal is generated by a high-speed RGB digital-to-analog converter (DAC) (Analog Devices ADV7123) controlled by the FPGA (Altera Cyclone IV EP4CE15). Inside the ADV7123 there are three 10-bit video DAC channels, which perform the digital-to-analog conversion of the three RGB digital signals. Besides the three video signals, two further signals are used in VGA to synchronize the information sent to the display: the horizontal synchronization (HSYNC) indicates that a new line is starting, while the vertical synchronization (VSYNC) indicates that a new frame is starting. The HSYNC and VSYNC waveforms are shown in Fig. 3(b). For the projector to synchronize and display the pattern correctly without blinking, the VGA signals must obey timing restrictions known as industrial standards. Considering the resolution and the maximum frame rate of the projector, we designed the VGA signals according to the timing specifications for 800 × 600 resolution at 120 Hz listed in Table 1. The FPGA's phase-locked loop (PLL) is used to generate the 73.25 MHz pixel clock. Since all columns of one pattern are identical, we use the x^p coordinate as the address fed into a pixel ROM holding the pattern data. Since only one row of pixel values needs to be stored per pattern, the memory demand is small enough that the ROM can be implemented inside the FPGA chip.

Table 1. VGA Timing Specifications for an 800 × 600 Resolution at 120 Hz [28]
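As a software analogy of the scan-out logic (the actual design is FPGA hardware; this toy model and its names are ours), one stored row per pattern suffices because the horizontal pixel counter doubles as the ROM address on every line:

```python
# Toy model of the ROM-based pattern scan-out: vertical fringes mean every
# line of a frame repeats the same row, so no full framebuffer is needed.
# At the DAC's 10-bit depth, four 800-pixel rows occupy only
# 4 * 800 * 10 = 32,000 bits, which fits easily in on-chip memory.
def scan_frame(rom_row, width=800, height=600):
    """rom_row: sequence of `width` pre-distorted pixel values for one pattern."""
    for _y in range(height):
        for xp in range(width):
            yield rom_row[xp]        # the xp counter is reused as the ROM address
```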

4.3 Camera and projector synchronization

With our self-designed FPGA board the pattern signal can be easily controlled, but two more steps are needed to realize a high-speed pattern projection and acquisition system: (1) remove the color wheel of the projector to make it operate in monochrome mode; (2) properly synchronize the camera and the projector. The high-speed camera we used is a GE680 (Allied Vision Technologies), with a maximum frame rate of 205 fps at 640 × 480 image resolution. In this subsection, we focus on verifying the feasibility of the synchronization and of the whole system, especially its maximum frame rate. The camera therefore operated with a 320 × 240 region of interest (ROI); at this resolution, its maximum frame rate reaches 388 fps.

After removing the color wheel of the projector, the feedback trigger signal from the photodiode is no longer available. To keep the DMD working normally, a substitute feedback trigger signal is generated directly by the FPGA. We use the VSYNC signal as the reference to generate both the trigger pulses for the camera and the feedback trigger signal for the DMD. The timing signals of the whole system are shown in Fig. 4. The waveform at the top is the VSYNC signal of the VGA interface, whose frequency is identical to that of the feedback trigger signal of the projector shown in the second row. The third row is the trigger signal to the camera, which is carefully designed in accordance with the projection timing shown in the bottom row. In the projection timing, G, B, and R represent the green, blue, and red channels of the projector. The time constants listed in Fig. 4 were obtained by experimental measurement. Since the exposure of our camera cannot be adjusted during capture, we set the exposure time to a fixed 2 ms for all channels to make sure that all channels are captured properly. Figure 5 shows four fringe patterns on a uniform white board captured sequentially by the camera. Despite some cross-talk and a nonuniform distribution across the three color channels (which may result from the projector's on-board video processing related to the secondary color segments of the color wheel), the four patterns are successfully projected and captured at 360 Hz, indicating that the proposed method is valid and the synchronization works well.

Fig. 4 Timing diagram of the whole system. The darker regions represent the exposure period of the camera.

Fig. 5 Four fringe patterns sequentially captured at 360 Hz.

5. Experiments

Figure 6 shows two photographs of the real-time 3-D measurement system we developed. The present configuration includes an off-the-shelf DLP projector, a high-speed CCD camera, a self-developed pattern generation FPGA board, and a power supply board. The details of the projector, camera, and FPGA board were given earlier. With our current hardware setup, the measurement speed is limited by the CCD camera, while the projector can provide fringe patterns at up to 360 fps. To simplify the intensity calibration procedure and avoid the color cross-talk problem of the projector, only the red channel of the projector was used in the following experiments (we found that there is no cross-talk in our projector when only one color channel is used). Thus the camera was operated at a frame rate of 120 fps, and the four red-channel-only patterns were switched at 120 Hz under the control of the FPGA board. We applied a four-pattern sliding window along the whole captured pattern sequence, so the 3-D acquisition speed is also 120 fps; a sketch of this windowing scheme follows Fig. 6. The number of periods of the sinusoidal patterns was chosen as 10. Due to the short exposure time, the captured images were smoothed by a 3 × 3 Gaussian filter to reduce random noise. It should be noted that if a higher-speed camera is used, the pattern acquisition speed can reach 360 fps, giving a 3-D shape acquisition speed of 90 fps (with the sliding window, the pseudo frame rate can reach 360 fps).

Fig. 6 Our real-time 3-D measurement system (a) and its key components (b).
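The sliding window can be sketched as follows (our illustration: `camera_frames` is a hypothetical frame source, and frame k is assumed to carry pattern k mod 4, which the hardware synchronization guarantees). Each new frame completes a window of the last four, so a 120 fps camera yields 120 reconstructions per second after the first three frames:

```python
# Sketch of the four-pattern sliding-window reconstruction loop.
from collections import deque

window = deque(maxlen=4)
for k, frame in enumerate(camera_frames()):      # hypothetical frame source
    window.append(frame)
    if len(window) < 4:
        continue
    imgs = [None] * 4
    for i, f in enumerate(window):               # the oldest frame is k - 3
        imgs[(k - 3 + i) % 4] = f                # map each frame back to I1..I4
    phi_abs, albedo = decode_four_pattern(*imgs) # decoding sketch from Sec. 3.2
```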

5.1 Nonlinearity correction

Commercial DLP projectors are generally built as nonlinear devices for better visual effect. However, this nonlinearity decreases the accuracy and resolution of the measurement. Taking the nonlinear response of the projector into account, the captured pattern image should be rewritten as [26]

$$I_n(x,y) = \alpha(x,y)\{f[I_n^p(x,y)] + \beta_1(x,y)\} + \beta_2(x,y),$$
where $f$ is the response function of the projector. We assume that the response curves of all pixels are identical, i.e., that there is no nonuniformity in the projector; this assumption is valid for most commercial video projectors. It is easy to see that when $f$ is linear, the calculated wrapped phase $\phi$ and base phase $\phi'$ are unaffected, but when $f$ is nonlinear, phase errors are induced. Generally, the response function $f$ is a monotonically increasing function of the input gray scale, so an inverse function $f^{-1}$ always exists. Once $f^{-1}$ is obtained, we can pre-distort the ideal pattern using $f^{-1}$ and store the distorted pattern data in the on-chip ROM of the FPGA. When the projector projects this pattern, the response function $f$ is counteracted by the pre-encoded $f^{-1}$, and an undistorted fringe pattern with the ideal intensity is obtained.

To obtain the response curve of the projector, a sequence of images with uniform gray scale was projected onto a white board, and the camera captured the reflected images and sent them to the computer. The sequence of uniform patterns was generated by the FPGA board, with gray scales ranging from 0 to 1023. The aperture of the camera was carefully adjusted to prevent any part of the captured images from saturating. The captured pattern sequence is denoted $A_n$, where $n = 0, 1, \ldots, 1023$. To obtain the response curves of the projector precisely, the influence of ambient light and of the reflectivity of the scanned board must be eliminated. We therefore converted the responses of the individual pixels to offset-free, normalized signals:

$$A_n'(i,j) = 1023 \times \frac{A_n(i,j) - \min[A(i,j)]}{\max[A(i,j)] - \min[A(i,j)]},$$
where $\max[A(i,j)]$ and $\min[A(i,j)]$ are the maximum and minimum values at each pixel among the 1024 captured images. The subtraction removes any background offset, while the division normalizes the pattern to remove the reflectivity variations across the surface. Applying Eq. (26) to the intensity value of each pixel scales the final output range of $A_n'(i,j)$ to 0–1023, which guarantees that the value range of $f^{-1}$ also falls within 0–1023. To suppress random noise, $A_n'(i,j)$ was averaged over all pixels of the image to obtain the response curve of the projector. The function $f$ was then obtained by least-squares polynomial fitting; we found that a 20th-order polynomial is sufficient to represent the response curve precisely. Figure 7(a) shows the fitted polynomial overlaid on the original response curve; the two align well. The inverse function was then calculated, and the final compensated response curve is shown in Fig. 7(b). Note that the compensated response curve was measured by the same procedure as described above. The result clearly shows a significant improvement in the linearity of the response. After calibrating the projector response, the nonlinearity of the captured patterns was negligible and no post-processing was needed.

Fig. 7 The gray-scale response curve of the projector before (a) and after (b) compensation.
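A condensed sketch of this calibration pipeline follows (ours, not the authors' code; the array shapes are assumptions): normalize the captured responses per Eq. (26), average over pixels, fit the polynomial f, and tabulate its inverse for the pre-distortion ROM:

```python
# Nonlinearity correction: estimate f and build the f^-1 pre-distortion table.
import numpy as np
from numpy.polynomial import Polynomial

def build_predistortion(A):
    """A: (1024, H, W) stack of captured images of uniform gray levels 0..1023."""
    Amin, Amax = A.min(axis=0), A.max(axis=0)
    An = 1023.0 * (A - Amin) / (Amax - Amin)     # per-pixel normalization, Eq. (26)
    response = An.mean(axis=(1, 2))              # average over all pixels
    g = np.arange(1024.0)
    f = Polynomial.fit(g, response, 20)          # 20th-order fit of the response
    # f is monotonically increasing, so its inverse can be tabulated by
    # interpolating the gray levels g against the fitted values f(g).
    inv = np.interp(g, f(g), g)
    return np.clip(np.rint(inv), 0, 1023).astype(np.uint16)
```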

5.2 Static complex scene measurement

After the nonlinearity calibration, we examined the viability of the proposed absolute phase retrieval. First, we measured two paper boxes of different sizes and colors. The captured fringe images are shown in Figs. 8(a)–8(d). From these four images, the wrapped phase was calculated, as shown in Fig. 8(e). It can be seen from the wrapped phase that the fringe order is confused by the gap between the two isolated objects. However, with the help of the base phase shown in Fig. 8(f), the correct fringe order could be found and the phase could be unwrapped unambiguously. With the absolute phase map shown in Fig. 8(g) and the system calibration parameters, the 3-D coordinates of the objects were calculated. The final 3-D reconstruction is shown in Fig. 8(h).

Fig. 8 Measurement result for two isolated paper boxes: (a)–(d) the four images of the tested objects with deformed fringe patterns; (e) wrapped phase map; (f) base phase map; (g) absolute phase map; (h) the reconstructed 3-D result; (i) albedo map; (j) wrapped phase, base phase, and unwrapped phase for row 365.

Figure 8(i) shows the obtained albedo of the scanned objects, from which it can be seen that the box on the right has a higher albedo than the one on the left and the background has an albedo of around zero. This is quite understandable: in the captured fringe images, the right box is much brighter than the left one and the background is always dark. To illustrate the phase unwrapping process in a simplified manner, Fig. 8(j) shows the wrapped phase, the base phase (multiplied by πF as in Eq. (17)), and the unwrapped phase for row 365, which is highlighted by a white line in Fig. 8(i). Note that the base phase signal contains some noise, but this noise does not affect the fringe order identification since its maximum amplitude is clearly less than π. Once the fringe order is correctly quantized into the integers M using Eq. (17), the noise in the base phase does not affect the measurement accuracy, since the final absolute phase results from the high-frequency unwrapped phase. From Fig. 8(j), the significant improvement in the signal quality of the unwrapped phase is readily apparent.

Figure 9 shows another example: two sculptures of different materials were measured simultaneously. Figure 9(a) is one captured fringe pattern, and Figs. 9(b)–9(d) show the wrapped phase, the base phase, and the unwrapped absolute phase, respectively. The final 3-D reconstruction is shown in Fig. 9(e). We also mapped the texture onto the 3-D coordinates and viewed the result from another angle, as shown in Fig. 9(f). From these results, we can see that the 3-D shapes were recovered very well and the wrapped phase was successfully unwrapped by the proposed approach despite the different textural properties of the objects.

Fig. 9 Measurement result for two isolated sculptures: (a) one captured fringe image; (b) wrapped phase map; (c) base phase map; (d) absolute phase map; (e) reconstructed 3-D result; (f) 3-D result with texture from another angle.

5.3 Accuracy analysis

To verify the accuracy of the measurement system, we measured a cylindrical object with a radius of 111.82 mm. Figure 10(a) shows the measurement result. The final measurement result obtained with the proposed algorithm was fitted with a cylinder equation. The quantities used to evaluate the acquisition were the average radius of the fitted cylinder and the mean square error (MSE) between the acquired points and the fitted points. Figure 10(b) shows one measured cross-section of the cylinder compared with the fitted shape. The maximum and minimum fitted radii of the cylinder were 112.41 and 111.24 mm, respectively. This yields a maximum deviation of 0.59 mm from the actual value, corresponding to an error of 0.527% and revealing a high measurement accuracy. Besides, the MSE between the acquired and fitted points, which is largely due to the random noise of the system, was approximately 0.36 mm. The results verify that the proposed technique is capable of measuring the 3-D shape of objects with high accuracy in a fast manner.

Fig. 10 Measurement results of a cylindrical object: (a) reconstructed 3-D result; (b) plot of one cross-section (blue) and the fitted points (red) of the cylinder.

5.4 Real-time measurement analysis

The next two experiments were carried out to verify the real-time measurement performance of our system. As the processing unit, we used a Dell OptiPlex 990 with an Intel Core i7-2600 processor running at 3.4 GHz. Three threads (acquisition, reconstruction, and display) run on three CPU cores to realize real-time 3-D shape acquisition, reconstruction, and display separately. For simplicity, we used the reference-plane-based phase-to-height conversion algorithm [6] in the following two experiments.

We first measured a moving human hand; the real-time video is presented in Media 1. Figure 11 shows selected frames of the measurement results with depth pseudo-color (Fig. 11(a)) or with texture (Figs. 11(b)–11(d)). In the 3-D view window on the left, the world coordinate axes x_w, y_w, and z_w are labeled as red, green, and blue lines, respectively. The captured fringe patterns, the phase of the reference plane, the albedo of the object, and the wrapped phase are displayed in the window on the right. To show the robustness of the proposed four-pattern method, we varied the illumination intensity by adjusting the aperture of the camera lens. The high-frequency phase was correctly unwrapped and the shape of the hand was well reconstructed whether the illumination was weak (Fig. 11(c)) or strong (Fig. 11(d)). When the illumination was rather weak, some parts of the hand were excluded from processing due to their low albedo, but the fringe orders of the recovered regions were still correctly identified, verifying that the proposed method can perform absolute phase recovery robustly under different illumination conditions.

Fig. 11 Real-time measurement results of a moving hand (Media 1). Selected frames of measured results with depth pseudo-color (a) and texture (b). The 3-D shape could be successfully reconstructed when the illumination intensities were either weak (c) or strong (d).

Second, we measured a scene consisting of two spatially isolated objects: a stationary power box and a moving hand. A resulting movie (Media 2) is presented, and some selected measurement results from different views are shown in Fig. 12. All surface discontinuities and orientation discontinuities are well preserved. These results demonstrate that our system can measure multiple objects simultaneously in real-time with satisfactory results.

Fig. 12 Real-time measurement results of spatially isolated objects (Media 2).

6. Conclusions, discussions, and future works

A novel four-pattern strategy for recovering the absolute phase and measuring spatially isolated objects has been presented. The method uses two π/2 phase-shifted sinusoidal patterns and two linearly increasing/decreasing ramp patterns to reconstruct the 3-D profiles of multiple objects. Compared with traditional methods combining phase shifting with temporal phase unwrapping, the number of required patterns is reduced to four and the high-frequency phase is encoded in only two fringe patterns, so the measurement time and the errors caused by inter-frame motion are greatly reduced. To implement our four-pattern strategy in a real-time, high-speed environment, we also developed a high-resolution, real-time 3-D shape measurement system by modifying an inexpensive off-the-shelf DLP projector. Since our method uses four patterns per measurement cycle, the projection mechanism of the single-chip DLP projector prevents the use of a single static color-encoded fringe pattern for fast pattern switching. We therefore developed pattern generation hardware based on an FPGA that generates 120 Hz VGA signals directly. We demonstrated that the desired four patterns can be projected and acquired at 360 Hz, provided the camera can keep up with the speed. After linearity compensation of the projector response, several experiments on the proposed method and the developed system were conducted. The experimental results demonstrate its ability to produce fast, dense, and precise depth information for complex scenes at relatively low cost. Because of these favorable characteristics, we believe the proposed technique has great potential in many application fields where fast and accurate 3-D image or shape acquisition is required, such as robotic vision, human-machine interaction, reverse engineering, rapid prototyping, product inspection, and quality control.

Finally, we must mention that the effectiveness of the proposed method relies on the base phase being of sufficient quality to unwrap the high-frequency wrapped phase. The success rate of phase unwrapping is closely related to the period number of the sinusoidal pattern, the noise level of the system, and the influence of unstable ambient light (e.g., fluorescent lamp flicker). Besides, although projector defocus has no direct impact on the high-frequency wrapped phase, we found that when the period number of the sinusoidal pattern exceeds twenty, the effect of defocus can no longer be ignored: it attenuates the amplitude of the sinusoidal pattern and makes the phase unwrapping procedure unstable. We will improve our method in this direction in the future.

Acknowledgments

This project was supported by the Research and Innovation Plan for Graduate Students of Jiangsu Higher Education Institutions, China (Grant No. CXZZ11_0237).

References and links

1. F. Chen, G. M. Brown, and M. M. Song, "Overview of three-dimensional shape measurement using optical methods," Opt. Eng. 39(1), 10–22 (2000).

2. S. S. Gorthi and P. Rastogi, "Fringe projection techniques: Whither we are?" Opt. Lasers Eng. 48(2), 133–140 (2010).

3. S. Zhang, "Recent progresses on real-time 3D shape measurement using digital fringe projection techniques," Opt. Lasers Eng. 48(2), 149–158 (2010).

4. J. M. Huntley and H. O. Saldner, "Shape measurement by temporal phase unwrapping and spatial light modulator-based fringe projector," in Proceedings of Sensors, Sensor Systems, and Sensor Data Processing, O. Loffeld, ed. (SPIE, 1997), pp. 185–192.

5. T. R. Judge and P. J. Bryanston-Cross, "A review of phase unwrapping techniques in fringe analysis," Opt. Lasers Eng. 21(4), 199–239 (1994).

6. M. Takeda and K. Mutoh, "Fourier transform profilometry for the automatic measurement of 3-D object shapes," Appl. Opt. 22(24), 3977–3982 (1983).

7. V. Srinivasan, H. C. Liu, and M. Halioua, "Automated phase-measuring profilometry of 3-D diffuse objects," Appl. Opt. 23(18), 3105–3108 (1984).

8. Q. Zhang and X. Su, "High-speed optical measurement for the drumhead vibration," Opt. Express 13(8), 3110–3116 (2005).

9. X. Y. Su and W. J. Chen, "Fourier transform profilometry: a review," Opt. Lasers Eng. 35(5), 263–284 (2001).

10. L. Guo, X. Su, and J. Li, "Improved Fourier transform profilometry for the automatic measurement of 3D object shapes," Opt. Eng. 29(12), 1439–1444 (1990).

11. X.-Y. Su, G. von Bally, and D. Vukicevic, "Phase-stepping grating profilometry: utilization of intensity modulation analysis in complex objects evaluation," Opt. Commun. 98(1-3), 141–150 (1993).

12. J. L. Li, L. G. Hassebrook, and C. Guan, "Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity," J. Opt. Soc. Am. A 20(1), 106–115 (2003).

13. S. Zhang and P. S. Huang, "High-resolution, real-time three-dimensional shape measurement," Opt. Eng. 45(12), 123601 (2006).

14. S. Zhang and S. T. Yau, "High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm," Opt. Eng. 46(11), 113603 (2007).

15. P. Jia, J. Kofman, and C. English, "Two-step triangular-pattern phase-shifting method for three-dimensional object-shape measurement," Opt. Eng. 46(8), 083201 (2007).

16. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Opt. Express 18(5), 5229–5244 (2010).

17. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, "A state of the art in structured light patterns for surface profilometry," Pattern Recognit. 43(8), 2666–2680 (2010).

18. G. Sansoni, M. Carocci, and R. Rodella, "Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors," Appl. Opt. 38(31), 6565–6573 (1999).

19. Y. J. Wang and S. Zhang, "Superfast multifrequency phase-shifting technique with optimal pulse width modulation," Opt. Express 19(6), 5149–5155 (2011).

20. Y. J. Wang, S. Zhang, and J. H. Oliver, "3D shape measurement technique for multiple rapidly moving objects," Opt. Express 19(9), 8539–8545 (2011).

21. P. Wissmann, R. Schmitt, and F. Forster, "Fast and accurate 3D scanning using coded phase shifting and high speed pattern projection," in Proceedings of the 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT) (IEEE, 2011), pp. 108–115.

22. Y. Wang, K. Liu, Q. Hao, D. L. Lau, and L. G. Hassebrook, "Period coded phase shifting strategy for real-time 3-D structured light illumination," IEEE Trans. Image Process. 20(11), 3001–3013 (2011).

23. P. S. Huang, C. P. Zhang, and F. P. Chiang, "High-speed 3-D shape measurement based on digital fringe projection," Opt. Eng. 42(1), 163–168 (2003).

24. S. Zhang, D. Van Der Weide, and J. Oliver, "Superfast phase-shifting method for 3-D shape measurement," Opt. Express 18(9), 9684–9689 (2010).

25. Y. Li, C. F. Zhao, Y. X. Qian, H. Wang, and H. Z. Jin, "High-speed and dense three-dimensional surface acquisition using defocused binary patterns for spatially isolated objects," Opt. Express 18(21), 21628–21635 (2010).

26. S. Zhang and P. S. Huang, "Phase error compensation for a 3-D shape measurement system based on the phase-shifting method," Opt. Eng. 46(6), 063601 (2007).

27. S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45(8), 083601 (2006).

28. VESA standards, http://www.vesa.org/vesa-standards/

Supplementary Material (2)

Media 1: MOV (3384 KB)     
Media 2: MOV (3538 KB)     
