Optica Publishing Group

Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement

Open Access

Abstract

In fringe-projection 3D surface-shape measurement, image saturation results in incorrect intensities in captured images of fringe patterns, leading to phase and measurement errors. An adaptive fringe-pattern projection (AFPP) method was developed to adapt the maximum input gray level in projected fringe patterns to the local reflectivity of an object surface being measured. The AFPP method demonstrated improved 3D measurement accuracy by avoiding image saturation in highly-reflective surface regions while maintaining high intensity modulation across the entire surface. The AFPP method can avoid image saturation and handle varying surface reflectivity, using only two prior rounds of fringe-pattern projection and image capture to generate the adapted fringe patterns.

© 2014 Optical Society of America

1. Introduction

Fringe-Projection Profilometry (FPP) has been commonly used for measuring the three-dimensional (3D) surface shape of an object [1]. Phase-shifted fringe patterns are projected onto an object surface and the 3D coordinates at every camera pixel are computed by triangulation from captured images of the fringe patterns that appear deformed on the surface [2], as shown in Fig. 1. One problem with FPP is image saturation, which occurs when the incoming intensity exceeds the maximum capturing capacity of the camera sensor, due to high luminance [3]. The true intensity is truncated to the maximum quantization level of the camera sensor [Fig. 2], resulting in incorrect intensities and thus significant measurement error.


Fig. 1 Configuration of a camera-projector system for 3D surface-shape measurement showing a fringe-pattern intensity profile.


Fig. 2 Truncated fringe-pattern intensity profile (flat regions) due to image saturation.


Image saturation caused by specular light reflection has been handled using multiple camera viewpoints [4] and projection directions [5] to ensure that at least one projector-lighting and camera-viewing angle combination permits measurement without saturation in previously saturated image regions. These approaches have worked well because specular light reflection is highly dependent on lighting and viewing angles, although greater complexity in system hardware, data acquisition, and data processing has been necessary.

Phase-shifting methods can overcome image saturation if the number of phase shifts is sufficiently high to guarantee at least three unsaturated phase-shifted intensities at the same camera pixel to permit phase retrieval [6,7]. However, this approach has limited accuracy because the phase shifts that ultimately enter the phase computation are non-optimal [8].

A large range in surface reflectivity across an object surface remains a challenging problem. A small camera aperture or exposure time can prevent image saturation at highly-reflective surface regions; however, it simultaneously reduces the intensity modulation at surface regions with low reflectivity, resulting in poor measurement accuracy. Accurate measurement of such a surface can thus be difficult using a single aperture setting or global exposure time.

High dynamic range (HDR) imaging techniques have been applied in FPP [9,10] where images of the deformed fringe-patterns were repeatedly captured at different exposure times. The brightest unsaturated intensity [9] or the intensity that generates the largest intensity modulation [10] from the sequence of images was then used to generate a composite fringe-pattern image pixel-by-pixel. FPP using the composite images could avoid image saturation and achieve a high intensity modulation across the surface. Alternatively, fringe patterns can be projected at multiple maximum input gray levels (MIGL) [11] toward generating composite fringe-pattern images. Although all of the HDR 3D imaging methods above have handled a surface with a large range in reflectivity, one common drawback is the large number of sets of fringe patterns projected and images captured (where all images within a set are acquired at the same MIGL or exposure time). (In this paper, a “set” thus refers to all fringe patterns at multiple phase shifts, fringe frequencies, and fringe directions that are used in the shape measurement, depending on the specific phase-measurement based technique). The drawback of a large number of sets of fringe patterns is partly due to the use of global projector and camera settings that are uniform across the entire pattern or image during fringe pattern projection and image acquisition, respectively.

Adjustment of camera exposure time in HDR 3D imaging above is performed using a single global setting across the entire image. This requires that multiple images be acquired at multiple exposure times, and only in a post process can the pixel-wise composite images be constructed. Alternatively, optical techniques have been used to control the camera exposure time at individual pixels, for example, using a controllable optical attenuator [12]; however, the final exposure setting at saturated pixels was also determined from multiple iterations to generate the final composite images, and additional optical and control hardware was required.

An advantage of changing the projected MIGL instead of the camera exposure time is that the MIGL adjustment can be performed locally on a pixel-to-pixel basis at the initial fringe-pattern projection stage of FPP without additional hardware. This paper presents an adaptive fringe-pattern projection (AFPP) method that modifies the MIGL locally in the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. The method enables only three sets of fringe-pattern images to be required for phase computation and 3D surface-shape measurement. The AFPP method thus improves over existing HDR 3D imaging methods by requiring many fewer sets of fringe patterns projected and images captured to achieve image saturation avoidance.

2. Principle

Calibration of a camera-projector system [Fig. 1] for fringe-projection 3D surface-shape measurement can be performed as for a two-camera system, where the projector is treated as an inverse camera [13,14]. The projector point Pp(xp,yp) matching a camera point Pc(xc,yc) can be determined from two orthogonal absolute phase maps computed from vertical and horizontal phase-shifted sinusoidal fringe patterns [14]. The 3D reconstruction of an object-surface point P(XW,YW,ZW) can then be determined from the known camera coordinates (xc,yc), the matching projector coordinates (xp,yp), and the system-calibration parameters.

2.1 Absolute phase-map generation

In phase-shifting profilometry [15], a sequence of N phase-shifted sinusoidal intensity-profile fringe patterns is projected onto an object surface. The gamma-corrected input intensities to the projector [16] are given by:

$$I_i(x_p,y_p) = I_{Min} + (I_{Max} - I_{Min}) \times \left\{0.5 + 0.5 \times \cos\left[\Phi(x_p,y_p) + \delta_i\right]\right\}^{1/\gamma}, \quad i = 1,2,\ldots,N, \tag{1}$$
where (xp,yp) are the projector coordinates, IMin and IMax are the lower and upper bounds on Ii(xp,yp), respectively, Φ(xp,yp) is the phase, δi = 2πi/N are the phase shifts between patterns, γ is the projector gamma, and N is the number of phase shifts. A phase map is computed by:
$$\Phi(x_c,y_c) = \tan^{-1}\left(\frac{\sum_{i=1}^{N} I_i(x_c,y_c)\sin\delta_i}{\sum_{i=1}^{N} I_i(x_c,y_c)\cos\delta_i}\right), \tag{2}$$
where Ii(xc,yc) represents the captured intensity at camera coordinates (xc,yc). Equation (2) computes a wrapped phase map Φ(xc,yc); heterodyne phase unwrapping [17], employing three sequences (at three fringe frequencies) of N phase-shifted patterns, is used to remove the 2π periodicity and thus eliminate fringe ambiguity in the wrapped phase map and generate an absolute phase map ϕ(xc,yc).
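As a concrete illustration of Eqs. (1) and (2), the sketch below simulates ideal captured intensities of N phase-shifted patterns and recovers the wrapped phase. All values (bias, modulation, pixel count) are synthetic, and the projector gamma is assumed to exactly cancel the 1/γ pre-correction of Eq. (1), so the captured profile is a pure sinusoid. Note that with δi = 2πi/N, recovering Φ with this sign convention requires a sign flip on the arctangent.

```python
import numpy as np

# Synthetic N-step phase retrieval (values are illustrative only).
# The projector gamma is assumed to cancel the 1/gamma pre-correction in
# Eq. (1), so the camera sees an ideal sinusoid A + B*cos(phase + delta_i).
N = 4
deltas = 2 * np.pi * np.arange(1, N + 1) / N           # phase shifts, Eq. (1)
phase_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)

A, B = 100.0, 80.0                                     # bias and modulation
I = A + B * np.cos(phase_true[None, :] + deltas[:, None])  # shape (N, n_pixels)

# Eq. (2): wrapped phase from the N phase-shifted intensities.
num = (I * np.sin(deltas)[:, None]).sum(axis=0)
den = (I * np.cos(deltas)[:, None]).sum(axis=0)
phase_wrapped = -np.arctan2(num, den)  # minus sign follows this delta_i convention
```

Here `np.arctan2` resolves the quadrant automatically, which a plain ratio in Eq. (2) leaves ambiguous.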

2.2 Adaptive Fringe-Pattern Projection (AFPP)

Using a high maximum input gray level (MIGL) to the projector may result in image saturation, where the camera-captured image intensities Ii(xc,yc) of the fringe patterns would be truncated to the maximum gray level of the camera (255 for an 8-bit camera), shown by flat regions in Fig. 2. The resulting incorrect intensities would generate phase error and associated 3D surface-shape measurement error. Image saturation should thus be compensated for in the fringe-pattern projection and image capture procedures before phase computation.

Previous approaches using global parameters that are constant over the entire image (e.g. a global MIGL or exposure time) require acquisition of multiple sets of phase-shifted images, each set with a different value of the global parameter, in order to construct composite images for phase computation. In this paper, an adaptive fringe-pattern projection (AFPP) method locally modifies MIGL values across the pattern to avoid image saturation, instead of employing a single global value that is constant across the pattern. This allows local pixel-level fringe-pattern adaptation in regions that would otherwise be saturated, and the adaptation is performed at the initial fringe-projection stage of FPP. The method requires only three sets of fringe-pattern images for phase computation and surface-shape measurement.

The adaptive fringe-pattern projection (AFPP) method achieves image saturation avoidance using three rounds of fringe-pattern projection with 1) a high global MIGL (constant globally across all pixels of the entire pattern) to identify the saturated camera pixels and determine the corresponding projector pixels to modify, 2) a low global MIGL to determine the local adaptation of MIGLs, and 3) pixel-wise locally adapted MIGLs to perform adaptive fringe-pattern projection for 3D surface-shape measurement, respectively, as detailed below.

1) Round one: determination of saturated camera pixels and projector pixels to modify

Identification and clustering of saturated pixels is performed using a high global MIGL set to 255 in all fringe patterns as it is the highest possible gray level input to the projector (for an 8-bit projector). The camera exposure time is adjusted to ensure high intensity modulation of captured fringe patterns in dark image regions, regardless of image saturation occurring in bright image regions. For each camera pixel, if the captured intensity Iijq(xc,yc) from any fringe-pattern image (for any phase shift i[1,N], fringe frequency j[1,J], or fringe direction q[1,Q]) reaches 255, the pixel is identified as a saturated pixel. A camera mask image MC(xc,yc) is then generated by:

$$M_C(x_c,y_c) = \begin{cases} 255, & \text{for any } I_{ijq}(x_c,y_c) = 255 \\ 0, & \text{otherwise}. \end{cases} \tag{3}$$
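A minimal sketch of this mask construction, using a synthetic stack of captured fringe images (the stack size and the forced saturated cluster are assumptions for illustration):

```python
import numpy as np

# Synthetic stack of captured fringe images (all phase shifts, frequencies,
# and directions flattened along the first axis); sizes are illustrative.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 255, size=(12, 16, 16))  # values 0..254: unsaturated
imgs[:, 4:8, 4:8] = 255                          # force one saturated cluster

# Eq. (3): camera mask is 255 wherever any captured intensity reached 255.
M_C = np.where((imgs == 255).any(axis=0), 255, 0)
```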

In the camera mask image, connected saturated pixels form saturated-pixel clusters. Contours just outside each saturated-pixel cluster (cluster-bounding contours) are determined by border following [18]. Contour pixels are unsaturated and can thus be used to determine matching contour pixels in a projector image, and thereby the projector-image matching cluster-bounding contours. For each contour pixel with camera coordinates (xc,yc), the matching contour-pixel projector coordinates (xp,yp) can be computed by:

$$\begin{cases} x_p = T \times \phi_V(x_c,y_c)/2\pi \\ y_p = T \times \phi_H(x_c,y_c)/2\pi, \end{cases} \tag{4}$$
where absolute phase maps ϕV(xc,yc) and ϕH(xc,yc) are computed from vertical and horizontal fringe patterns, respectively, and T is the projected fringe period. As done for the camera, a projector mask image MP(xp,yp) can be generated to include all matching saturated-pixel clusters, where pixels on and inside all matching cluster-bounding contours are labeled 255 (saturated), and the remaining pixels are labeled 0 (unsaturated). The matching clusters in the projector mask thus indicate where MIGLs should be adapted in all fringe patterns, to specially illuminate surface regions that lead to image saturation when a high global MIGL is used.
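The contour-pixel mapping of Eq. (4) can be sketched as follows. The absolute phase maps here are synthesized directly from known projector coordinates so that the mapping can be checked; the fringe period T and grid sizes are illustrative values:

```python
import numpy as np

# Sketch of Eq. (4): mapping camera pixels to projector coordinates via two
# orthogonal absolute phase maps. The maps are synthesized from known
# projector coordinates purely so the round trip can be verified.
T = 24.0  # projected fringe period (illustrative)
yp_true, xp_true = np.meshgrid(np.arange(8) * 2.0, np.arange(8) * 3.0,
                               indexing="ij")
phi_V = 2 * np.pi * xp_true / T   # vertical fringes encode x_p
phi_H = 2 * np.pi * yp_true / T   # horizontal fringes encode y_p

# Eq. (4): recover the matching projector coordinates from the phases.
xp = T * phi_V / (2 * np.pi)
yp = T * phi_H / (2 * np.pi)
```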

2) Round two: determination of local MIGL adaptation

The adapted MIGLs are determined using a low global MIGL set to 120 in all fringe patterns. The low global MIGL 120 is an approximate median value over the projector input gray level range and was selected to be sufficiently low to ensure that saturation does not occur at any of the K saturated-pixel clusters in the camera mask image determined in Round One. Too high a MIGL value in Round Two would cause some pixels to be saturated in Round Two, and one or more additional rounds would be required to obtain phase values at those remaining saturated pixels. For all saturated pixels that belong to the kth (k=1,2,...,K) cluster, the matching projector coordinates (xp,yp) corresponding to the camera coordinates (xc,yc) are determined using Eq. (4) from two absolute phase maps ϕV(xc,yc) and ϕH(xc,yc) newly computed from the vertical and horizontal fringe patterns, respectively, using the low global MIGL. To determine the relationship of camera-captured intensities to projector-input intensities for each cluster, the captured intensities Iijq,k(xc,yc) at saturated camera pixels (xc,yc), wherever MC(xc,yc)=255, are mapped to their corresponding input intensities Iijq,k(xp,yp) at projector pixels (xp,yp), wherever MP(xp,yp)=255, by:

$$I_{ijq,k}(x_p,y_p) = a_{1,k} \times \left[I_{ijq,k}(x_c,y_c)\right]^{a_{2,k}} + a_{3,k}, \tag{5}$$
where a1,k,a2,k,a3,k are the coefficients that define the intensity mapping for the kth cluster and need to be determined. The coefficients a1,k,a2,k,a3,k are determined for the kth cluster through weighted nonlinear least-squares regression, which iteratively optimizes the three coefficients using the Levenberg-Marquardt algorithm [19]:

$$\underset{\{a_{1,k},\,a_{2,k},\,a_{3,k}\}}{\operatorname{arg\,min}} \left\{ \sum_{i \in [1,N],\, j \in [1,J],\, q \in [1,Q]} \left[ \left[ a_{1,k}^{-1} \times \left( I_{ijq,k}(x_p,y_p) - a_{3,k} \right) \right]^{1/a_{2,k}} - I_{ijq,k}(x_c,y_c) \right]^2 \right\}. \tag{6}$$

The MIGL that maintains the largest intensity modulation without image saturation for the kth cluster is thus determined by:

$$I_{Max,k} = a_{1,k} \times 254^{a_{2,k}} + a_{3,k}. \tag{7}$$
Adapted MIGLs IMax,k (k = 1, 2, ..., K) can thus be obtained to account for all clusters.

In Round Two, the determination of coefficients by Eq. (6) essentially fits a curve to map camera-captured intensities to projector-input intensities in order to estimate the best adapted MIGL corresponding to camera-captured intensity 254 by Eq. (7). Too low a MIGL value in Round Two would cause intensity values to be distributed lower on the curve (farther from 254) and could result in a poorer estimated MIGL corresponding to camera intensity 254.
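The Round-Two fitting can be sketched on synthetic, noise-free cluster data. For brevity, this sketch fits the forward form of Eq. (5) directly with a small damped Gauss-Newton (Levenberg-Marquardt-style) loop, rather than the inverse-mapping objective of Eq. (6); the true coefficients, intensity range, and solver settings are all illustrative.

```python
import numpy as np

# Illustrative cluster data following the power-law mapping of Eq. (5);
# the "true" coefficients are made up for this sketch.
a_true = (0.9, 1.1, 5.0)
I_cam = np.linspace(40.0, 200.0, 60)                  # camera intensities
I_proj = a_true[0] * I_cam ** a_true[1] + a_true[2]   # projector inputs

def fit_mapping(i_cam, i_proj, p0=(1.0, 1.0, 0.0), iters=200, lam=1e-3):
    """Damped Gauss-Newton fit of i_proj = a1 * i_cam**a2 + a3 (Eq. (5))."""
    a1, a2, a3 = p0
    for _ in range(iters):
        f = a1 * i_cam ** a2 + a3
        r = i_proj - f
        # Jacobian of the model w.r.t. (a1, a2, a3)
        J = np.column_stack([i_cam ** a2,
                             a1 * i_cam ** a2 * np.log(i_cam),
                             np.ones_like(i_cam)])
        JTJ = J.T @ J
        # Marquardt damping keeps the step well-conditioned
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), J.T @ r)
        a1, a2, a3 = a1 + step[0], a2 + step[1], a3 + step[2]
    return a1, a2, a3

a1, a2, a3 = fit_mapping(I_cam, I_proj)
I_max_k = a1 * 254.0 ** a2 + a3   # Eq. (7): adapted MIGL for this cluster
```

With the coefficients recovered, Eq. (7) gives the adapted MIGL as the projector input whose captured intensity would reach 254, just below saturation.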

3) Round three: adaptive fringe-pattern projection

Adapted phase-shifted fringe patterns for 3D surface-shape measurement are generated by using the high MIGL 255 (employed in Round One) at projector pixels wherever MP(xp,yp)=0, and the adapted MIGLs IMax,k(k=1,2,...,K) at projector pixels wherever MP(xp,yp)=255. Note that if a MIGL value lower than 255 were used in Round One, the system would not be able to detect at which pixels saturation would occur, when MIGL is later set to 255 in Round Three for all pixels that did not saturate in Round One.
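Round Three can then compose the adapted patterns by evaluating Eq. (1) with a per-pixel MIGL map. In this sketch, the mask location, the adapted value 170, the gamma, and the fringe period are all illustrative:

```python
import numpy as np

# Sketch of Round Three: build one adapted phase-shifted fringe pattern per
# Eq. (1), using MIGL 255 outside the projector-mask clusters and a lowered,
# adapted MIGL inside them. All parameter values here are illustrative.
H, W = 64, 64
gamma, I_min = 2.2, 0.0
N, i, period = 4, 1, 24.0

M_P = np.zeros((H, W))
M_P[20:30, 20:30] = 255                 # one matching cluster in the mask

migl = np.full((H, W), 255.0)           # high global MIGL elsewhere
migl[M_P == 255] = 170.0                # adapted MIGL inside the cluster

xp = np.broadcast_to(np.arange(W, dtype=float), (H, W))
phase = 2 * np.pi * xp / period
delta_i = 2 * np.pi * i / N
# Eq. (1) with the spatially varying upper bound migl in place of I_Max
pattern = I_min + (migl - I_min) * (0.5 + 0.5 * np.cos(phase + delta_i)) ** (1.0 / gamma)
```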

Fringe patterns generated in this way permit high intensity modulation across each captured fringe-pattern image, as in the composite fringe-pattern images obtained by high dynamic range 3D imaging methods, but with additional advantages. Instead of repeatedly capturing a set of fringe patterns at many camera exposure times or MIGLs, the AFPP method captures one set of adapted fringe patterns that locally adjust the illumination according to the reflectivity characteristics of local surface regions, after only two preliminary rounds. The AFPP method also requires no additional optical or control hardware.

3. Experiments

To demonstrate the effectiveness of the AFPP method, a board with a large range in surface reflectivity, a black planar board with a 10 × 8 grid of white circles, was measured in three tests: with no AFPP, simplified AFPP, and AFPP, respectively. For no AFPP, two tests were performed, the first with a high global MIGL set to 255 for all fringe patterns, the second with a low global MIGL set to 120. For simplified AFPP, the adapted fringe patterns were generated using the high MIGL in Round One of AFPP (MIGL = 255) at pixels labeled 0 (unsaturated) in the projector mask image, and the low MIGL in Round Two of AFPP (MIGL = 120) at pixels labeled 255 (saturated) in the projector mask image. There was no further adaptation of MIGLs for the simplified AFPP. For AFPP, the full method described in Section 2.2 was followed. A wooden mask with different surface colors was also measured using 255, 120, and adaptive MIGL methods.

For all tests, the measurement system used a monochrome 1024 × 1024 8-bit CCD camera and a 1024 × 768 8-bit digital-light-processing (DLP) projector. A four-step phase-shifting algorithm, N = 4 in Eq. (2), was used for phase computation. Three fringe frequencies, corresponding to periods of 24, 26, and 28, were used for phase unwrapping.
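Heterodyne unwrapping with closely spaced periods can be sketched for two of the periods used here (24 and 26); the full three-frequency scheme cascades the same beat-phase idea. The positions x are synthetic and kept within one beat period:

```python
import numpy as np

# Two-frequency heterodyne unwrapping sketch with periods 24 and 26; the
# three-frequency scheme in the paper extends this idea. Positions are synthetic.
T1, T2 = 24.0, 26.0
T12 = T1 * T2 / (T2 - T1)                          # beat period: 312

x = np.linspace(0.0, 300.0, 400)                   # stay within one beat period
phi1 = np.angle(np.exp(1j * 2 * np.pi * x / T1))   # wrapped phase, period T1
phi2 = np.angle(np.exp(1j * 2 * np.pi * x / T2))   # wrapped phase, period T2

phi12 = np.mod(phi1 - phi2, 2 * np.pi)             # beat phase, unambiguous over T12
phi1_coarse = phi12 * (T12 / T1)                   # coarse absolute-phase estimate
m = np.round((phi1_coarse - phi1) / (2 * np.pi))   # fringe order
phi_abs = phi1 + 2 * np.pi * m                     # absolute (unwrapped) phase
```

The beat phase removes the 2π ambiguity over the long period T12, and the fine wrapped phase phi1 restores full resolution.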

4. Results and discussion

4.1 Planar board measurement

1) Measurement with no AFPP

For the first test, one of the vertical fringe patterns using a high global MIGL 255 is shown in Fig. 3(a), with the corresponding captured fringe-pattern image in Fig. 3(b). With the high MIGL 255, image saturation occurred in some of the white-circle regions in Fig. 3(b), resulting in errors seen as discontinuities in the absolute phase map in Fig. 3(c). For the second test, one of the vertical fringe patterns using a low global MIGL 120 is shown in Fig. 3(d), and the corresponding captured fringe-pattern image in Fig. 3(e). With the low MIGL 120, intensity modulation of the captured fringe pattern is very low in the black region, as seen in Fig. 3(e), with resulting errors slightly visible as discontinuities in the absolute phase map in Fig. 3(f). Image saturation and low intensity modulation will both result in large measurement errors, as demonstrated below.


Fig. 3 A single vertical fringe pattern and corresponding captured image in planar board measurement with no AFPP: (a) fringe pattern with global MIGL 255, (b) captured image of planar board with projection of the pattern in (a), (c) absolute phase map of planar board from (b), (d) fringe pattern with global MIGL 120, (e) captured image of planar board with projection of the pattern in (d), (f) absolute phase map of planar board from (e).


Results of the planar board measurement using a global MIGL 255 are shown in Fig. 4: Figs. 4(a)-4(c) for the black region and Figs. 4(d)-4(f) for the white-circle regions. Measured points in the black and white-circle regions are shown in Figs. 4(a) and 4(d), respectively. A plane was fit to all measured points excluding the white-circle regions, which had large errors. The measurement error at each point was calculated based on the distance between the measured point and the fitted plane. The 3D reconstruction from the measurement can be clearly seen to properly represent the black region of the board in Figs. 4(a)-4(b), with most errors in the range [-0.25, +0.25] mm, seen in Figs. 4(b)-4(c). However, the measurement in the white-circle regions of the board had extremely large errors, the extent of which can be observed in Figs. 4(d) and 4(e), where there is a scatter of 3D points very far from the concentration of points close to the board, seen in Z and in X and Y, respectively, and in Fig. 4(f), where a concentration of large errors up to 1000 mm and more extreme outlier errors are seen. The much larger errors in the white-circle regions [Figs. 4(e)-4(f)] compared to the black regions [Figs. 4(b)-4(c)] are due to image saturation that occurred in the white-circle regions [Fig. 3(b)]. The root mean square (RMS) error was 0.287 mm in the black region, 558.277 mm in the white-circle regions, and 14.035 mm over the entire surface, respectively.


Fig. 4 Results of planar board measurement with global MIGL 255 for: (a)-(c) black region: (a) 3D point cloud data, (b) point cloud view of board showing measurement errors in colour, and (c) measurement errors; and (d)-(f) white regions: (d) 3D point cloud data, (e) point cloud view of board showing measurement errors in colour, and (f) measurement errors. Note that scales are different for black and white regions because of the size of errors.


Results of the planar board measurement using a global MIGL 120 are shown in Fig. 5: Figs. 5(a)-5(c) for the black region and Figs. 5(d)-5(f) for the white-circle regions. Measured points in the black and white-circle regions are shown in Figs. 5(a) and 5(d), respectively. A new plane was fit to all the measured points excluding the black region, which had large errors. As before, the measurement error at each point was calculated based on the distance between the measured point and the fitted plane. The 3D reconstruction from the measurement can be clearly seen to properly represent the white-circle regions of the board in Figs. 5(d)-5(e), with most errors in the range [-0.25, +0.25] mm, seen in Figs. 5(e)-5(f). However, the measurement in the black region of the board had large errors, the extent of which can be observed in Figs. 5(a) and 5(b), where there are two large groups of 3D points approximately 250 mm and 500 mm in Z away from the concentration of points close to the board and as much as approximately 100 mm away in X and Y, respectively, and in Fig. 5(c), where large errors are also seen. Even greater errors are seen as scattered 3D points in Figs. 5(a) and 5(b) and as outliers in Fig. 5(c). The larger measurement errors in the black region [Figs. 5(b)-5(c)] compared to those in the white-circle regions [Figs. 5(e)-5(f)] are due to the low intensity modulation in the black region [Fig. 3(e)], although these errors were not as large as the errors due to image saturation with MIGL 255 [Figs. 4(e)-4(f)]. The larger errors due to image saturation compared to those from low intensity modulation are also seen in the absolute phase map discontinuities in Figs. 3(c) and 3(f) described earlier. The RMS error was 58.275 mm in the black region, 0.260 mm in the white-circle regions, and 52.235 mm over the entire surface, respectively.
Measurement results with no AFPP demonstrate that using a single global MIGL, whether high or low, with a single camera aperture setting and exposure time, accurate measurement cannot be achieved simultaneously at both black and white-circle regions.


Fig. 5 Results of planar board measurement with global MIGL 120 for: (a)-(c) black region: (a) 3D point cloud data, (b) point cloud view of board showing measurement errors in colour, and (c) measurement errors; and (d)-(f) white regions: (d) 3D point cloud data, (e) point cloud view of board showing measurement errors in colour, and (f) measurement errors. Note that scales are different for black and white regions because of the size of errors. Scales for Figs. 5(d)-5(f) are the same as in Figs. 4(a)-4(c), where the errors were low.


2) Measurement with simplified AFPP

Results of the planar board measurement with simplified AFPP are shown in Fig. 6. In all the adapted fringe patterns, the high MIGL 255 was used for illuminating the black region, and the low MIGL 120 was used for illuminating the white-circle regions. Errors in both black and white-circle regions were reduced to similar low levels. Plane fitting was thus performed to all measured points. The RMS error was 0.277 mm in the black region, 0.253 mm in the white-circle regions, and 0.273 mm over the entire surface. The simplified AFPP was an improvement over no AFPP with 99.52% and 99.95% error reductions in the black and white-circle regions, respectively. However, the low MIGL 120 for all white-circle regions may not be ideal. Firstly, the local luminance is, in general, affected by the local surface normal, and the lighting and viewing angles in addition to the local surface reflectivity. The MIGL should ideally be locally adapted for each specific white-circle region to avoid saturation. Secondly, the MIGL 120 may be too low to ensure high intensity modulation without saturation. The improvement by local adaptation was demonstrated in the AFPP (described below).


Fig. 6 Results of planar board measurement with simplified AFPP: (a) 3D point cloud data of entire surface, (b) point cloud view of board showing measurement errors in colour for entire surface. Scales for Figs. 6(a)-6(b) are the same as in Figs. 5(d)-5(e) and Figs. 4(a)-4(b), where the errors were low.


3) Measurement with AFPP

One of the adapted vertical fringe patterns used in AFPP is shown in Fig. 7(a). With adapted MIGLs computed in Eq. (7) ranging from 151 to 185 for illuminating different white-circle regions, large intensity modulation without image saturation can be achieved in both black and white-circle regions in the corresponding captured image of the fringe-pattern on the board [Fig. 7(b)]. The absolute phase map of the board for AFPP in Fig. 7(c) has no apparent discontinuity or visible error, and is an improvement compared to the discontinuities seen in Fig. 3(c) due to MIGL 255 in the white regions, and Fig. 3(f) due to MIGL 120 in the dark regions. Results of the planar board measurement with AFPP are shown in Fig. 8. The RMS error was 0.276 mm in the black region, 0.229 mm in the white-circle regions, and 0.269 mm over the entire surface. Compared to measurement with no AFPP, 99.53% and 99.96% error reductions were achieved in the black and white-circle regions, respectively. Compared to measurement with the simplified AFPP, similar measurement accuracy was achieved in the black region, while an error reduction of 9.49% was achieved in the white-circle regions. This is because the adapted MIGLs ranging from 151 to 185 were higher than the MIGL 120 used in the simplified AFPP, generating greater intensity modulation and thus better associated measurement accuracy.


Fig. 7 A single adapted vertical fringe pattern and corresponding captured image of the fringe pattern on the board using AFPP: (a) fringe pattern with adapted MIGLs, (b) captured image of the planar board with projection of the pattern in (a), (c) absolute phase map of the planar board from (b).


Fig. 8 Results of planar board measurement with AFPP: (a) 3D point cloud data of entire surface, (b) point cloud view of board showing measurement errors in colour for entire surface. Scales for Figs. 8(a)-8(b) are the same as in Figs. 6(a)-6(b), Figs. 5(d)-5(e), and Figs. 4(a)-4(b), where the errors were low.


A comparison of the measurement accuracy of the three methods, no AFPP, simplified AFPP, and AFPP, is presented in Table 1. The AFPP method not only solves the problem of large measurement errors due to image saturation at highly-reflective white-circle regions, but also achieves the best measurement accuracy in the white-circle regions by employing locally adapted high intensity modulation, thus handling the varying luminance across the surface due to projector-lighting and camera-viewing angles. The simplified AFPP uses a low MIGL value set globally across the pattern at previously saturated regions, and thus requires less computation than the full AFPP method. Since the AFPP adapts the MIGL locally and ensures the highest MIGL that avoids saturation in all regions, it performs better than the simplified AFPP, especially in handling variation in surface reflectivity or illumination across the surface.


Table 1. Comparison of Measurement Accuracy in Fringe-Pattern Projection Methods

The AFPP method is a comprehensive approach to saturation avoidance that adaptively adjusts the MIGL to the highest value that will not cause saturation for all previously saturated regions, and sets the MIGL to the highest possible value, 255, for all regions that do not saturate at this level. Both bright and dark regions are thus optimally handled.

4.2 Wooden mask measurement

Results of the fringe-pattern projection and image capture procedures in measuring the wooden mask using the AFPP method are shown in Fig. 9. The wooden mask has both bright and dark colors across the surface [Fig. 9(a)]. The camera exposure time was set to have high intensity modulation at surface regions with dark color, resulting in image saturation at certain surface regions with bright color, as seen in Fig. 9(b). From a camera mask image, containing saturated-pixel clusters [Fig. 9(c)], contours of saturated-pixel clusters are extracted, as shown in Fig. 9(d). The matching contours of saturated-pixel clusters are shown in a projector image [Fig. 9(e)]. A projector mask image with matching clusters is shown in Fig. 9(f). One of the adapted fringe patterns for 3D measurement is shown in Fig. 9(g). Using adapted MIGLs at matching clusters in Fig. 9(f) to illuminate the corresponding saturated-pixel clusters in Fig. 9(c), image saturation can be avoided in the corresponding camera-captured image of the fringe pattern on the wooden mask, as seen in Fig. 9(h).


Fig. 9 Fringe-pattern projection and camera-image capture in measurement of wooden mask using AFPP: (a) wooden-mask image, (b) camera-captured image of vertical fringe pattern on wooden mask showing saturation, (c) camera mask image with saturated-pixel clusters, (d) camera image contours of saturated-pixel clusters, (e) projector image matching contours corresponding to contours in (d), (f) projector mask image with matching clusters corresponding to saturated-pixel clusters in (c), (g) vertical fringe pattern with adapted MIGLs at matching projector-image clusters in (f), (h) camera-captured image of fringe pattern on wooden mask showing no saturation with projection of adapted MIGLs in (g).


Results of the wooden mask measurements are also shown in Fig. 10 as views of raw 3D point cloud data [Figs. 10(a)-10(d)] and a range image showing depth (Z) values of all measured points [Fig. 10(e)]. No smoothing was applied in any figure. For global MIGL 255 [Fig. 10(a)], holes are seen where large errors occurred due to image saturation. With MIGL 255 at unsaturated regions and MIGL 120 at previously saturated regions [Fig. 10(b)], and with MIGL 255 at unsaturated regions and adapted MIGLs by AFPP at previously saturated regions [Fig. 10(c)], detailed surface geometry is measured without the large errors in regions that were previously saturated. The improvement in accuracy of 0.024 mm in the brighter regions with the AFPP method compared to the simplified AFPP method, seen for the planar board in Figs. 6(b) and 8(b), respectively, and Table 1, is not readily seen for the wooden mask displayed as a 3D cloud of points. However, absolute phase maps of the wooden mask in Figs. 11(a)-11(c) show visible improvement from MIGL 255 to 120 to adapted MIGLs by AFPP, seen as decreased discontinuities in the phase maps.


Fig. 10 Results of the wooden mask measurement: view of raw 3D point cloud data: (a) with global MIGL 255, (b) with MIGL 255 at unsaturated regions and MIGL 120 at previously saturated regions (black holes) in (a), (c) with MIGL 255 at unsaturated regions and adapted MIGLs by AFPP at previously saturated regions in (a), (d) alternate view with AFPP, and (e) colour representation of the wooden mask range image showing depth (Z) values of all measured points with AFPP. (No smoothing was applied in all figures).


Fig. 11 Absolute phase maps of the wooden mask: (a) for global MIGL 255, (b) for global MIGL 120, and (c) for AFPP adapted MIGLs.


4.3 Extended use of AFPP

The goal of the presented AFPP method was to use a minimal number of sets of phase-shifted patterns (images) for saturation avoidance, not to minimize the number of phase-shifted patterns within a set. The AFPP method is independent of the phase-analysis technique employed and is thus applicable with various known techniques that reduce the number of patterns within a set, such as epipolar techniques that use only horizontal or only vertical fringe patterns, spatial phase unwrapping that uses only a single fringe frequency (period), and single-image (one-shot) methods [1]. While the AFPP method aims to achieve saturation avoidance using a minimal number of sets of phase-shifted patterns, there was no attempt to optimize other aspects of the method. In addition to the above methods that reduce the number of images required, even to a single image per AFPP round, a variety of known techniques could be used to minimize computation, such as closed-form curve fitting, use of many fewer points (projector-input and camera-captured intensities) to determine equation coefficients, parallel processing, and GPU processing.

5. Conclusions

An adaptive fringe-pattern projection (AFPP) method that adapts the projector maximum input gray levels (MIGLs) to local surface reflectivity was developed to overcome the image saturation problem in fringe-projection 3D surface-shape measurement. By locally modifying the MIGLs in projected fringe patterns, highly-reflective surface regions can be illuminated with adaptively-lowered MIGLs to avoid image saturation, while surface regions with low reflectivity are simultaneously illuminated with the highest MIGL to maintain high intensity modulation. Experiments verified that the AFPP method can attain higher 3D measurement accuracy across a surface with a large range in reflectivity than measurement without AFPP. Experiments also demonstrated that the AFPP local adaptation of projected fringe patterns was better than simpler adaptation using a single low MIGL for saturated regions. The AFPP method was able to handle varying luminance across the surface due to lighting and viewing angles in addition to surface reflectivity. The AFPP method uses only two prior fringe-pattern projection and image-capture rounds to generate the adapted fringe patterns for 3D measurement, without the many fringe-pattern projection and image-capture sets used in previous high-dynamic-range 3D imaging methods, and without the added complexity of multiple camera viewpoints, projection directions, and optical and control hardware.

Acknowledgments

This research has been funded by the Natural Sciences and Engineering Research Council of Canada.

References and links

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

2. J. Salvi, J. Pagés, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognit. 37(4), 827–849 (2004). [CrossRef]  

3. C. Waddington and J. Kofman, “Analysis of measurement sensitivity to illuminance and fringe-pattern gray levels for fringe-pattern projection adaptive to ambient lighting,” Opt. Lasers Eng. 48(2), 251–256 (2010). [CrossRef]  

4. G. H. Liu, X. Y. Liu, and Q. Y. Feng, “3D shape measurement of objects with high dynamic range of surface reflectivity,” Appl. Opt. 50(23), 4557–4565 (2011). [CrossRef]   [PubMed]  

5. R. M. Kowarschik, J. Gerber, G. Notni, and W. Schreiber, “Adaptive optical three-dimensional measurement with structured light,” Opt. Eng. 39(1), 150–158 (2000). [CrossRef]  

6. Y. Chen, Y. He, and E. Hu, “Phase deviation analysis and phase retrieval for partial intensity saturation in phase-shifting projected fringe profilometry,” Opt. Commun. 281(11), 3087–3090 (2008). [CrossRef]  

7. E. Hu, Y. He, and W. Wu, “Further study of the phase-recovering algorithm for saturated fringe patterns with a larger saturation coefficient in the projection grating phase-shifting profilometry,” Optik (Stuttg.) 121(14), 1290–1294 (2010). [CrossRef]  

8. D. W. Phillion, “General methods for generating phase-shifting interferometry algorithms,” Appl. Opt. 36(31), 8098–8115 (1997). [CrossRef]   [PubMed]  

9. S. Zhang and S. Yau, “High dynamic range scanning technique,” Opt. Eng. 48(3), 033604 (2009). [CrossRef]  

10. H. Z. Jiang, H. J. Zhao, and X. D. Li, “High dynamic range fringe acquisition: a novel 3-D scanning technique for high-reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012). [CrossRef]  

11. C. Waddington and J. Kofman, “Saturation avoidance by adaptive fringe projection in phase-shifting 3D surface-shape measurement,” in Proceedings of IEEE Symposium on Optomechatronic Technologies (Institute of Electrical and Electronics Engineers, New York, 2010). [CrossRef]  

12. J. Jeong, D. Hong, and H. Cho, “Measurement of partially specular objects by controlling imaging range,” Proc. SPIE 6718, 671808 (2007). [CrossRef]  

13. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2009). [CrossRef]  

14. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

15. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]   [PubMed]  

16. C. R. Coggrave and J. M. Huntley, “High-speed surface profilometer based on a spatial light modulator and pipeline image processor,” Opt. Eng. 38(9), 1573–1581 (1999). [CrossRef]  

17. C. Reich, R. Ritter, and J. Thesing, “3-D shape measurement of complex objects by combining photogrammetry and fringe projection,” Opt. Eng. 39(1), 224–231 (2000). [CrossRef]  

18. S. Suzuki and K. Abe, “Topological structural analysis of digitized binary images by border following,” Comput. Vis. Graph. Image Process. 30(1), 32–46 (1985). [CrossRef]  

19. J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in Numerical Analysis, G. A. Watson, ed. (Springer, Berlin, 1977).

Figures (11)

Fig. 1 Configuration of a camera-projector system for 3D surface-shape measurement showing a fringe-pattern intensity profile.
Fig. 2 Truncated fringe-pattern intensity profile (flat regions) due to image saturation.
Fig. 3 A single vertical fringe pattern and corresponding captured image in planar board measurement with no AFPP: (a) fringe pattern with global MIGL 255, (b) captured image of planar board with projection of the pattern in (a), (c) absolute phase map of planar board from (b), (d) fringe pattern with global MIGL 120, (e) captured image of planar board with projection of the pattern in (d), (f) absolute phase map of planar board from (e).
Fig. 4 Results of planar board measurement with global MIGL 255 for: (a)-(c) black region: (a) 3D point cloud data, (b) point cloud view of board showing measurement errors in colour, and (c) measurement errors; and (d)-(f) white regions: (d) 3D point cloud data, (e) point cloud view of board showing measurement errors in colour, and (f) measurement errors. Note that scales are different for black and white regions because of the size of errors.
Fig. 5 Results of planar board measurement with global MIGL 120 for: (a)-(c) black region: (a) 3D point cloud data, (b) point cloud view of board showing measurement errors in colour, and (c) measurement errors; and (d)-(f) white regions: (d) 3D point cloud data, (e) point cloud view of board showing measurement errors in colour, and (f) measurement errors. Note that scales are different for black and white regions because of the size of errors. Scales for Figs. 5(d)-5(f) are the same as in Figs. 4(a)-4(c), where the errors were low.
Fig. 6 Results of planar board measurement with simplified AFPP: (a) 3D point cloud data of entire surface, (b) point cloud view of board showing measurement errors in colour for entire surface. Scales for Figs. 6(a)-6(b) are the same as in Figs. 5(d)-5(e) and Figs. 4(a)-4(b), where the errors were low.
Fig. 7 A single adapted vertical fringe pattern and corresponding captured image of the fringe pattern on the board using AFPP: (a) fringe pattern with adapted MIGLs, (b) captured image of the planar board with projection of the pattern in (a), (c) absolute phase map of the planar board from (b).
Fig. 8 Results of planar board measurement with AFPP: (a) 3D point cloud data of entire surface, (b) point cloud view of board showing measurement errors in colour for entire surface. Scales for Figs. 8(a)-8(b) are the same as in Figs. 6(a)-6(b), Figs. 5(d)-5(e), and Figs. 4(a)-4(b), where the errors were low.
Fig. 9 Fringe-pattern projection and camera-image capture in measurement of wooden mask using AFPP: (a) wooden-mask image, (b) camera-captured image of vertical fringe pattern on wooden mask showing saturation, (c) camera mask image with saturated-pixel clusters, (d) camera image contours of saturated-pixel clusters, (e) projector image matching contours corresponding to contours in (d), (f) projector mask image with matching clusters corresponding to saturated-pixel clusters in (c), (g) vertical fringe pattern with adapted MIGLs at matching projector-image clusters in (f), (h) camera-captured image of fringe pattern on wooden mask showing no saturation with projection of adapted MIGLs in (g).
Fig. 10 Results of the wooden mask measurement: views of raw 3D point cloud data: (a) with global MIGL 255, (b) with MIGL 255 at unsaturated regions and MIGL 120 at previously saturated regions (black holes) in (a), (c) with MIGL 255 at unsaturated regions and adapted MIGLs by AFPP at previously saturated regions in (a), (d) alternate view with AFPP, and (e) colour representation of the wooden mask range image showing depth (Z) values of all measured points with AFPP. (No smoothing was applied in any of the figures.)
Fig. 11 Absolute phase maps of the wooden mask: (a) for global MIGL 255, (b) for global MIGL 120, and (c) for AFPP adapted MIGLs.

Tables (1)

Table 1 Comparison of Measurement Accuracy in Fringe-Pattern Projection Methods

Equations (7)

(1) $I_i(x^p, y^p) = I_{\mathrm{Min}} + (I_{\mathrm{Max}} - I_{\mathrm{Min}}) \times \left\{0.5 + 0.5\cos\left[\Phi(x^p, y^p) + \delta_i\right]\right\}^{1/\gamma}, \quad i = 1, 2, \ldots, N$

(2) $\Phi(x^c, y^c) = \tan^{-1}\!\left(\dfrac{\sum_{i=1}^{N} I_i(x^c, y^c)\sin\delta_i}{\sum_{i=1}^{N} I_i(x^c, y^c)\cos\delta_i}\right)$

(3) $M^C(x^c, y^c) = \begin{cases} 255, & \text{for any } I_{ijq}(x^c, y^c) = 255 \\ 0, & \text{otherwise} \end{cases}$

(4) $\begin{cases} x^p = T \times \phi_V(x^c, y^c)/2\pi \\ y^p = T \times \phi_H(x^c, y^c)/2\pi \end{cases}$

(5) $I_{ijq,k}(x^p, y^p) = a_{1,k} \times I_{ijq,k}^{a_{2,k}}(x^c, y^c) + a_{3,k}$

(6) $\underset{\{a_{1,k},\, a_{2,k},\, a_{3,k}\}}{\arg\min} \left\{ \sum_{i \in [1,N],\, j \in [1,J],\, q \in [1,Q]} \left[ \left[ a_{1,k}^{-1} \times \left( I_{ijq,k}(x^p, y^p) - a_{3,k} \right) \right]^{1/a_{2,k}} - I_{ijq,k}(x^c, y^c) \right]^2 \right\}$

(7) $I_{\mathrm{Max},k} = a_{1,k} \times 254^{a_{2,k}} + a_{3,k}$
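The fringe-pattern generation and arctangent phase-recovery relations above can be sketched as follows. This is illustrative Python, not the authors' code: it builds N gamma-pre-compensated phase-shifted patterns, simulates the projector's gamma so the captured fringes become sinusoidal, and recovers the wrapped phase per pixel. The values of N, gamma, the fringe period, and the intensity range are assumptions:

```python
# Illustrative sketch: gamma-pre-compensated phase-shifted fringe
# generation and wrapped-phase recovery (assumed parameter values).
import numpy as np

N = 4                                  # number of phase shifts (assumed)
deltas = 2 * np.pi * np.arange(N) / N  # phase shifts delta_i
gamma = 2.2                            # assumed projector gamma
i_min, i_max = 0.0, 255.0              # intensity range (MIGL = 255)

width = 512
phi = 2 * np.pi * np.arange(width) * 8 / width   # 8 fringe periods

# Pre-compensate with exponent 1/gamma so that after the projector
# applies its gamma, the projected fringes are sinusoidal.
patterns = [i_min + (i_max - i_min) *
            (0.5 + 0.5 * np.cos(phi + d)) ** (1.0 / gamma)
            for d in deltas]

# Simulate the projector's gamma: captured fringes are now sinusoidal.
captured = [i_max * (p / i_max) ** gamma for p in patterns]

# Wrapped phase from the N captured images (arctangent formula).
num = sum(c * np.sin(d) for c, d in zip(captured, deltas))
den = sum(c * np.cos(d) for c, d in zip(captured, deltas))
phase = np.arctan2(num, den)
```

Because the pre-compensation exponent cancels the simulated gamma exactly, the recovered wrapped phase tracks the input phase (up to the sign convention of the arctangent formula and modulo 2π).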