Optica Publishing Group

Novel method for increasing accuracy of projection moiré contouring of large surfaces

Open Access

Abstract

Projection moiré is a high-resolution, non-contact, full-field method for measuring out-of-plane displacements. Here, we develop a novel model for the projection moiré system and derive a universal formula expressing the relation between phase variation and out-of-plane displacement. To eliminate the errors caused by pixel mismatching and by variation of the sensitivity coefficient, an iterative algorithm is presented that extends the measurement range to the order of the depth of field. Computer simulations and actual experiments prove the validity of the proposed method.

© 2016 Optical Society of America

1. Introduction

Moiré topography was introduced as a quantitative optical measurement technique almost 40 years ago [1,2]. Owing to its advantages of non-contact, full-field measurement and high resolution, moiré topography is now widely used in many fields [3–7]. For example, projection moiré interferometry has been used to measure the structural deformation of micro air vehicle (MAV) wings during a series of wind tunnel tests [3]. Moiré topography has also been used successfully to study the mechanical behavior of the eardrum and the middle ear under static pressure [4]. The moiré method is based on the fact that superposing two gratings of similar spatial frequency forms a low-frequency pattern of iso-displacement fringes. The most remarkable advantage of moiré topography is that it magnifies the deformation without distortion, so that high measurement resolution can be achieved.

In a projection moiré system, a pattern of lines is projected onto the reference plane and then imaged onto a standard grating, where the moiré is formed. The pattern of lines modulated by the measured surface forms moiré fringes with the standard grating placed before the sensor. Using phase-shifting techniques [8–11], we can easily calculate the full-field phase before and after the deformation; their difference is the phase variation caused by the deformation. To extract the out-of-plane displacement, a theoretical model must be established and the relationship between phase variation and out-of-plane displacement must be derived. However, most existing models solve this relationship for one particular optical setup configuration rather than for a general situation, and regard the sensitivity coefficient as a constant [10–12]. Some authors have noticed these problems. For example, Cosola et al. developed a 3D moiré setup combining intrinsic moiré (IM) and projection moiré (PM) [13]. The in-plane displacements measured with IM were corrected using the information on the object curvature provided by PM. Sciammarella and his collaborators considered spatial variations of sensitivity, non-collimated projection of line patterns, etc. [14–16]. They corrected the x-coordinates by implementing a calibration process, and a number of displacements were given to the object in order to estimate Taylor coefficients and finally correct the z-coordinates [14]. This paper puts forward a novel method dealing with pixel mismatching and variation of the sensitivity coefficient.

In our previous work, we preliminarily built a model and put forward a formula depicting the phase–height relationship [17]. Yet it is theoretically inadequate: in the formula, the real displacement of a point in the vertical direction is replaced by an oblique line segment. The model and the formula work well if the out-of-plane displacement is very small compared with the distance from the object to the receiving system. Nevertheless, when the displacement is large enough, we cannot neglect the mismatching of pixels between the two sequences of pictures. Moreover, as the phase change and out-of-plane displacement are not proportional, it is inappropriate to carry out a calibration experiment to work out one sensitivity coefficient for the whole field and consider it a constant. This study aims to eliminate the errors caused by pixel mismatching and by variation of the sensitivity coefficient: a general model for an arbitrarily arranged projection moiré system is derived, together with a universal formula depicting the relationship between phase variation and out-of-plane displacement. In order to compute the actual deformation, all the system parameters must be defined; some can be easily measured, while others need to be calculated. The present study includes numerical simulations of PM measurements and actual experiments. The former demonstrate the feasibility of the proposed moiré model, while the latter prove that it is possible to reduce measurement errors on out-of-plane displacements by up to 85%.

2. Theory

2.1 Optical setup and fringe formation

Let us consider the projection moiré setup that is schematically presented in Fig. 1. The system can be divided into three parts: a projection system in the lower left, a receiving system in the lower right, and the measurement space. The projection system is usually a projector, but here we separate it into a light source, a lens L1, and a standard grating G1. The receiving system includes a lens L2, a standard grating G2 placed on a high-precision PZT moving stage, and a CCD behind them. We establish three right-handed coordinate systems: the world coordinate system (OXYZ) fixed on the reference plane, and two other systems (O1X1Y1Z1, O2X2Y2Z2) fixed on the lenses, with the lens centers O1 and O2 as origins and the lens axes serving as the Z1 and Z2 axes. The gratings are positioned at the focal planes behind the lenses. The high-precision PZT moving stage allows us to move G2 for phase shifting. The illumination system casts light through grating G1 onto the object's surface after amplification by lens L1. The grid projected on the object surface is then imaged by L2 onto G2, where the moiré is formed, and the moiré pattern is recorded by a CCD camera.

Fig. 1 Schematic illustration of a generalized projection moiré system.

2.2 Relationship between phase variation and out-of-plane displacement

The following relationships between origins of coordinate systems can be written:

$$\overrightarrow{OO_1} = T_1 = (e_1, e_2, e_3)(T_{11}, T_{12}, T_{13})^T \tag{1}$$
$$\overrightarrow{OO_2} = T_2 = (e_1, e_2, e_3)(T_{21}, T_{22}, T_{23})^T \tag{2}$$
$T_1$ and $T_2$ can be seen as translation vectors. The basis vectors of the three coordinate systems satisfy:
$$(e_1^1, e_2^1, e_3^1) = (e_1, e_2, e_3)\,R_1 \tag{3}$$
$$(e_1^2, e_2^2, e_3^2) = (e_1, e_2, e_3)\,R_2 \tag{4}$$
where $R_1$ and $R_2$ are unit orthogonal rotation matrices. An arbitrary point with coordinates $(X, Y, Z)^T$ in the world coordinate system OXYZ has the following coordinates in the other two coordinate systems:
$$(X_1, Y_1, Z_1)^T = R_1^T \left[ (X, Y, Z)^T - T_1 \right] \tag{5}$$
$$(X_2, Y_2, Z_2)^T = R_2^T \left[ (X, Y, Z)^T - T_2 \right] \tag{6}$$
Using the traditional pin-hole camera model, it is possible to express the phase of a generic point with respect to the two gratings. The phase of the moiré pattern is determined by subtracting these two phases:
$$\varphi_1(X, Y, Z) = \frac{2\pi f_1}{p_1} \cdot \frac{X_1}{Z_1} - \frac{2\pi f_2}{p_2} \cdot \frac{X_2}{Z_2} \tag{7}$$
where $f_1$ and $f_2$ are the focal lengths, and $p_1$ and $p_2$ are the pitches of gratings G1 and G2, respectively.

Putting an object on the reference plane, the point $(X, Y, Z)^T$ on the reference plane becomes $(X, Y, Z + h(X,Y))^T$ on the surface of the object. In the other two coordinate systems, it becomes:

$$(X_1', Y_1', Z_1')^T = R_1^T \,(X - T_{11},\; Y - T_{12},\; Z + h(X,Y) - T_{13})^T \tag{8}$$
$$(X_2', Y_2', Z_2')^T = R_2^T \,(X - T_{21},\; Y - T_{22},\; Z + h(X,Y) - T_{23})^T \tag{9}$$
The phase of the new moiré pattern is
$$\varphi_2(X, Y, Z, h(X,Y)) = \frac{2\pi f_1}{p_1} \cdot \frac{X_1'}{Z_1'} - \frac{2\pi f_2}{p_2} \cdot \frac{X_2'}{Z_2'} \tag{10}$$
By subtracting φ1 from φ2 it is possible to determine the change of phase caused by the deformation:
$$\Delta\varphi = \varphi_2(X, Y, Z, h(X,Y)) - \varphi_1(X, Y, Z) = \frac{2\pi f_1}{p_1} \left( \frac{X_1'}{Z_1'} - \frac{X_1}{Z_1} \right) - \frac{2\pi f_2}{p_2} \left( \frac{X_2'}{Z_2'} - \frac{X_2}{Z_2} \right) \tag{11}$$
In order to simplify the notation, the matrices $R_1$ and $R_2$ are expressed column-wise as:
$$R_1 = (R_{11}, R_{12}, R_{13}) \tag{12}$$
$$R_2 = (R_{21}, R_{22}, R_{23}) \tag{13}$$
where $R_{ij}$ ($i = 1, 2$; $j = 1, 2, 3$) is a column vector. Replacing $X_1', Z_1', X_2', Z_2'$ with the results of Eqs. (8) and (9), and substituting $R_1, R_2$ in Eq. (11) by Eqs. (12) and (13), we can rewrite Eq. (11) as:
$$\Delta\varphi(X, Y, Z, h(X,Y)) = \frac{2\pi f_1}{p_1} \left[ \frac{(X - T_{11},\, Y - T_{12},\, Z + h(X,Y) - T_{13})\, R_{11}}{(X - T_{11},\, Y - T_{12},\, Z + h(X,Y) - T_{13})\, R_{13}} - \frac{(X - T_{11},\, Y - T_{12},\, Z - T_{13})\, R_{11}}{(X - T_{11},\, Y - T_{12},\, Z - T_{13})\, R_{13}} \right] - \frac{2\pi f_2}{p_2} \left[ \frac{(X - T_{21},\, Y - T_{22},\, Z + h(X,Y) - T_{23})\, R_{21}}{(X - T_{21},\, Y - T_{22},\, Z + h(X,Y) - T_{23})\, R_{23}} - \frac{(X - T_{21},\, Y - T_{22},\, Z - T_{23})\, R_{21}}{(X - T_{21},\, Y - T_{22},\, Z - T_{23})\, R_{23}} \right] \tag{14}$$
Equation (14) is a universal formula expressing the relationship between the phase difference Δφ and the displacement h(X,Y) for a generic optical setup. When the reference plane coincides with the plane Z = 0, the out-of-plane displacement equals the height. It is obvious that the phase–displacement relationship over the 3D measurement space is nonlinear, with the unknown h(X,Y) appearing in both numerator and denominator. The phase variation can be easily calculated via phase-shifting experiments. Once all the parameters in Eq. (14) have been determined, we obtain the relationship between the phase difference and the out-of-plane displacement in the whole measurement space.
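To make the formula concrete, Eq. (14) can be evaluated numerically. The following Python sketch is illustrative only (function names and the way matrices are passed are our choices, not part of the paper); it computes Δφ for a point of the reference plane displaced by h along Z:

```python
import math

def delta_phi(X, Y, h, T1, T2, R1, R2, f1, p1, f2, p2, Z=0.0):
    """Evaluate the phase change of Eq. (14) for a point (X, Y, Z)
    displaced by h along the Z axis.

    T1, T2: translation vectors of the two lens centers (length-3 lists).
    R1, R2: 3x3 rotation matrices given as lists of rows; their columns
            play the role of the vectors R_i1, R_i2, R_i3 of Eqs. (12)-(13).
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def column(R, j):
        return [R[i][j] for i in range(3)]

    def term(T, R, f, p):
        d = [X - T[0], Y - T[1], Z - T[2]]        # point on the reference plane
        dh = [X - T[0], Y - T[1], Z + h - T[2]]   # displaced point
        r1, r3 = column(R, 0), column(R, 2)       # first and third columns
        return 2 * math.pi * f / p * (dot(dh, r1) / dot(dh, r3)
                                      - dot(d, r1) / dot(d, r3))

    # projection-system term minus receiving-system term, as in Eq. (14)
    return term(T1, R1, f1, p1) - term(T2, R2, f2, p2)
```

Two sanity checks follow directly from the formula: Δφ vanishes when h = 0, and when the projection and viewing paths are identical (same lens position, orientation, and f/p ratio) the two terms cancel for any h.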

2.3 Determination of the out-of-plane displacement field

If h(X,Y) is an infinitesimal constant h0, Δφ is proportional to h0. The proportionality coefficient k(X,Y,Z) is the sensitivity coefficient, which depends on many parameters and changes with the spatial coordinates. However, the linear relationship does not always hold: in particular, larger displacements h(X,Y) yield larger changes in the sensitivity k(X,Y,Z), which results in larger measurement errors. Furthermore, subtracting the two phase maps directly determines Δφ in a wrong way. As shown in Fig. 1, an arbitrary point P(X,Y,Z) on the reference plane is captured in the first sequence of phase-shifting pictures at the position P1. After deformation, P(X,Y,Z) moves to P'(X,Y,Z+h(X,Y)), which is captured at the position P2 in the second sequence of phase-shifting pictures. If the two phase maps are subtracted directly, Δφ represents the phase change from P'' to P', whereas what we truly want is the difference between P and P'. The phase difference between P and P'' caused by the mismatching of pixels is thus a main source of error. According to Eq. (14), the phase difference depends not only on the system parameters but also on the coordinates. As a result, the same out-of-plane displacement at different positions will introduce different errors; this is confirmed by the first simulation later in this article. The farther the point is from O4, the larger the error will be. Hence, in order to deal with large out-of-plane displacements, the following algorithm was developed.

First, the phases of the reference plane and of the measured surface, denoted by φ1 and φ2, should be determined from experimental measurements. The coordinates of the lens center (uC, vC) in the recorded image and the pixel dimension c, as well as all the system parameters mentioned above, must also be determined. In order to simplify the relation between the pixel coordinates (u, v) of an arbitrary point on the reference plane and its world coordinates (X, Y), we make the direction O2O4 orthogonal to the reference plane and take the point on the reference plane corresponding to the origin of the pixel coordinates as the origin of the world coordinate system. The relation then reduces to the simple proportionality written in Eqs. (15) and (16), where the proportionality coefficient c is a constant. Otherwise, the expression of c becomes very complicated. For this reason, in the simulations and experiments carried out in this study, the proposed algorithm for determining the height h(X,Y) is simplified by assuming that the optical axis of the sensor is orthogonal to the reference plane.

$$X = cu \tag{15}$$
$$Y = cv \tag{16}$$
Every pixel has constant integer pixel coordinates, so P' and P'', the corresponding points of the same pixel P2, share the same pixel coordinates. According to Eqs. (15) and (16), X and Y are also constant, representing the world coordinates of each point's corresponding point on the reference plane.

Knowing those parameters, it is possible to build a MATLAB model reproducing the experimental setup. We can simulate a translation of the reference plane by a unit distance and calculate the phase map of the reference plane before and after this translation. The inverse of the phase difference equals the sensitivity coefficient. Multiplying the sensitivity coefficient by the measured phase difference yields an initial value h0 of the height to be measured:

$$k_0 = \frac{1}{\left.\varphi_2(X, Y, Z, h(X,Y))\right|_{Z=0,\, h(X,Y)=1} - \left.\varphi_1(X, Y, Z)\right|_{Z=0}} \tag{17}$$
$$h_0 = k_0 \times (\varphi_2 - \varphi_1) \tag{18}$$

In order to acquire the real phase difference between P' and P and calculate the true height of P', we should find the real corresponding point of P' by correcting the coordinates of P''. Since O2O4 and PP' are perpendicular to the reference plane, we can easily derive the iterative formulas of Eqs. (19) and (20).

$$X_{n+1} = X - c \left( \frac{X}{c} - u_C \right) \frac{h_n}{T_{23}} \tag{19}$$
$$Y_{n+1} = Y - c \left( \frac{Y}{c} - v_C \right) \frac{h_n}{T_{23}} \tag{20}$$

The coordinates X and Y in the phase map of the reference plane are iteratively updated, and an updated reference phase is obtained through interpolation. An average sensitivity coefficient k_n is computed at each step until the height converges to its final value h:

$$\varphi_1^{n+1} = \varphi_1(X_{n+1}, Y_{n+1}) \tag{21}$$
$$k_{n+1} = \frac{h_n}{\left.\varphi_2(X_{n+1}, Y_{n+1}, Z, h_n)\right|_{Z=0} - \left.\varphi_1(X, Y, Z)\right|_{Z=0}} \tag{22}$$
$$h_{n+1} = k_{n+1} \times (\varphi_2 - \varphi_1^{n+1}) \tag{23}$$
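The iteration of Eqs. (15)–(23) can be sketched in a few lines. This is a minimal one-row Python sketch under stated assumptions: `phi_model(X, h)` is a hypothetical helper standing in for the simulated system model described above (the reference-plane phase is `phi_model(X, 0)`), and the reference-plane phase in the sensitivity of Eq. (22) is evaluated at the corrected coordinate X_{n+1}, for consistency with Eq. (21).

```python
import math

def refine_height(u, phi2_meas, phi_model, uC, c, T23, n_iter=30, tol=1e-9):
    """Iteratively refine the height at pixel u (Eqs. (15)-(23), one image row).

    phi_model(X, h): simulated phase of a point at world coordinate X raised
        h above the reference plane (the role of the numerical system model).
    phi2_meas: measured phase at pixel u after deformation.
    """
    X = c * u                                      # Eq. (15)
    # initial sensitivity from a simulated unit translation, Eqs. (17)-(18)
    k0 = 1.0 / (phi_model(X, 1.0) - phi_model(X, 0.0))
    h = k0 * (phi2_meas - phi_model(X, 0.0))
    for _ in range(n_iter):
        Xn = X - c * (X / c - uC) * h / T23        # Eq. (19): pixel matching
        phi1_n = phi_model(Xn, 0.0)                # Eq. (21), interpolated map
        k = h / (phi_model(Xn, h) - phi1_n)        # Eq. (22), see note above
        h_new = k * (phi2_meas - phi1_n)           # Eq. (23)
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
    return h
```

For a simulated rigid translation of the reference plane, the loop converges to the imposed height, while the naive single-step estimate k0·(φ2 − φ1) overestimates it for off-axis pixels.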

3. Simulation of contouring measurements

In order to verify the feasibility of the proposed approach, we simulated PM measurements assuming that the reference plane coincides with the plane Z = 0, so that the out-of-plane displacement equals the height. The objects to be contoured are shown in Fig. 2. The first displacement map (Fig. 2(a)) corresponds to a translation of the reference plane along the Z axis. The second object (Fig. 2(b)) contains 12 identical pyramids arranged in three rows of four. The maximum height to be measured is 200 mm in both cases.

Fig. 2 Contour shapes utilized for simulating PM measurements.

The Z2 axis was assumed to be perpendicular to the reference plane XOY, while the angle between Z1 and Z2 is 20°. The system parameters selected for the simulation are listed in Table 1. First, we generated a phase map of the reference plane according to all system parameters, and then the modulated phase map of the object. In real experiments, the parameters may not be as ideal as listed: in particular, the grid lines of the viewing system may not be parallel to the Y axis, so the phase maps would not look exactly like the patterns shown in Figs. 3 and 4. In order to obtain more reliable simulations, a small rotation of 1° about Z2 was introduced when generating the phase maps; however, R2 was still considered a unit matrix when the displacement field was computed. Furthermore, the position of the lens center with respect to the reference plane was artificially modified from the true value uc = 800, vc = 610 to uc = 795, vc = 612.


Table 1. System parameters selected for simulating projection moiré measurements

Fig. 3 Simulation results for the displacement field of Fig. 2(a): (a) Wrapped phase of reference plane; (b) Wrapped phase of shifted plane; (c) Comparison of displacement maps determined with conventional method and iterative algorithm; (d) Comparison of profiles extracted from the 612th row of the image.

Fig. 4 Simulation results for the displacement field of Fig. 2(b): (a) Displacement map determined with conventional approach; (b) Displacement map determined with iterative algorithm; (c) Comparison of profiles extracted from the 200th row of the image.

Figure 3 shows the simulation results for the displacement field of Fig. 2(a). The wrapped phase of the reference plane is presented in Fig. 3(a) while the wrapped phase modulated by the shifted plane is presented in Fig. 3(b). The out-of-plane displacement maps are compared in Fig. 3(c): the surface on the top corresponds to the conventional method while the blue surface beneath is the result of the new algorithm. The profiles compared in Fig. 3(d) indicate that the present method could reduce displacement errors by more than 90%.

The errors entailed by the conventional method derive from pixel mismatching and from neglecting sensitivity variations. Both effects are related to the out-of-plane displacement itself. However, pixel mismatching is also related to: (i) the angle between the optical axis of the lens and the line connecting the lens center with the point on the reference plane; (ii) the frequency of the moiré fringes. In the middle of the moiré pattern the fringe frequency is low, so the phase difference between P and P'' is small; since the sensitivity coefficient does not vary sharply there and the fringe frequency is much higher in the surrounding regions, the height error in the middle is very small compared with those regions. In the region near O4, only the variation of the sensitivity coefficient contributes to the measurement error, which hence becomes minimum. Conversely, the boundaries of the displacement map host much larger errors because mismatched pixels also play an important role there. As the distance from O4 increases, the larger out-of-plane displacements and the higher frequency of the moiré fringes concur to increase the errors on out-of-plane displacements.

Figure 4 shows the simulation results for the pattern of Fig. 2(b). The displacement maps reconstructed with the conventional method and the new approach are shown in Figs. 4(a) and 4(b), respectively. The maximum pyramid heights reconstructed by the two methods are 234 mm and 203 mm, respectively: hence, the present method was significantly more accurate, almost matching the target value of 200 mm. Figure 4(c) compares profiles extracted from the 200th row of the displacement map images. As in the previous case, the pyramids near the edge are affected by the largest measurement errors. For example, the heights computed with the conventional method for a set of four pyramids are 223.5, 210.7, 213.1 and 233.6 mm, respectively. The present algorithm was instead much more accurate, obtaining 198.7, 198.4, 199.6 and 202.5 mm, very close to the target height of 200 mm.

4. Experimental results

4.1 Optical setup configuration

To make it easier to measure the system parameters, we take a wall as the reference plane and set the receiving system perpendicular to the wall, which means that R2 is an identity matrix. The optical setup configuration is shown in Fig. 5. Both the projector and the short-focal-length lens have a complex zoom lens system. We simplify each lens system to a traditional pin-hole camera model, and accordingly the equivalent optical centers as well as the focal lengths must be determined. Only the first lens toward the reference plane matters, because it determines the range of the visual angle while the others only play a role in re-imaging. The most important feature of this simplification is that the visual angle remains unchanged. We can use this feature to determine the positions of O1 and O2: based on the similarity principle, the distance of the optical center from the first lens can be calculated given the widths of the illuminated area AB and of its facula on the first lens, as well as the distance between the reference plane and the first lens.

Fig. 5 Optical set up used in the experimental tests.

4.2 Determination of system parameters

Some of the parameters can be easily acquired, such as T1, T2, c, p2, uc, vc, while others, such as the rotation matrix R1, cannot be measured directly. If the projector is placed facing the reference plane without any rotation about its own optical axis, the R1 matrix can be decomposed into two rotation matrices Q1 and Q2, as shown in Eq. (24).

$$R_1 = Q_1 Q_2 = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & -\sin\beta \\ 0 & \sin\beta & \cos\beta \end{bmatrix} \tag{24}$$
where α is the angle between the Z axis and the projection of O1O3 onto the XOZ plane, and β is the angle between O1O3 and its projection onto the XOZ plane. As shown in Fig. 6, O1M//OZ, MN//OX, O3N//OY, α is ∠MO1N, and β is ∠O3O1N. Since the reference frame of the viewing system is aligned with the global reference frame, R2 is an identity matrix.
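The decomposition of Eq. (24) translates directly into code. The following is a minimal sketch (the function name is ours; angles are in radians):

```python
import math

def rotation_R1(alpha, beta):
    """Build R1 = Q1(alpha) * Q2(beta) as in Eq. (24)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    Q1 = [[ca, 0.0, sa],        # rotation about the Y axis by alpha
          [0.0, 1.0, 0.0],
          [-sa, 0.0, ca]]
    Q2 = [[1.0, 0.0, 0.0],      # rotation about the X axis by beta
          [0.0, cb, -sb],
          [0.0, sb, cb]]
    # matrix product Q1 * Q2
    return [[sum(Q1[i][k] * Q2[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Since Q1 and Q2 are rotations, R1 is automatically a unit orthogonal matrix, as required by Eq. (3).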

Fig. 6 Schematic plot of two involved angles (α = ∠MO1N, β = ∠O3O1N) to determine R1.

$$R_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{25}$$

Since f1 and p1 (and likewise f2 and p2) always appear as the ratios f1/p1 and f2/p2, only these ratios matter. According to geometric similarity it is easy to calculate the equivalent focal lengths, and p1 can be assigned an arbitrary value. The equivalent focal lengths can be expressed as:

$$f_1 = \frac{p_1 \times \overline{O_1 O_3}}{p_{O_3} \times \cos\alpha} \tag{26}$$
$$f_2 = \frac{L_{CD} \times \overline{O_2 O_4}}{L_{AB}} \tag{27}$$
where $p_{O_3}$ is the grating pitch projected at O3, $L_{AB}$ is the width of the illuminated area on the reference plane, and $L_{CD}$ is the width of the image of the illuminated area on grating G2. The system parameters selected for one of the experiments are listed in Table 2.
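As a minimal sketch of Eqs. (26) and (27) (our naming; the numerical values in the comments are illustrative, not the Table 2 entries):

```python
import math

def equivalent_focal_lengths(p1, O1O3, p_O3, alpha, L_CD, O2O4, L_AB):
    """Equivalent focal lengths from the similarity relations (26)-(27).

    O1O3, O2O4: distances from the optical centers to the reference plane
    along each axis; p_O3: projected grating pitch at O3; L_AB, L_CD:
    widths of the illuminated area and of its image on grating G2.
    """
    f1 = p1 * O1O3 / (p_O3 * math.cos(alpha))   # Eq. (26)
    f2 = L_CD * O2O4 / L_AB                     # Eq. (27)
    return f1, f2
```

Because only the ratios f1/p1 and f2/p2 enter the phase formulas, p1 can be set to any convenient value and f1 scales with it.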


Table 2. System parameters selected for a projection moiré experiment

4.3 Results

The area imaged by the CCD sensor in the contouring experiments carried out in this study is 1.4 m × 1.0 m but the actual region of interest considered in the experiments is about 1.1 m × 0.8 m.

In the first experiment, we put a 203.7 mm tall paper-made pyramid on the wall to simulate the presence of a real object. The pyramid was intentionally located in the corner of the field of view in order to fully exploit the potentiality of the proposed algorithm. Figure 7(a) shows the moiré pattern recorded by the CCD sensor. The white point in Fig. 7(a) corresponds to O4, the intersection between the optical axis of the viewing system and the reference plane. The out-of-plane displacement field in the region of interest (limited by a rectangle in Fig. 7(a)) is displayed in Fig. 7(b). Figure 7(c) compares the profiles obtained by the proposed method and the classical approach along a control segment extracted from the 186th row of the image and limited by pixels 770 and 1150. As expected, reconstruction errors were larger for the right side of the pattern. The new algorithm corrected the maximum height from 222.4 mm to 202.7 mm, thus reducing the reconstruction error from 9.18% to 0.49% with respect to the nominal value of 203.7 mm.

Fig. 7 Out-of-plane displacement of a pyramid placed near the edge of field of view: (a) Projection moiré pattern recorded by CCD for a pyramid located near the edge of the field of view; (b) 3D shape reconstructed with the new algorithm; (c) Profile comparison for a control path located on the 186th row of the image.

The pyramid was then placed closer to O4. The shape error for the conventional method became smaller and the measured height of the pyramid was now 214.3 mm. The present algorithm obtained a value of 204.6 mm for the pyramid height, very close to the nominal value of 203.7 mm, thus reducing the reconstruction error from 5.20% to 0.44%. The moiré pattern, 3D reconstructed shape and profile comparison are shown in Fig. 8.

Fig. 8 Out-of-plane displacement of the pyramid: (a) Projection moiré recorded by CCD for a pyramid located closer to O4; (b) 3D shape reconstructed with the new algorithm; (c) Profile comparison for a control path located on the 651st row of the image.

After that, a 161.4 mm tall paper-made circular cone was placed on the reference plane so that the projection of the cone tip onto the reference plane is very close to O4. Measurement errors were smaller than in the previous experiments: the reconstructed cone height was 165.5 mm using the conventional method, and 161.9 mm using the new algorithm: measurement error was hence reduced from 2.54% to 0.31%. The results of this experiment are shown in Fig. 9.

Fig. 9 Out-of-plane displacement of a circular cone: (a) Projection moiré recorded by CCD for a cone with projected tip very close to O4; (b) 3D shape reconstructed with the new algorithm; (c) Profile comparison for a control path located on the 421st row of the image.

Table 3 summarizes the experimental results. It is confirmed that larger heights and greater distances from O4 lead to higher errors on the reconstructed shape. Remarkably, the present method always yields reconstruction errors below 0.5%.


Table 3. Results of real projection moiré measurements

Last, a complex unknown surface was reconstructed using the iterative algorithm. The model is shown in Fig. 10(a): a 103 mm tall paper-made platform with an embossed paper stuck on its upper surface. In order to resolve the details, the distance between the reference plane and the projecting and receiving systems was shortened to 1.6 m. As a result, the area imaged by the CCD sensor in this contouring experiment is reduced to 0.9 m × 0.6 m. The moiré pattern recorded by the CCD sensor is displayed in Fig. 10(b). Figure 10(c) is a close-up shot of the embossed paper. The reconstructed surface of the embossed paper (limited by a white rectangle in Fig. 10(b)) is shown in Fig. 10(d). Figure 10(e) shows the 3D surface of the small “n” marked with a tiny orange square in the upper left corner of Figs. 10(c) and 10(d).

Fig. 10 Out-of-plane displacement of an unknown surface: (a) The paper-made platform with an embossed paper stuck on its upper surface; (b) Moiré pattern recorded by the CCD for the unknown surface; (c) Close-up shot of the embossed paper; (d) Displacement map of the embossed paper reconstructed with the new algorithm; (e) Surface reconstructed with the new algorithm for the small region limited by a square in (c) and (d).

5. Conclusion

This paper presented a general model for projection moiré measurements and derived a new formula relating phase variation and surface height. The new model applies to different optical setup configurations and provides a new way to calculate the sensitivity coefficient, instead of calibration through experiments, which requires expensive high-precision motorized translation stages. We then proposed an iterative algorithm that eliminates the errors caused by pixel mismatching and by the variable sensitivity coefficient, in order to deal with large surface heights. Projection moiré simulations and real experiments confirmed the feasibility and validity of the proposed method, which made it possible to reduce measurement errors by up to 85%. Remarkably, reconstruction errors never exceeded 0.5%. The proposed method is valid for objects with continuous surfaces; the reconstruction of step-like surfaces remains an open problem.

Funding

National Natural Science Foundation of China (NSFC) (No. 11372182).

References and links

1. D. M. Meadows, W. O. Johnson, and J. B. Allen, “Generation of surface contours by moiré patterns,” Appl. Opt. 9(4), 942–947 (1970). [CrossRef]   [PubMed]  

2. H. Takasaki, “Moiré topography,” Appl. Opt. 9(6), 1467–1472 (1970). [CrossRef]   [PubMed]  

3. G. A. Fleming, S. M. Bartram, M. R. Waszak, and L. N. Jenkins, “Projection moiré interferometry measurements of micro air vehicle wings,” Proc. SPIE 4448, 90–101 (2001). [CrossRef]  

4. J. J. J. Dirckx and W. F. Decraemer, “Optoelectronic moire projector for real-time shape and deformation studies of the tympanic membrane,” J. Biomed. Opt. 2(2), 176–185 (1997). [CrossRef]   [PubMed]  

5. T. Laulund, J. O. Søjbjerg, and E. Hørlyck, “Moiré topography in school screening for structural scoliosis,” Acta Orthop. Scand. 53(5), 765–768 (1982). [CrossRef]   [PubMed]  

6. M. Ramulu, P. Labossiere, and T. Greenwell, “Elastic–plastic stress/strain response of friction stir-welded titanium butt joints using moiré interferometry,” Opt. Lasers Eng. 48(3), 385–392 (2010). [CrossRef]  

7. K. S. Lee, C. J. Tang, H. C. Chen, and C. C. Lee, “Measurement of stress in aluminum film coated on a flexible substrate by the shadow moiré method,” Appl. Opt. 47(13), C315–C318 (2008). [CrossRef]   [PubMed]  

8. J. A. N. Buytaert and J. J. J. Dirckx, “Design considerations in projection phase-shift moiré topography based on theoretical analysis of fringe formation,” J. Opt. Soc. Am. A 24(7), 2003–2013 (2007). [CrossRef]   [PubMed]  

9. J. A. N. Buytaert and J. J. J. Dirckx, “Moiré profilometry using liquid crystals for projection and demodulation,” Opt. Express 16(1), 179–193 (2008). [CrossRef]   [PubMed]  

10. Y.-B. Choi and S.-W. Kim, “Phase-shifting grating projection moiré topography,” Opt. Eng. 37(3), 1005–1010 (1998). [CrossRef]  

11. M.-S. Jeong and S.-W. Kim, “Phase-shifting projection moiré for out-of-plane displacement measurement,” Proc. SPIE 4317, 170–179 (2001). [CrossRef]  

12. A. Boccaccio, F. Martino, and C. Pappalettere, “A novel moiré-based optical scanning head for high-precision contouring,” Int. J. Adv. Manuf. Technol. 80(1-4), 47–63 (2015). [CrossRef]  

13. E. Cosola, K. Genovese, L. Lamberti, and C. Pappalettere, “A general framework for identification of hyper-elastic membranes with moiré techniques and multi-point simulated annealing,” Int. J. Solids Struct. 45(24), 6074–6099 (2008). [CrossRef]  

14. C. A. Sciammarella, L. Lamberti, and F. M. Sciammarella, “High accuracy contouring using projection moiré,” Opt. Eng. 44(9), 093605 (2005). [CrossRef]  

15. C. A. Sciammarella, L. Lamberti, and A. Boccaccio, “A general model for moiré contouring. Part I: Theory,” Opt. Eng. 47(3), 033605 (2008). [CrossRef]  

16. C. A. Sciammarella, L. Lamberti, A. Boccaccio, E. Cosola, and D. Posa, “A general model for moiré contouring. Part II: Applications,” Opt. Eng. 47(3), 033606 (2008). [CrossRef]  

17. J. Yao, Y. Tang, and J. Chen, “Three-dimensional shape measurement with an arbitrarily arranged projection moiré system,” Opt. Lett. 41(4), 717–720 (2016). [CrossRef]   [PubMed]  
