Optica Publishing Group

Adaptive imaging system with spatial light modulator for robust shape measurement of partially specular objects

Open Access

Abstract

In imaging systems, when specular surfaces responding sensitively to varying illumination conditions are imaged on groups of CCD pixels using imaging optics, the obtained image usually suffers from pixel saturation, resulting in smearing or blooming phenomena. These problems are serious obstacles when applying structured light-based optical profiling methods to the shape measurement of general objects with partially specular surfaces. Therefore, this paper combines a phase-based profiling system with an adaptive spatial light modulator in the imaging part for measuring the three-dimensional shapes of objects with an advanced dynamic range. The use of a spatial light modulator in front of a CCD camera prevents the image sensor from being saturated, as the pixel transmittance is controlled by monitoring the input images and providing modulator feedback signals over time and space. When using the proposed system, since the projected fringes are effectively imaged on the CCD without any pixel saturation, phase information according to the object’s shape can be correctly extracted from non-saturated images. The configuration of the proposed system and the transmittance control scheme are explained in detail, and the performance is verified through a series of experiments, in which phase information was successfully extracted from areas that are not normally measurable due to saturation. Based on the results, the proposed shape measurement system showed a more advanced adaptive dynamic range when compared with a conventional system.

©2010 Optical Society of America

1. Introduction

Measuring the three-dimensional shape of a specular surface is one of the most challenging problems in three-dimensional metrology, and attempts have already been made to solve this problem using photometry [1], laser point scanning [2], and deflectometry [3]. Photometry-based methods use a surface reflection model, where the shape of an object is determined from the image intensity based on known or assumed surface reflection characteristics and illumination conditions. Yet, the difficulty involved in accurately measuring the surface reflection characteristics invariably results in the use of assumptions, thereby limiting the application of photometry-based methods. In contrast, a solder joint inspection system based on laser point scanning was proposed in reference [2]. Here, a laser point is used to scan a specular object, while a parabolic mirror gathers the light reflected from the surface. The object shape is then measured by interpreting the optical path of the laser spot incident to the sensor unit, where the measurement principle relies on the following geometric relationship: the reflecting angle is always the same as the incident angle. However, while providing accurate results for specular object measurements, the point scanning process is time-consuming, and the results are only accurate when the basic reflection principle holds true; thus, if the incident angle and reflecting angle differ from each other, the results can be inaccurate. Deflectometry also uses the same reflection principle to monitor a projected pattern through specular objects [3].

Figure 1 shows a well-known surface reflection model proposed by Nayar [4]. In this model, the reflected light has three components: a specular spike, specular lobe, and diffuse lobe, where the diffuse lobe is the result of internal reflection, while the specular spike and specular lobe are the result of surface reflection. If the surface is perfectly specular, no diffuse lobe is observed, yet as the surface property changes from specular to non-specular, the specular spike rapidly diminishes and the diffuse lobe becomes stronger. One interesting finding from this model is that the directions of the specular spike and the maximum specular lobe do not coincide, which means that as a surface becomes less specular, i.e. the surface is only partially specular, the basic reflection principle may not hold true, making point scanning-based methods produce erroneous results. The recent development of deflectometry is based on the interpretation of reflected images of calibrated objects. However, since this method is also based on the reflection principle, it is only applicable to surfaces with a very small deflection.

Fig. 1 Reflectance model. When light is incident to a surface, the light is reflected as a diffuse lobe and specular lobe, and the difference between the diffuse and specular lobes is determined by the smoothness of the reflecting surface.

Therefore, applying these methods, originally developed for perfectly specular surfaces, to partially specular surfaces cannot provide accurate results. Meanwhile, methods developed for diffuse surfaces also cannot be directly applied to partially specular surfaces, due to the saturation problem resulting from the fixed dynamic range of digital imaging systems using CCDs (charge coupled devices) or CMOS (complementary metal oxide semiconductor) sensors. For a partially specular surface, the specular spike is weak, yet the specular lobe is still strong. Hence, if the specular lobe infiltrates the entrance pupil of an image sensor, the corresponding point becomes saturated. In addition, the neighboring pixels are also saturated due to the blooming phenomenon [5]. In addition to saturation, the blooming effect frequently arises with CCDs exposed to high light energy, while CMOS imagers suffer only from saturation. This paper focuses on CCD imaging systems widely used in high-level photographic, scientific, and industrial applications that demand advanced image quality, as the proposed system provides a greater benefit for such CCD applications than for CMOS imaging applications. Figure 2 shows examples of this problem when structured light illuminates partially specular objects for three-dimensional optical profiling: the projected fringes are distorted by blooming and saturation. No valid information is extracted from the pixels affected by saturation and blooming, and the related object areas ultimately lose their original three-dimensional shape information. To acquire a reliable full-surface profile, it is necessary to restore valid information from saturated pixels by adjusting the irradiance.

Fig. 2 Fringe image distortion due to saturation and blooming. As shown inside the red-dashed-circle, the saturation and blooming cause a discontinuous and shrunken fringe image.

For this purpose, there have been several approaches. An anti-blooming camera was already proposed by Lee et al. [6], where barriers between pixels prevent the overflow of superfluous charges. Yet, since the relatively larger cell size of the anti-blooming camera causes a decrease in the image resolution, this cannot solve the saturation problem. Nayar et al. [7] suggested a method of attenuating the irradiance above the saturation level using a spatial light modulator [8]. An LC-SLM (liquid crystal spatial light modulator) is located in the middle of the light path between the object surface and the image sensor. This reduces the transmittance of the modulator pixels corresponding to the saturated pixels in the CCD, thereby achieving the desired irradiance below the camera saturation level. Another similar imaging system using a micro mirror device was also recently proposed [9], where the same basic idea is expanded to applications of high dynamic range imaging, feature detection, and object recognition.

Accordingly, the main idea of this paper is that the abovementioned preliminary concept can be combined with a well-developed shape measurement method to resolve the dynamic range problems of imaging devices in a three-dimensional shape measurement system. The phase-based profiling method [10] is a well-known optical method that can produce highly accurate and fast shape measurements. Therefore, this paper combines an LC-SLM with a phase measuring profilometer as an adaptive imaging system that is robust for specular object surface conditions. As a result, the proposed system extends the application of the phase-based optical profiling method to partially specular objects, which are otherwise very difficult to measure accurately with conventional imaging systems.

2. Proposed measurement method

2.1. Proposed method

Although phase-based profilometry, such as the moiré or phase-measuring technique, is known to produce highly accurate and fast micro-scale measurements, it cannot be applied to partially specular objects, as the saturation of the imaging system by a specular surface causes distorted fringe patterns. Thus, to resolve this problem, a phase-based profilometry system combined with an adaptive dynamic range using an LC-SLM is proposed. The use of an LC-SLM between the object scene and the camera adjusts the attenuation level of the incoming light to limit the intensity value to the proper dynamic range.

For general-purpose photography with a wide dynamic range, a technique called High Dynamic Range (HDR) photography has been researched [11]. Its principle is based on a set of multiple images captured under different exposure conditions and a merging algorithm that combines them into a high-contrast image. Although the HDR technique can be used for robust shape measurements, it has some disadvantages: 1) a predefined number of images should be captured with a specified variation of exposure; 2) comparing all the obtained images to produce a high-quality image is time consuming; and 3) for the image comparison process, all images must be stored in memory.

However, with the proposed method using a controllable SLM, the local irradiance adjustment on the CCD can be performed as an adaptive process with a spatio-temporal feedback loop, unlike HDR methods, and the adjustment algorithms can be implemented more easily.

Figure 3 shows a schematic diagram of the proposed system. Fringe patterns were projected onto the object using a grating and projection lens, L3. The projected fringes and object scene were then delivered to the camera through imaging lenses, L4 and L5. The LC-SLM was placed at the conjugate plane of the CCD plane, creating a one-to-one relationship between the point pairs of the SLM plane and the CCD plane. When saturated pixels were detected in the obtained image, the SLM transmittance at the corresponding points was reduced. If the attenuated irradiance decreased too much, it was increased at the next transmittance step of the LC-SLM. The reduction and augmentation of the transmittance were repeated until the initially saturated irradiance reached the desired level. This procedure is explained later in greater detail.

Fig. 3 Schematic diagram of the proposed system, consisting of fringe projection part and imaging part with LC-SLM. L3 delivers a grating fringe image onto the object surface, and the scene of the object surface with the fringe is initially imaged using the LC-SLM. The image then reaches the camera through L5. After attenuation using the LC-SLM, the intensities of the saturated CCD pixels are below the saturation level of the CCD pixels.

For an exact correspondence between the SLM and CCD planes, the LC-SLM was positioned at the exact focus of L4, where the object scene was first imaged. However, when the LC-SLM was located at this first-imaged position, the physical boundary lines between the LC-SLM pixels were also imaged at the camera. Thus, to avoid this problem, the LC-SLM was located slightly off the exact focus. A more detailed description of the proposed system is presented in the following subsections.

2.2 Phase-based profiling method

Figure 4 shows how fringes are distorted due to the object shape. Without any objects, fringes were projected onto the reference plane and a regular fringe image was obtained using the camera, as shown in Fig. 4(a). However, when an object was placed on the reference plane, as shown in Fig. 4(b), the fringe image was distorted and the pitch of the fringe was no longer regular.

Fig. 4 Fringe changes without and with an object. (a) With no object, the fringe was projected on the plane as parallel and regular lines. This was called the reference fringe. (b) With an object, the projected fringe on the surface was distorted.

The fringe distortion can be expressed using the phase change of a sinusoidal pattern. From the geometry shown in Fig. 5, the fringe shift, u, in the reference plane due to an object at point P was calculated from the height at P and the projection and imaging angles as follows:

u = h(tan θp + tan θc)    (1)

where h is the height at point P, and θp and θc represent the projection and imaging angle, respectively. To convert the fringe shift u to a phase change, Eq. (2) was used as follows:

φ = (2π/λ)u    (2)

where λ is the fringe pitch in the reference plane, and φ represents the phase change. Therefore, the fringe image I(x, y) captured by the camera was expressed as

Fig. 5 Fringe shift due to object shape. Using this geometrical relationship, the three-dimensional shape of an object can be measured from the phase information.

I(x, y) = a(x, y) + b(x, y) cos{(2π/λx)x + (2π/λy)y + φ(x, y)}.    (3)

In Eq. (3), x and y denote the image coordinates, λx and λy are the pitch of the fringe in the image plane in the x and y direction, respectively, and a and b represent the effect of the background illumination and surface reflectance, respectively.

After obtaining the phase change at each point, the height distribution was calculated using Eqs. (1) and (2) as follows:

h = [λ / (2π(tan θp + tan θc))] φ = kφ    (4)

where k is the coefficient determined by θp, θc, and λ.

If a and b are uniform throughout the object surface, φ can be calculated using the neighboring pixels; however, this is usually not the case. Instead, φ can be obtained by shifting the projected fringes three or more times [5]. In this paper, the four-frame technique was applied, where four phase-shifted images were used to calculate φ with the following phase shifting:

In(x, y) = a(x, y) + b(x, y) cos{(2π/λx)x + (2π/λy)y + φ(x, y) + (π/2)n},  n = 0, 1, 2, 3.    (5)

From the four images of Eq. (5), φ was calculated as

φ(x, y) = tan−1[(I4(x, y) − I2(x, y)) / (I1(x, y) − I3(x, y))] − ((2π/λx)x + (2π/λy)y) + 2Nπ = φp(x, y) + 2Nπ,    (6)

where N is an integer and φp(x, y) is the principal phase. Equation (6) shows that, after calculating the phase from the four phase-shifted images, the reference phase was subtracted. The 2Nπ term represents the 2π ambiguity. By using the above relationship, the object shape was obtained from the four fringe patterns. Since the fringe shift was encoded in the phase change, this method provided a high resolution in determining the fringe shift.
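As an illustration, the four-bucket phase extraction of Eq. (6) and the height conversion of Eq. (4) can be sketched in a few lines. This is a minimal numpy sketch; the function names and test values are illustrative, not from the paper, and the reference-phase subtraction and 2Nπ unwrapping are omitted.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-bucket phase extraction, as in Eq. (6):
    phi_p = atan2(I4 - I2, I1 - I3), for images shifted by 0, pi/2, pi, 3pi/2.
    atan2 keeps the correct quadrant and tolerates I1 == I3."""
    return np.arctan2(I4 - I2, I1 - I3)

def height_from_phase(phi, pitch, theta_p, theta_c):
    """Height conversion, as in Eq. (4): h = k * phi with
    k = pitch / (2*pi*(tan(theta_p) + tan(theta_c)))."""
    k = pitch / (2.0 * np.pi * (np.tan(theta_p) + np.tan(theta_c)))
    return k * phi

# Synthetic check: generate the four shifted intensities of Eq. (5)
# at one pixel and recover the phase.
a, b, phi = 1.0, 0.5, 0.7
I1, I2, I3, I4 = (a + b * np.cos(phi + 0.5 * np.pi * n) for n in range(4))
recovered = wrapped_phase(I1, I2, I3, I4)
```

Both functions work element-wise, so they can be applied directly to whole camera images as 2-D arrays.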

2.3 Attenuation of saturated pixels

To prevent fringe distortion due to saturation and blooming of the CCD pixels, an effective incoming-light attenuation method is needed. Figure 6 shows the basic concept of controlling the attenuator proposed by Nayar [7]. The intensity value (It) is fed back to control the transmittance (Tt+1) of the corresponding pixel in the SLM. Although the nonlinear response of SLMs has recently been studied [12], nonlinearity compensation is not addressed in this paper. Linear feedback controllers work properly as long as the response between the applied intensity value and the controlled transmittance of the SLM is monotonically increasing or decreasing. If the nonlinear property were analyzed in depth and its inverse transformation calibrated, a specially designed nonlinear controller might improve the performance of the proposed system.

Fig. 6 Adaptive high dynamic range method, suggested by Nayar et al. [7], which overcomes the camera saturation problem by attenuating the excessive light intensity using the LC-SLM.

To adjust the LC-SLM pixels, a correspondence between the camera pixels and the LC-SLM pixels has to be obtained first. It is already known that when a plane is imaged by a projective camera, there is a homography between the image captured by the camera and the image plane [13]. Thus, in the proposed system, since the LC-SLM plane is imaged by the camera, their relationship was expressed by a homography, as in Eq. (7),

u = Hl    (7)
where l and u represent the homogeneous coordinates of a pair of corresponding points in the camera plane and the SLM plane, respectively, and H is the homography between them.

The homography matrix H was calculated using the Direct Linear Transformation (DLT) algorithm [13]. For the given camera pixel coordinates (l, m), the corresponding SLM pixel coordinates (u, v) were calculated as follows:

u = (h11l + h12m + h13) / (h31l + h32m + h33),  v = (h21l + h22m + h23) / (h31l + h32m + h33)    (8)
where

u = (wu, wv, w)T,  H = [h11 h12 h13; h21 h22 h23; h31 h32 h33],  l = (l, m, 1)T.    (9)
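The DLT estimate of H and the pixel mapping of Eq. (8) can be sketched as follows. This is a numpy-based sketch under the conventions of Eqs. (7)-(9); the function names and point coordinates are illustrative, not from the paper.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H (Eq. 7) from >= 4 point pairs using the
    Direct Linear Transformation: each pair contributes two rows of a matrix A
    such that A h = 0; h is the right singular vector of A with the smallest
    singular value."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so that h33 = 1

def map_camera_to_slm(H, l, m):
    """Apply Eq. (8): homogeneous multiply, then divide by the scale w."""
    wu, wv, w = H @ np.array([l, m, 1.0])
    return wu / w, wv / w
```

With noiseless calibration points, four non-degenerate pairs determine H exactly; in practice more pairs are used and the SVD gives the least-squares solution.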

Based on the calibrated pixel correspondence, the attenuation was performed using the following relationships. Denoting the radiance at point (x, y, z) as L(x, y, z) and the transmittance at the corresponding point (u, v) in the SLM as T(u, v), the irradiance at point (l, m) of the CCD pixel was obtained using

E(l, m) = T(u, v)L(x, y, z),    (10)

and the pixel intensity value I(l, m) was obtained by multiplying the camera gain λ with the irradiance as follows:

I(l, m) = λE(l, m).    (11)

When the irradiance E(l, m) exceeded the saturation level of a CCD pixel, the transmittance of the corresponding SLM pixel, T(u, v), was controlled to make I(l, m) reside within the desired range. The detailed steps are as follows:

  • Step 1) A search was made for saturated pixels among the camera pixels.
  • Step 2) The transmittance of the corresponding pixels in the SLM plane was attenuated as follows:

    Tt+1 = αTt

where α was an experimentally determined constant.
  • Step 3) If E(l, m) after the attenuation was lower than the threshold value Emin, the transmittance was increased again:

    Tt+1 = Tt + β(Tmax − Tt)

where β was an experimentally determined constant.

  • Step 4) Steps 2 and 3 were repeated until the initially saturated irradiance E(l, m) reached a level below the saturation level Esat and above the minimum desired level Emin.
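The four steps above can be sketched as a per-pixel feedback loop, assuming the linear model of Eqs. (10)-(11). The gains and thresholds below are illustrative placeholders, not the paper's experimentally determined values.

```python
import numpy as np

# Illustrative constants (not the paper's values).
ALPHA, BETA = 0.5, 0.25       # attenuation / recovery gains
T_MAX = 1.0                   # maximum SLM transmittance
E_SAT, E_MIN = 250.0, 100.0   # saturation level and minimum desired level

def update_transmittance(T, E):
    """One iteration of Steps 1-3 over per-pixel maps T (transmittance)
    and E (measured irradiance): attenuate saturated pixels, then
    raise over-attenuated ones back toward T_MAX."""
    T = T.copy()
    T[E >= E_SAT] *= ALPHA                   # Steps 1-2: attenuate saturated pixels
    dark = E < E_MIN
    T[dark] += BETA * (T_MAX - T[dark])      # Step 3: increase again if too dark
    return np.clip(T, 0.0, T_MAX)

# Step 4: iterate until the bright (specular) pixel settles in [E_MIN, E_SAT).
L = np.array([1000.0, 200.0])   # scene radiance: specular and diffuse pixel
T = np.ones_like(L)
for _ in range(20):
    T = update_transmittance(T, T * L)      # E = T * L per Eq. (10)
```

With these sample numbers the specular pixel converges to an attenuated irradiance inside the desired window, while the diffuse pixel is left untouched.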

2.5 Fringe projection part design

The effective projection of fringe images on the object surface requires uniform and sufficiently bright illumination. In particular, a high-luminance light source is essential, as the attenuator, which is composed of the LC-SLM and two polarizers, has a very low transmittance. The LC-SLM used in this paper was a translucent spatial light modulator, LC 2002, made by Holoeye Photonics, while a 24 V / 250 W halogen lamp made by OSRAM was used as the light source. Although the lamp provided a high luminance, it needed to be treated carefully due to its strong heat emission. Thus, to prevent potential heat damage to the projection system by the halogen lamp, the rays from the lamp not going to the grating were blocked by a two-lens illumination system [14], as shown in Fig. 7. In this illumination system, L1 is located at the conjugate of the grating and a diaphragm D1 is located just behind L1. Therefore, unnecessary rays can be easily blocked by adjusting the diameter of the diaphragm D1.

Fig. 7 Two-lens illumination system [14]. Light is transmitted through lenses, L1 and L2. Lens L3 delivers the fringe image of the grating onto the object surface. By changing the area of the aperture stop D1, the rest of the light is blocked, except for that entering the grating.

2.6 Removal of unwanted grating pattern based on image contrast

The SLM consists of a two-dimensional square grid of pixels, which is able to adjust the light transmittance pixel by pixel based on its polarization change characteristics. By adjusting the transmittance of each pixel individually, a structured transmittance pattern can be constructed in the SLM plane. The physical construction of the SLM leads to a small obscuration along the edges of every pixel with a fixed transmittance of zero. This obscuration may be considered as an unwanted grating pattern that degrades the measurement accuracy. When the SLM is positioned at the focal point of the imaging lens system, the physical pixel boundary of the SLM obstructs the incoming light bundle, as shown in Fig. 8. As a result, some camera pixels receive no information about the target object. To minimize this unwanted effect, several practical approaches can be considered.

Fig. 8 Unwanted grating pattern due to the SLM pixel boundary. The window of SLMs consists of liquid crystal pixels and their physical boundary. The pixel boundary prevents the incoming light from passing through itself. (a) SLM structure. Gray regions are individual pixels allowing the transmission of incoming light and the black lines are physical pixel boundary with a low transmittance. (b) Unwanted grating pattern generation. The incoming light is partially blocked by the SLM pixel boundary while passing through the SLM.

First, spatio-temporal averaging of the unwanted grating pattern is one solution, based on averaging out the pattern by applying physical vibration directly to the SLM. By applying external oscillations with a frequency higher than the camera frame rate to the fringe generating device (SLM) in a diagonal direction relative to the fringe, it is theoretically possible to average out the unwanted grating pattern in the final camera image. However, this high-frequency vibration can potentially result in hardware failure, such as the SLM breaking down.

Second, band-pass frequency filtering is another way of dealing with this problem. When noisy signals within certain frequency bands are included in an image, frequency filtering can be used to identify and restore the original signal information. However, the SLM pixel boundary physically blocks the true image signal and acts as a shadow; it is not additive noise superimposed on the original signal, and the shadowed regions carry no image information to restore, so general filtering would not be helpful.

Third, intentional deviation of the grating (SLM) from the focal plane can reduce the black matrix phenomenon, which is the approach proposed in this paper. The basic concept is presented in Fig. 9. The unwanted grating pattern gradually disappears as the SLM is placed further from the lens focal plane within a certain range. However, this also decreases the image contrast, so locating the SLM further from the focus does not always mean a better image quality. Therefore, to find a compromise between black matrix removal and image quality, a series of experiments was performed using the experimental set-up shown in Fig. 10 to identify the best position for the SLM.

Fig. 9 Imaged unwanted grating pattern and its removal by deviation of the SLM from the focal plane. (a) Imaged unwanted grating pattern. Light coming from A is blocked by the SLM pixel boundary when the SLM is positioned at the focus of the two lenses. However, light from B is first imaged on the SLM pixel and passes through the pixel. Finally, light from B is imaged on the camera. (b) Removal of the unwanted grating pattern by deviation of the SLM from the focal plane. As the SLM is positioned off-focus of the two lenses, some of the light bundle coming from A passes through the SLM pixels. Light bundle 2 in dark gray is still blocked by the SLM pixel boundary, yet light bundle 1 in light gray passes through the SLM and reaches the CCD image plane.

Fig. 10 Diagram of experimental set-up for determining optimal position of SLM. When the SLM was moved in the S direction, the unwanted grating pattern became weaker. At the same time, the image contrast initially decreased and then increased again (second highest visibility position).

As shown in Fig. 11, the unwanted grating appeared in the camera image when the SLM was located at the exact lens focus (s = 0). When the SLM was moved away from the exact focal position (s = 3), the image became blurred and the unwanted grating became weaker. When the SLM was moved even further from the exact focal position (s = 6), the unwanted grating became hardly visible, yet the image contrast became stronger again, as shown in Fig. 11. This means that the image quality is also influenced by the SLM position, so it is important to consider both the unwanted grating and the image contrast simultaneously to determine the best SLM position.

Fig. 11 Fringe image contrast according to SLM displacement. The exact focal position, s = 0, produced the highest contrast, yet the unwanted grating due to the SLM pixel boundary was also strong. When the SLM was moved 3 mm, the unwanted grating became invisible and the image contrast became very low. When the SLM was moved 6 mm, the image contrast became stronger and the unwanted grating remained invisible. When the SLM was moved 9 mm, the image contrast became weaker again. (a) s = 0 (mm), the highest contrast. (b) s = 3 (mm), the lowest contrast. (c) s = 6 (mm), second highest contrast. (d) s = 9 (mm), lower contrast. (e) visibility variation by SLM displacement in direction of s.

From this phenomenon, it was possible to locate the optimal position of the SLM, which made the unwanted grating pattern invisible and also yielded a sufficiently high contrast. To quantify how the unwanted grating pattern and the image contrast behave according to the position of the SLM, a visibility index (contrast variation) was applied, which measured the contrast of a time-variant fringe image, as shown in the following equation:

V(x, y) = (1/2)√[{I4(x, y) − I2(x, y)}² + {I1(x, y) − I3(x, y)}²]

The highest visibility occurred when the SLM was located at the lens focus. As the SLM moved, the visibility decreased and then started to increase beyond a certain position. Thus, the second highest visibility, as shown in Fig. 11, was identified as the optimal position for the SLM, as it eliminated the unwanted grating, while maintaining a high image quality.
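Under the fringe model of Eq. (5), this visibility index reduces to the modulation amplitude b at each pixel, which is why it tracks the fringe contrast. A minimal numpy sketch, assuming the reading of the index as V = (1/2)√{(I4 − I2)² + (I1 − I3)²}:

```python
import numpy as np

def visibility(I1, I2, I3, I4):
    """Visibility index from four pi/2-shifted fringe images:
    V = 0.5 * sqrt((I4 - I2)**2 + (I1 - I3)**2).
    Since I4 - I2 = 2b*sin(phi) and I1 - I3 = 2b*cos(phi),
    this equals the modulation amplitude b at each pixel."""
    return 0.5 * np.sqrt((I4 - I2) ** 2 + (I1 - I3) ** 2)
```

In practice the index is averaged over the image for each SLM displacement s, and the displacement giving the second highest visibility is selected.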

3. Experiments and results

Figure 12 shows the proposed hardware system that was used to measure the three-dimensional shapes of partially specular objects. A series of experiments was conducted using three basic geometric objects and three different solder joints on a PCB that exhibited partially specular object characteristics. The results are presented below.

Fig. 12 Experimental set-up, including fringe projection part with halogen lamp and imaging part with LC-SLM.

First, the three basic geometric objects were tested: convex cylinder, concave cylinder, and sphere shapes. For these objects, the shape reconstruction results are shown in Fig. 13, including the original fringe images with saturation, the adaptive fringe images based on SLM modulation, the SLM masks for the adaptive dynamic range, and the final 3D reconstruction images. As the tested convex, concave, and sphere-shaped objects had gradually varying surface gradients and metallurgical surfaces with partially specular characteristics, image areas exceeding the dynamic range of the CCD were easily observed in the original fringe images.

Fig. 13 Experimental results for 3 geometric objects. In the original fringe images without adaptive attenuation (2nd column), region-wide saturation occurred. When applying the SLM mask (4th column), fringes were obtained in the originally saturated region, as shown in the 3rd column. When using a series of adaptive fringe images, the object shapes were correctly reconstructed, as shown in the 5th column.

These areas were then detected using the proposed algorithms, and SLM masks were created to achieve an adaptive dynamic range for the system. The final adaptive fringe images resulting from the SLM modulation showed improved fringe patterns within the dynamic range of the proposed system.

Second, to demonstrate the efficiency of the proposed system for real measurement applications, three kinds of electronic components were measured, and the test results are presented in Fig. 14. The first object was a chip electronic component with a shiny steel part and solder joint; the second object was a different chip component with four solder joints at the vertices; and the third object was the inside buttons used for cell-phone key-pads. In the figure, the 1st column shows the measurement area, while the 2nd and 3rd columns compare the fringe images with and without attenuation of the incoming light by the SLM.

Fig. 14 Experimental results for 3 objects. 1st column: object images without fringe, 2nd column: distorted fringe images due to saturation and no attenuation, 3rd column: adjusted images based on LC-SLM attenuation. After the attenuation procedure, the fringes appear effectively.

In the last two columns, the shiny regions due to specular reflection are encircled by dashed lines. The results show that the fringe patterns, which were not effectively imaged without adjustment, were well imaged after using the proposed system to adjust the transmittance. Figure 15 shows the results of the phase calculation for the third object. Without adjusting the transmittance, the correct phase information could not be extracted from the saturated area (Fig. 15(a)), however, after adjusting the transmittance, the phase information for the saturated area was successfully extracted (Fig. 15(b)), resulting in successful shape measurements.

Fig. 15 Three-dimensional depth information for the 3rd measurement object based on analyzing fringe images. (a) The red-dashed-circle part could not be analyzed due to the absence of a fringe image, as seen in the 2nd column, 3rd row in Fig. 14. (b) When the LC-SLM attenuated the pixels corresponding to the saturated parts, the fringes were well imaged and correct three-dimensional depth information was obtained.

4. Conclusion

The surface profiles of partially specular surfaces are not easy to measure, as measurement methods developed for specular objects or non-specular objects cannot be directly used. When measurement methods designed for specular objects are applied, inaccurate results can be produced because of the difference between the direction of the specular spike and the direction of the maximum specular lobe. Meanwhile, measurement methods designed for non-specular objects cannot be directly applied because of blooming or smearing problems due to the fixed dynamic range of a CCD exposed to the relatively strong reflection from partially specular surfaces. Therefore, this paper combined phase-based profilometry with an imaging system with a controllable dynamic range to measure the three-dimensional shapes of partially specular objects. The proposed measurement system, based on n-bucket phase-shifted images, resolves the pixel saturation problems by adjusting the transmittance level of the LC-SLM located at the conjugate position of the CCD plane. Experimental results showed that the fringe patterns projected onto a shiny region were well imaged by the proposed system, and phase information was successfully extracted. Therefore, the proposed system could be very useful in various areas, such as the semiconductor and electronics industries, where partially specular objects cannot yet be measured accurately.

Acknowledgements

This research was supported by Kyungpook National University Research Fund, 2009.

References and links

1. G. Healey and T. Binford, “Local shape from specularity,” Int. J. Comput. Vis. 42, 62–86 (1988).

2. Y. Ryu and H. Cho, “New Optical Measuring System for Solder Joint Inspection,” Opt. Lasers Eng. 26(6), 487–514 (1997).

3. M. Yamamoto, et al., “Surface profile measurement of specular objects by grating projection method,” Proc. SPIE 4567, 48–55 (2002).

4. S. Nayar, K. Ikeuchi, and T. Kanade, “Surface Reflection: Physical and Geometrical Perspectives,” IEEE Trans. Pattern Anal. Mach. Intell. 13(7), 611–634 (1991).

5. K. Gasvik, Optical Metrology (John Wiley & Sons, 2002).

6. T. Lee and H. Erhardt, “Charge-coupled imager with dual gate anti-blooming structure,” U.S. patent 4,975,777 (1990).

7. S. Nayar and V. Branzoi, “Adaptive Dynamic Range Imaging: Optical Control of Pixel Exposures Over Space and Time,” in Proceedings of IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, Nice, France, 2003), pp. 1168–1175.

8. J. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2005).

9. S. Nayar, V. Branzoi, and T. E. Boult, “Programmable Imaging: Towards a Flexible Camera,” Int. J. Comput. Vis. 70(1), 7–22 (2006).

10. P. Huang and S. Zhang, “3-D Optical Measurement using Phase Shifting Based Methods,” Proc. SPIE 6000, 15–24 (2005).

11. E. Reinhard, et al., High Dynamic Range Imaging: Acquisition, Display and Image-based Lighting (Morgan Kaufmann Publishers, 2005).

12. R. Banyal and B. Prasad, “Nonlinear response studies and corrections for a liquid crystal spatial light modulator,” Pramana J. Phys. 74(6), 961–971 (2010).

13. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge Univ. Press, 2001).

14. N. Menn, Practical Optics (Elsevier Academic Press, 2004).


Figures (15)

Fig. 1 Reflectance model. When light is incident on a surface, it is reflected as a diffuse lobe and a specular lobe, and the balance between the two lobes is determined by the smoothness of the reflecting surface.

Fig. 2 Fringe image distortion due to saturation and blooming. As shown inside the red dashed circle, saturation and blooming cause a discontinuous and shrunken fringe image.

Fig. 3 Schematic diagram of the proposed system, consisting of a fringe projection part and an imaging part with an LC-SLM. L3 delivers a grating fringe image onto the object surface, and the scene of the object surface with the fringe is first imaged onto the LC-SLM. The image then reaches the camera through L5. After attenuation by the LC-SLM, the intensities of the previously saturated CCD pixels fall below the saturation level.

Fig. 4 Fringe changes without and with an object. (a) With no object, the fringe was projected on the plane as parallel, regular lines; this was called the reference fringe. (b) With an object, the fringe projected on the surface was distorted.

Fig. 5 Fringe shift due to object shape. Using this geometrical relationship, the three-dimensional shape of an object can be measured from the phase information.

Fig. 6 Adaptive high dynamic range method, suggested by Nayar et al. [7], which overcomes the camera saturation problem by attenuating the excessive light intensity with the LC-SLM.

Fig. 7 Two-lens illumination system [14]. Light is transmitted through lenses L1 and L2, and lens L3 delivers the fringe image of the grating onto the object surface. Adjusting the area of the aperture stop D1 blocks all light except that entering the grating.

Fig. 8 Unwanted grating pattern due to the SLM pixel boundaries. The window of an SLM consists of liquid crystal pixels and their physical boundaries, which block incoming light. (a) SLM structure. Gray regions are individual pixels that transmit incoming light, and the black lines are the physical pixel boundaries with low transmittance. (b) Generation of the unwanted grating pattern. The incoming light is partially blocked by the SLM pixel boundaries while passing through the SLM.

Fig. 9 Imaged unwanted grating pattern and its removal by moving the SLM away from the focal plane. (a) Imaged unwanted grating pattern. When the SLM is positioned at the common focus of the two lenses, light coming from A is blocked by an SLM pixel boundary, whereas light from B is first imaged on an SLM pixel, passes through it, and is finally imaged on the camera. (b) Removal of the unwanted grating pattern by moving the SLM off the focal plane. When the SLM is positioned off-focus, part of the light bundle coming from A passes through the SLM pixels: light bundle 2 (dark gray) is still blocked by a pixel boundary, yet light bundle 1 (light gray) passes through the SLM and reaches the CCD image plane.

Fig. 10 Diagram of the experimental set-up for determining the optimal position of the SLM. When the SLM was moved in the S direction, the unwanted grating pattern became weaker, while the image contrast initially decreased and then increased again (second-highest visibility position).

Fig. 11 Fringe image contrast according to SLM displacement. The exact focal position, s = 0, produced the highest contrast, yet the unwanted grating due to the SLM pixel boundaries was also strong. When the SLM was moved 3 mm, the unwanted grating became invisible and the image contrast became very low. When the SLM was moved 6 mm, the image contrast became stronger and the unwanted grating remained invisible. When the SLM was moved 9 mm, the image contrast became weaker again. (a) s = 0 mm, the highest contrast. (b) s = 3 mm, the lowest contrast. (c) s = 6 mm, the second-highest contrast. (d) s = 9 mm, lower contrast. (e) Visibility variation with SLM displacement in the direction of s.

Fig. 12 Experimental set-up, including the fringe projection part with a halogen lamp and the imaging part with the LC-SLM.

Fig. 13 Experimental results for 3 geometric objects. In the original fringe images without adaptive attenuation (2nd column), region-wide saturation occurred. When applying the SLM mask (4th column), fringes were obtained in the originally saturated region, as shown in the 3rd column. When using a series of adaptive fringe images, the object shapes were correctly reconstructed, as shown in the 5th column.

Fig. 14 Experimental results for 3 objects. 1st column: object images without fringe; 2nd column: fringe images distorted by saturation without attenuation; 3rd column: adjusted images based on LC-SLM attenuation. After the attenuation procedure, the fringes appear clearly.

Fig. 15 Three-dimensional depth information for the 3rd measurement object based on analyzing the fringe images. (a) The part in the red dashed circle could not be analyzed due to the absence of a fringe image, as seen in the 2nd column, 3rd row of Fig. 14. When the LC-SLM attenuated the pixels corresponding to the saturated parts, the fringes were well imaged and correct three-dimensional depth information was obtained.

Equations (15)

$$u = h(\tan\theta_p + \tan\theta_c)$$

$$\varphi = \frac{2\pi}{\lambda}u$$

$$I(x,y) = a(x,y) + b(x,y)\cos\left\{\frac{2\pi}{\lambda_x}x + \frac{2\pi}{\lambda_y}y + \varphi(x,y)\right\}$$

$$h = \frac{\lambda}{2\pi(\tan\theta_p + \tan\theta_c)}\,\varphi = k\varphi$$

$$I_n(x,y) = a(x,y) + b(x,y)\cos\left\{\frac{2\pi}{\lambda_x}x + \frac{2\pi}{\lambda_y}y + \varphi(x,y) + \frac{\pi}{2}n\right\},\quad n = 0,1,2,3$$

$$\varphi(x,y) = \tan^{-1}\left[\frac{I_4(x,y) - I_2(x,y)}{I_1(x,y) - I_3(x,y)}\right] - \left(\frac{2\pi}{\lambda_x}x + \frac{2\pi}{\lambda_y}y\right) + 2N\pi = \varphi_p(x,y) + 2N\pi$$

$$\mathbf{u} = H\mathbf{l}$$

$$u = \frac{h_{11}l + h_{12}m + h_{13}n}{h_{31}l + h_{32}m + h_{33}n},\qquad v = \frac{h_{21}l + h_{22}m + h_{23}n}{h_{31}l + h_{32}m + h_{33}n}$$

$$\mathbf{u} = \begin{pmatrix} wu \\ wv \\ w \end{pmatrix},\qquad H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix},\qquad \mathbf{l} = \begin{pmatrix} l \\ m \\ 1 \end{pmatrix}$$

$$E(l,m) = T(u,v)\,L(x,y,z)$$

$$I(l,m) = \lambda E(l,m)$$

$$T_{t+1} = \alpha T_t$$

$$T_{t+1} = T_t + \beta(T_{\max} - T_t)$$

$$V(x,y) = \frac{\sqrt{\{I_4(x,y) - I_2(x,y)\}^2 + \{I_1(x,y) - I_3(x,y)\}^2}}{2}$$
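As an illustration of the 4-bucket equations above, the wrapped phase and the fringe visibility can be computed per pixel from four fringe images shifted by π/2. This is a minimal sketch under the convention that image k corresponds to shift index n = k − 1; it is not the authors' implementation, and all names are assumed.

```python
import math
import numpy as np

def four_bucket_phase(I1, I2, I3, I4):
    """Recover wrapped phase phi_p = atan2(I4 - I2, I1 - I3) and
    visibility V = sqrt((I4 - I2)^2 + (I1 - I3)^2) / 2 per pixel."""
    I1, I2, I3, I4 = (np.asarray(I, dtype=float) for I in (I1, I2, I3, I4))
    phase = np.arctan2(I4 - I2, I1 - I3)
    visibility = 0.5 * np.sqrt((I4 - I2) ** 2 + (I1 - I3) ** 2)
    return phase, visibility

# Synthetic single-pixel check: background a = 100, modulation b = 50,
# true phase phi = pi/4, I_n = a + b*cos(phi + n*pi/2) for n = 0..3.
a, b, phi = 100.0, 50.0, math.pi / 4
images = [a + b * math.cos(phi + n * math.pi / 2) for n in range(4)]
p, v = four_bucket_phase(*images)
# p recovers phi (pi/4) and v recovers the modulation b (50)
```

Low-visibility pixels (small V) mark regions where the fringe is washed out by saturation, which is exactly where the LC-SLM attenuation is applied before re-capturing the phase-shift images.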