Scaled SFS method for Lambertian surface 3D measurement under point source lighting

Open Access

Abstract

A Lambertian surface is an important assumption in shape from shading (SFS) and is widely used in many measurement cases. In this paper, a novel scaled SFS method is developed to measure the shape of a Lambertian surface with absolute dimensions. A more accurate light source model is investigated under the illumination of a simple point light source, and the relationship between the surface depth map and the recorded image grayscale is established by introducing the camera intrinsic matrix into the model. Together with the constraints of brightness, smoothness, and integrability, the dimensioned surface shape can be obtained by analyzing only one image with the scaled SFS method. Simulations of the algorithm show a close match between the simulated structures and the reconstructions, with a root mean square error (RMSE) below 0.6 mm. A further experiment measuring the internal surface of a PVC tube gives an overall measurement error below 2%.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

SFS is a common nondestructive testing (NDT) technology that can recover 3D shape from a single image by analyzing the light intensity distribution. Because it places few requirements on system complexity, it has become increasingly important and has shown great potential in medical testing [1], endoscopy [2], remote measurement [3,4], etc.

A Lambertian surface is a general assumption in many practical applications, such as the ground surface [5], the human face [6], and certain mechanical parts [7]. Under this assumption, many efforts have been made to apply SFS to 3D measurements [8–10]. However, most SFS applications assume parallel light illumination, which is difficult to guarantee in practice and may introduce errors into the result [11,12]. To solve this problem, some studies focus on methods based on a point light source model. For example, Huang et al. [13] developed a method employing a point light source and showed several advantages over methods using the parallel source model. Ju et al. [14] applied a point light source model to the minimization-based SFS method and verified its feasibility on synthetic images. Unfortunately, these studies [13,14], together with other related works [15–18], rely on strong simplifications and neglect effects such as the attenuation of the light flux and the anisotropy of the source, which can make the methods inaccurate. Moreover, SFS by itself cannot recover dimensional information from the measurements [19,20]. Some methods have been proposed that combine SFS with other optical techniques [21,22]. However, these methods require additional equipment and more complex algorithms, which makes them hard to realize.

In this paper, a novel scaled SFS method under the illumination of a point light source is proposed, combining a more accurate point light source model with the intrinsic matrix of the camera. The method can measure the shape of a surface with absolute dimensions using only one image. The experiments show that the precision of the method is about 1% on synthetic images and 2% on practical images.

2. Modeling of point light source

2.1. Point light source model

In SFS technology, the light source model is very important for measurement accuracy. Compared with the simple light source models used in [13–18], an overall model of the point light source is developed here. It builds a detailed relationship among the point light source, the Lambertian surface, and the grayscale of the charge-coupled device (CCD) pixels, and it considers four important processes during the propagation of the light: the anisotropy of the source, the reflection on the Lambertian surface, the refraction in the camera lens, and the attenuation of the light flux during propagation.

Due to the anisotropy of the source, the intensity Is(ϕ) of a light ray depends on its direction of propagation, which can be expressed as shown in Eq. (1):

$$I_s(\phi) = I_0 \cos^{\mu}(\phi) \tag{1}$$
where ϕ is the angle between the optical axis of the source and the vector of the light flux, I0 is the intensity along the optical axis, and μ is the intensity distribution parameter of the source.

When the light beam reaches point M on the Lambertian surface, its intensity E(M) after reflection satisfies Eq. (2):

$$E(M) = \frac{\rho}{\pi}\, I_s(\phi)\cos(\theta) \tag{2}$$
where θ is the angle between the normal vector at point M and the vector of the incoming ray, and ρ is the albedo of point M, which is treated as a constant.

Let pixel m be the projection of point M. Then the light intensity ε(m) at m can be expressed as

$$\varepsilon(m) = E(M)\,\frac{\pi}{4}\left(\frac{d}{f}\right)^{2}\cos^{4}(\alpha) \tag{3}$$
where d and f are the diameter and focal length of the camera lens, respectively, and α is the angle between the vector of the light beam arriving at pixel m and the optical axis of the camera.

Finally, we obtain the recorded grayscale Rorg(m) of pixel m by adding the attenuation factor 1/r(M)² and the CCD proportionality coefficient β, as shown in Eq. (4):

$$\begin{cases} R_{org}(m) = K\,\dfrac{1}{r(M)^{2}}\cos^{4}(\alpha)\cos(\theta)\cos^{\mu}(\phi) \\[2mm] K = \dfrac{\beta I_{0}\rho}{4}\left(\dfrac{d}{f}\right)^{2} \end{cases} \tag{4}$$
where r(M) is the distance between M and the source.
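
To make the forward model concrete, the following is a minimal numpy sketch of Eqs. (1)–(4) that predicts the grayscale at each pixel from the surface geometry. The function and argument names are ours, not the paper's, and the surface normals are assumed to be unit vectors facing the source.

```python
import numpy as np

def predicted_grayscale(M, normals, L_p, L_a, mu, K, cos_alpha):
    """Forward model of Eq. (4): grayscale predicted for surface points M.

    M         : (H, W, 3) surface points in the camera frame
    normals   : (H, W, 3) unit surface normals (assumed to face the source)
    L_p, L_a  : source position and unit vector of the source optical axis
    mu        : intensity-distribution parameter of the source, Eq. (1)
    K         : lumped constant K = (beta * I0 * rho / 4) * (d / f)**2
    cos_alpha : (H, W) cosine of the angle between each pixel ray and
                the camera optical axis
    """
    ray = M - L_p                          # vectors from the source to M
    r = np.linalg.norm(ray, axis=-1)       # distance r(M)
    ray_u = ray / r[..., None]
    cos_phi = ray_u @ L_a                  # anisotropy term of Eq. (1)
    cos_theta = np.einsum('hwc,hwc->hw', normals, -ray_u)  # Lambertian term, Eq. (2)
    # Eq. (4): K * cos^4(alpha) * cos(theta) * cos^mu(phi) / r^2
    return K * cos_alpha**4 * cos_theta * np.clip(cos_phi, 0.0, None)**mu / r**2
```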

2.2. Evaluation of the model

The proposed model was evaluated on a Lambertian plane. A single LED manufactured by CREE was used as the point light source, and the distribution of the luminous intensity was recorded by a CCD camera. The evaluation result is shown in Fig. 1. Figure 1(a) compares the measurements with the point light source model given in section 2.1: the colored point cloud is the measured distribution, and the black points indicate the intensity distribution simulated according to Eq. (4). Figure 1(b) shows the deviation between them; the RMSE equals 1.89. This verifies the accuracy of the proposed model.
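
The paper does not detail how μ and K were determined for this evaluation. One plausible approach, shown purely as a sketch, is to linearize Eq. (4) in log space on the calibration plane, where r, θ, ϕ, and α are known from the geometry, and solve for log K and μ by least squares; all names below are hypothetical and continue the sketch above.

```python
def fit_source_parameters(R_meas, cos_alpha, cos_theta, cos_phi, r):
    """Least-squares fit of K and mu on a calibration plane (a sketch).

    Taking logs of Eq. (4):
        log R = log K + 4 log cos(alpha) + log cos(theta)
                + mu * log cos(phi) - 2 log r,
    which is linear in the unknowns log K and mu.
    """
    y = (np.log(R_meas) - 4.0 * np.log(cos_alpha)
         - np.log(cos_theta) + 2.0 * np.log(r))
    A = np.stack([np.ones_like(y), np.log(cos_phi)], axis=-1)
    (log_K, mu), *_ = np.linalg.lstsq(A.reshape(-1, 2), y.ravel(), rcond=None)
    return np.exp(log_K), mu
```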

Fig. 1. Evaluation of the point light model.

3. Scaled SFS under point lighting

3.1. Combination with intrinsic matrix

For convenience, we first define the world coordinate frame FW, the camera frame FC, and the image coordinate frame FI. In this paper, FW and FC are set to be coincident to simplify the problem. Suppose that in FC a 3D point M = [X,Y,Z]T is projected into the image domain at the 2D homogeneous point m = [x,y,1]T. The relationship between m and its counterpart M is summarized as

$$s\,m = B\,M \tag{5}$$
where s = Z is called the depth and B is the intrinsic matrix. Note that s is the only variable in Eq. (5) to be determined. Once the intrinsic matrix is known, the back projection of each pixel follows from Eq. (5) as

$$M = s\,B^{-1} m \tag{6}$$
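
In code, the back projection of Eq. (6) turns a depth map into a grid of 3D points with a single matrix inverse. A minimal sketch, assuming pixel coordinates start at the top-left corner with the principal point encoded in B:

```python
def back_project(s, B):
    """Back projection of Eq. (6): M = s * B^{-1} * m for every pixel.

    s : (H, W) depth map, the only unknown
    B : (3, 3) camera intrinsic matrix
    """
    H, W = s.shape
    x, y = np.meshgrid(np.arange(W), np.arange(H))
    m = np.stack([x, y, np.ones_like(x)], axis=-1).astype(float)  # homogeneous pixels
    return s[..., None] * (m @ np.linalg.inv(B).T)                # (H, W, 3) points M
```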

Now, we can use the variable s to represent the terms of Eq. (4) by combining them with Eq. (6). We take Lp as the coordinates of the source in FC, La as the vector of the source's optical axis, L as the vector of the ray arriving at M, V as the normal vector at M, and fp as the focal length in pixels. Note that Lp and La are parameters of the point source and can be calculated through calibration of the system. Then we can easily get

$$Z(X,Y) = 300 \tag{7}$$

Meanwhile, Tankus et al. have proven that ∂M/∂X × ∂M/∂Y and ∂M/∂x × ∂M/∂y are collinear [23], which means FW and FI are interchangeable during the computation. The constant K in Eq. (4) and cos(α) are independent of the surface shape. Thus, Eq. (4) can be further simplified by eliminating cos(α) and K through R(m) = Rorg(m)/(K cos⁴(α)). Combining Eqs. (4) and (7) yields the expression of R(m), and the proposed model can be written as

$$R(m) = \frac{\left((M-L_p)^{T} L_a\right)^{\mu}}{\left(L_a^{T} L_a\right)^{\mu/2}\left((M-L_p)^{T}(M-L_p)\right)^{1+\mu/2}} \cdot \frac{(M_x \times M_y)^{T}(M-L_p)}{\sqrt{(M_x \times M_y)^{T}(M_x \times M_y)}\,\sqrt{(M-L_p)^{T}(M-L_p)}} \tag{8}$$

where M_x = ∂M/∂x and M_y = ∂M/∂y, so that M_x × M_y is parallel to the surface normal V.
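
A sketch of evaluating Eq. (8) from a depth map, with M_x and M_y approximated by finite differences along the image axes, so that M_x × M_y plays the role of the unnormalized surface normal. Again the names are ours, and the function reuses the back-projected points M from the sketch above.

```python
def reflectance_map(M, L_p, L_a, mu):
    """Simplified reflectance R(m) of Eq. (8) from back-projected points M."""
    Mx = np.gradient(M, axis=1)            # dM/dx along image columns
    My = np.gradient(M, axis=0)            # dM/dy along image rows
    n = np.cross(Mx, My)                   # unnormalized surface normal
    v = M - L_p
    vv = np.einsum('hwc,hwc->hw', v, v)    # (M - L_p)^T (M - L_p)
    num = np.clip(v @ L_a, 0.0, None)**mu  # anisotropy numerator
    den = (L_a @ L_a)**(mu / 2) * vv**(1 + mu / 2)
    cos_term = (np.einsum('hwc,hwc->hw', n, v)
                / np.sqrt(np.einsum('hwc,hwc->hw', n, n) * vv))
    return num / den * cos_term
```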

3.2. Iteration of the depth map

We use three constraints, namely the brightness constraint CL, the smoothness constraint CS, and the integrability constraint CI, to build the cost function, as shown in the following:

$$\begin{cases} C_L = \displaystyle\int_{\Omega} \left(I(m) - R(m)\right)^{2} d\Omega \\[2mm] C_S = \displaystyle\int_{\Omega} \left(M_{xx}^{T} M_{xx} + M_{yy}^{T} M_{yy}\right)^{2} d\Omega \\[2mm] C_I = \displaystyle\int_{\Omega} \left(M_{xy} - M_{yx}\right)^{T}\left(M_{xy} - M_{yx}\right) d\Omega \end{cases} \tag{9}$$
where Ω represents the image domain (x,y), Mxx = ∂(∂M/∂x)/∂x, Myy = ∂(∂M/∂y)/∂y, Mxy = ∂(∂M/∂x)/∂y, Myx = ∂(∂M/∂y)/∂x, R(m) is calculated from Eq. (8), and I(m) is the grayscale at m recorded by the camera. Here, the brightness constraint CL is the vital constraint that guarantees consistency between the simulated intensity map and the one acquired by the CCD camera, while CS and CI are auxiliary constraints that ensure the final reconstruction is smooth and integrable. Adding weighting coefficients gives the final cost function

$$C = \lambda_L C_L + \lambda_S C_S + \lambda_I C_I = \int_{\Omega} f\left(s, s_x, s_y, s_{xx}, s_{yy}, s_{xy}, s_{yx}\right) d\Omega \tag{10}$$
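
Combining the sketches above, a discrete counterpart of Eqs. (9) and (10) can be written as follows. The weights λ are illustrative, since the paper does not report the values it used.

```python
def cost(s, B, I, L_p, L_a, mu, lam_L=1.0, lam_S=0.1, lam_I=0.1):
    """Weighted cost of Eq. (10) from the three constraints of Eq. (9)."""
    M = back_project(s, B)
    R = reflectance_map(M, L_p, L_a, mu)
    Mx = np.gradient(M, axis=1)
    My = np.gradient(M, axis=0)
    Mxx = np.gradient(Mx, axis=1)          # d(dM/dx)/dx
    Myy = np.gradient(My, axis=0)          # d(dM/dy)/dy
    Mxy = np.gradient(Mx, axis=0)          # d(dM/dx)/dy
    Myx = np.gradient(My, axis=1)          # d(dM/dy)/dx
    C_L = np.sum((I - R)**2)                                     # brightness
    C_S = np.sum((np.einsum('hwc,hwc->hw', Mxx, Mxx)
                  + np.einsum('hwc,hwc->hw', Myy, Myy))**2)      # smoothness
    d = Mxy - Myx
    C_I = np.sum(np.einsum('hwc,hwc->hw', d, d))                 # integrability
    return lam_L * C_L + lam_S * C_S + lam_I * C_I
```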

Through the calculus of variations and gradient descent optimization, the iterative update of s can be derived as

$$s^{t+1} = s^{t} - \delta\,\frac{\partial C}{\partial s} \tag{11}$$
where δ is the step-size factor and ∂C/∂s can be calculated through Eq. (12):

$$\frac{\partial C}{\partial s} = \frac{\partial f}{\partial s} - \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial s_x}\right) - \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial s_y}\right) + \frac{\partial^{2}}{\partial x^{2}}\left(\frac{\partial f}{\partial s_{xx}}\right) + \frac{\partial^{2}}{\partial y^{2}}\left(\frac{\partial f}{\partial s_{yy}}\right) + \frac{\partial^{2}}{\partial x \partial y}\left(\frac{\partial f}{\partial s_{xy}}\right) + \frac{\partial^{2}}{\partial y \partial x}\left(\frac{\partial f}{\partial s_{yx}}\right) \tag{12}$$
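
The update loop of Eq. (11) is then straightforward. For illustration, the variational derivative of Eq. (12) is replaced below by a brute-force finite-difference gradient of the cost; this is far slower than the analytic Euler-Lagrange expression the paper uses, but it shows the structure of the iteration.

```python
def numerical_dC_ds(s, *args, h=1e-4):
    """Finite-difference stand-in for Eq. (12) (illustrative, very slow)."""
    g = np.zeros_like(s)
    for idx in np.ndindex(*s.shape):
        sp = s.copy(); sp[idx] += h
        sm = s.copy(); sm[idx] -= h
        g[idx] = (cost(sp, *args) - cost(sm, *args)) / (2.0 * h)
    return g

def iterate_depth(s0, *args, delta=1e-3, n_iter=100):
    """Gradient descent of Eq. (11): s_{t+1} = s_t - delta * dC/ds."""
    s = s0.copy()
    for _ in range(n_iter):
        s = s - delta * numerical_dC_ds(s, *args)
    return s
```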

3.3. Simulation with the synthetic examples

To evaluate the proposed method, synthetic images of three types of structure were created according to Eq. (4). Table 1 gives the corresponding mathematical expressions and the ranges of the variables along the X and Y axes. Figure 2 shows the results obtained with our method together with their RMSEs: the first column contains the generated images, the second column the corresponding depth maps, and the third and final columns the initial depth maps of the iteration and the calculated results.
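
A synthetic test of this kind can be sketched by back-projecting a known depth map and rendering it with the simplified model of Eq. (8). All parameter values and names below (B, Z_true, Z_est, the Gaussian bump) are illustrative, not taken from Table 1, and the grid is kept small because of the brute-force gradient above.

```python
H, W = 48, 64                                 # small grid for the sketch
B = np.array([[100.0, 0.0, W / 2],
              [0.0, 100.0, H / 2],
              [0.0, 0.0, 1.0]])               # assumed intrinsic matrix
x, y = np.meshgrid(np.arange(W), np.arange(H))
Z_true = 300.0 + 20.0 * np.exp(-((x - W / 2)**2 + (y - H / 2)**2) / 15.0**2)
M = back_project(Z_true, B)
L_p = np.array([0.0, 0.0, 0.0])               # source at the optical center
L_a = np.array([0.0, 0.0, 1.0])               # source axis along the camera axis
I_syn = reflectance_map(M, L_p, L_a, mu=2.0)  # synthetic image R(m)
Z_est = iterate_depth(np.full((H, W), 290.0), B, I_syn, L_p, L_a, 2.0)
```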

Table 1. Formula and the range of variables (unit in mm).

Fig. 2. Reconstruction results of the synthetic examples.

From the results, one can see that the RMSE increases with the depth range. Dividing each RMSE by its range in the Z direction gives percentage errors of 1.19%, 1.01%, and 1.16%, which demonstrates the precision of the proposed method.
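
The percentage figures correspond to normalizing each RMSE by the Z range of the true surface; with the hypothetical Z_true and Z_est arrays from the sketch above:

```python
rmse = np.sqrt(np.mean((Z_est - Z_true)**2))
percentage_error = 100.0 * rmse / (Z_true.max() - Z_true.min())
```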

4. Experiment and analysis

To validate our proposal, an experiment was carried out by measuring a PVC tube, whose inner surface can be treated as a nearly ideal Lambertian surface; the inner diameter is 100 mm, as shown in Fig. 3. An LED was used as the point light source, and the image was recorded by a CCD camera with a resolution of 640 × 480.

Fig. 3. Tube used in the measurement.

The result is shown in Fig. 4. Figure 4(a) is the 3D point cloud of the tube calculated by the proposed method, and Fig. 4(b) compares the measured profiles with the ground truth. The measurement results are clearly in accordance with the actual shape of the tube.

Fig. 4. Tube measurement result.

The deviation between the result and the 3D model of the tube is also analyzed, as shown in Fig. 5. The statistics of the deviation are given in Table 2, where the third column is obtained by dividing the second column by the radius of the tube. One can see that the relative RMSE of our method is 1.97%, which is slightly higher than in the simulations. This result demonstrates the applicability of our method to the 3D measurement of Lambertian surfaces.

Fig. 5. Deviation between the measurement result and the 3D model of the tube.

Table 2. Statistics of deviation.

5. Conclusion

This paper presented a scaled SFS method under near point source lighting for the 3D measurement of Lambertian surfaces. In our method, a more practical near point light source model is studied instead of the traditional parallel source model. The new model takes into account the attenuation of the light flux, the anisotropy of the source, and the refraction of the light flux at the camera lens, which makes it more suitable for actual cases. Coupled with the intrinsic matrix of the camera, the relationship between the surface depth map and the recorded image grayscale is established, which enables the proposed method to measure a Lambertian surface with absolute dimensions. Analysis of the measurements on both synthetic and practical images verified the feasibility and precision of the new method.

Funding

National Natural Science Foundation of China and Civil Aviation Administration of China (CAAC) (U1633101, U1733119); Fundamental Research Funds for the Central Universities of Civil Aviation University of China, special funds (3122014H004).

References and links

1. M. T. E. Melegy, A. S. Abdelrahim, and A. A. Farag, “Better Shading for Better Shape Recovery,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR, 2014), pp. 2307–2312.

2. D. Roxo, N. Gonçalves, and J. P. Barreto, “Perspective shape from shading for wide-fov near-lighting endoscopes,” in Iberian Conference on Pattern Recognition and Image Analysis, Springer, Berlin, Heidelberg (2013), pp. 21–30. [CrossRef]  

3. H. Tang, L. Yan, and P. Gao, “A modified SFS algorithm based on stereo images for the three-dimension reconstruction of Urban buildings,” in Proceedings of IEEE Conference on Geoscience and Remote Sensing Symposium (IEEE, 2009), pp. IV-390. [CrossRef]  

4. G. D. Martino, A. D. Simone, A. Iodice, D. Riccio, and G. Ruello, “SAR Shape from Shading in suburban areas,” In Urban Remote Sensing Event (JURSE) (2015), pp. 1–4.

5. L. Yang, Y. Xue, Y. Li, C. Li, J. Guang, X. He, J. Dong, and T. Hou, “Uncertainty from Lambertian surface assumption in satellite aerosol retrieval,” in Proceedings of IEEE Conference on Geoscience and Remote Sensing Symposium (IEEE, 2012), pp. 3662–3665. [CrossRef]  

6. Y. Sun, J. Dong, M. Jian, and L. Qi, “Fast 3D face reconstruction based on uncalibrated photometric stereo,” Multimedia Tools Appl. 74(11), 3635–3650 (2015). [CrossRef]  

7. L. Yang, E. Li, T. Long, J. Fan, Y. Mao, Z. Fang, and Z. Liang, “A welding quality detection method for arc welding robot based on 3D reconstruction with SFS algorithm,” Int. J. Adv. Manuf. Technol. 94(1–4), 1209–1220 (2018). [CrossRef]  

8. J. Hou, “A novel cloud surface shape estimation method based on SFS,” in Proceedings of IEEE Conference on Computer Science and Network Technology (IEEE, 2016 5th), pp. 80–83. [CrossRef]  

9. Y. Zhang and J. Peng, “Surface shape estimation of textureless area using shape from shading for Landsat imagery,” in Remote Sensing of the Atmosphere, Clouds, and Precipitation V (2014), Vol. 9259, p. 92591S.

10. L. Min, D. Li, and S. Dong, “3D Surface Roughness Measurement Based on SFS Method,” in Proceedings of IEEE Conference on Intelligent Human-Machine Systems and Cybernetics (IEEE, 2016 8th), Vol. 2, pp. 484–488. [CrossRef]  

11. J. Wu, P. L. Rosin, X. Sun, and R. R. Martin, “Improving shape from shading with interactive tabu search,” J. Comput. Sci. Technol. 31(3), 450–462 (2016). [CrossRef]  

12. H. Bingwei, C. Zhipeng, L. Dongyi, and Z. Xiaolong, “Research on reconstruction method for unknown objects through incorporating SFS algorithm and active vision technology,” Chin. J. Sci. Instrum. 4, 002 (2012).

13. X. Huang, M. Walton, G. Bearman, and O. Cossairt, “Near light correction for image relighting and 3D shape recovery,” in Proceedings of IEEE Conference on Digital Heritage (IEEE, 2015), pp. 215–222. [CrossRef]

14. Y. C. Ju, A. Bruhn, and M. Breuß, “Variational perspective shape from shading,” in International Conference on Scale Space and Variational Methods in Computer Vision, Springer, Cham, Heidelberg (2015), pp. 538–550.

15. J. Ackermann, S. Fuhrmann, and M. Goesele, “Geometric Point Light Source Calibration,” in VMV (2013), pp. 161–168.

16. T. Aoto, T. Taketomi, T. Sato, Y. Mukaigawa, and N. Yokoya, “Position estimation of near point light sources using a clear hollow sphere,” in Proceedings of IEEE Conference on Digital Heritage (IEEE, 2012), Vol. 1, pp. 3721–3724.

17. A. Giachetti, C. Daffara, C. Reghelin, E. Gobbetti, and R. Pintus, “Light calibration and quality assessment methods for Reflectance Transformation Imaging applied to artworks’ analysis,” in Optics for Arts, Architecture, and Archaeology V (2015), Vol. 9527, p. 95270B.

18. H. L. Shen and Y. Cheng, “Calibrating light sources by using a planar mirror,” J. Electron. Imaging 20(1), 013002 (2011). [CrossRef]  

19. J. Ma, P. Zhao, and B. Gong, “A shape-from-shading method based on surface reflectance component estimation,” in Proceedings of IEEE Conference on Fuzzy Systems and Knowledge Discovery (IEEE, 2012 9th), pp. 1690–1693. [CrossRef]  

20. J. Wang, F. H. Wu, X. L. Li, and J. C. Wang, “Smoothing of SFS Reconstructed Surface Based on Genetic Algorithm,” Key Eng. Mater. 579, 877–884 (2014).

21. T. S. F. Haines and R. C. Wilson, “Combining shape-from-shading and stereo using Gaussian-Markov random fields,” in Proceedings of IEEE Conference on Pattern Recognition (IEEE, ICPR, 2008), pp. 1–4. [CrossRef]  

22. M. G. H. Mostafa, S. M. Yamany, and A. A. Farag, “Integrating stereo and shape from shading,” in Proceedings of IEEE Conference on Image Processing (IEEE, 1999), Vol. 3, pp. 130–134.

23. A. Tankus, A. N. Sochen, and Y. Yeshurun, “Shape-from-shading under perspective projection,” Int. J. Comput. Vis. 60(1), 21–43 (2005).
