
Resolution enhancement for topography measurement of high-dynamic-range surfaces via image fusion

Open Access

Abstract

In this paper, we introduce a method and algorithm for resolution enhancement of low-resolution surface topography data by fusing them with corresponding high-resolution intensity images. This fusion is achieved by linking the three-dimensional topographical map to its intensity image via an intrinsic image-based shape-from-shading algorithm. Through computational simulation and physical experiments, the proposed method's effectiveness and repeatability have been evaluated, and its computational cost has been shown to be lower than that of other state-of-the-art algorithms. The proposed method can easily be integrated with high-speed in-line measurements of high-dynamic-range surfaces.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recently, products with functional surfaces have drawn wide attention in scientific research and engineering applications, including the areas of tribology, thermal conduction, hydrodynamic control, optics, solar energy, medical implantation, biomimetics, bioelectronics and microelectronics. These functional surfaces comprise a series of macro-scale (up to metres) engineering products embedded with micro-/nano-scale structures [1–5]. Their high-dynamic-range (HDR) geometrical characteristics (defined here such that, for the surface topographies of interest, the ratio of the largest to the finest scale is greater than 10 000:1) lead to a number of requirements for state-of-the-art measuring instruments [6–9] (see Fig. 1).

Fig. 1 Requirements of HDR surface products in three dimensions.

One of these requirements, for example, is the need for high-resolution lateral measuring capabilities. The physical lateral resolution limits of conventional mechanical tip/stylus and three-dimensional (3D) far-field optical measuring systems are determined by their tip geometry and optical transfer characteristics, respectively. Therefore, these systems cannot easily measure fine structures with lateral dimensions below 1 μm [7,10]. To surpass this lateral resolution limit and allow nanometre-scale features to be measured, techniques have been developed that include scanning probe microscopy (SPM) and optical super-resolution (OSR) microscopy, such as stochastic optical reconstruction microscopy (STORM), stimulated emission depletion (STED) microscopy, photoactivated localisation microscopy (PALM) and structured illumination microscopy (SIM). However, a fundamental limitation of these super-resolution techniques is that they require additional pre-processing of the sample, such as fluorescent staining, and, in addition, their measuring ranges are limited, usually to scales of a few to several tens of micrometres [11,12].

To increase the measuring range or field-of-view (FOV), a number of HDR measuring techniques have been proposed based on stitching of a series of local high-resolution measurement data. For example, Yan et al. developed an iterative six degree-of-freedom stitching and weighted fusion algorithm for sub-aperture testing (SAT) of aspheric mirrors [13]. Lei et al. proposed a sampling noise-resistant stitching technique for large-area surface measurement with coherence scanning interferometry (CSI) systems [14]. Liu et al. developed a Gaussian process modelling-based stitching and fusion method for large surface measurement which provides higher stitching accuracy [15]. However, stitching techniques require a number (determined by aperture size and the total measuring range) of repetitive high-resolution measurements over each sub-aperture area. This demands repetitive setup operations for each sub-aperture measurement and a complete stitching process can be time consuming with potentially high data overheads.

A further requirement of HDR surface measurement is improved measuring efficiency [16]. For example, Preibisch et al. developed a Fourier shift theorem-based fast stitching and fusion technique for biological volume images [17]. Yu and Peng developed a high-speed multi-scale registration-based stitching technique for gigabyte-scale 3D biological volume data [18]. Bria and Iannello developed a free toolbox, TeraStitcher, for high-speed stitching of teravoxel-sized images with low memory cost (less than 8 GB) [19]. However, high-speed stitching techniques have received limited uptake in the mechanical manufacturing industry (although they are applied in optics manufacture), as they require complex additional operations compared to imaging of biological specimens, such as repetitive specimen adjustment and surface searching in each local measurement.

In this paper, we propose a fast resolution-enhancement method which can produce a high-resolution surface height map by fusing a low-resolution height map with its corresponding high-resolution intensity image, based on recently developed shape-from-shading techniques [20–23]. With this solution, general sub-aperture 3D surface stitching can be replaced by fusing a global large-area low-resolution height map with several sub-aperture intensity images at different local regions, which are separately captured using a 3D sensor and 2D vision cameras. This new approach can be significantly faster than sub-aperture stitching and many recently developed multi-sensor fusion techniques [24–28], which are used with coordinate data only. For review of general coordinate data fusion techniques, readers can refer elsewhere [29,30].

2. Intrinsic image-based shape-from-shading linkage model

Fusion of the geometrical coordinate data of a 3D surface and its 2D intensity image under a specific illumination condition relies on determining a linkage model connecting the two sets of data. Shape-from-shading (SFS) is one such technique [31]. Many SFS models have been proposed, which are reviewed elsewhere [20,23]. In this paper, an intrinsic image-based lighting model [32] is employed, which can approximate complex shading factors, including natural or remote/near point lighting, Lambertian shading, specular reflectance, shadows, inter-reflections and material-related albedo variations.

Lambertian diffusion is the simplest shading model and is widely used in SFS. A beam of light incident on a perfectly Lambertian surface is reflected uniformly in all directions. Thus, the shading (reflectance) image observed from any viewpoint is the same, and the reflected intensity is determined by the radiance received at each small surface facet and the local albedo (as shown in Fig. 2(a)), i.e.

$$R_L = A\rho\cos\theta_i = A\rho\,\mathbf{S}\cdot\mathbf{n}, \tag{1}$$
where $A$ is the radiance, $\rho$ is the local albedo, $\theta_i$ is the angle of incidence, and $\mathbf{S}$ and $\mathbf{n}$ are respectively the normalised lighting direction vector and the surface normal. Unfortunately, recovery of the surface geometrical information $\mathbf{n}$, with three unknowns, from the single Eq. (1) is ill-posed [23].
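For illustration, a minimal numerical sketch of Eq. (1) is given below (in Python/NumPy, not the authors' implementation language). The normal convention follows Eq. (9) later in the paper, and the clamping of back-facing facets to zero is an added assumption.

```python
import numpy as np

def lambertian_shading(Z, light, A=1.0, rho=1.0):
    """Evaluate Eq. (1), R_L = A * rho * (S . n), per pixel of height map Z."""
    zy, zx = np.gradient(Z)                     # surface slopes z_y, z_x
    omega = 1.0 / np.sqrt(1.0 + zx**2 + zy**2)  # normalisation factor (Eq. (9))
    n = np.stack([zx * omega, zy * omega, omega], axis=-1)  # unit normals
    S = np.asarray(light, dtype=float)
    S = S / np.linalg.norm(S)                   # normalised lighting direction
    return A * rho * np.clip(n @ S, 0.0, None)  # cos(theta_i), clamped at zero

# Example: shade a spherical cap under oblique lighting.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
Z = np.sqrt(np.clip(2.0 - x**2 - y**2, 0.0, None))
R_L = lambertian_shading(Z, light=[1.0, 1.0, 1.0])
```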

Fig. 2 Lambertian diffusion (a) and Gaussian specular reflectance (b).

Realistic shading models are usually expressed as a hybrid composition of Lambertian diffusion, background illumination and near-lighting-induced radiance variations, plus local specular reflection, shadows and inter-reflections, i.e.

$$R_H = \sum_k S_k\left[R_L(\mathbf{n}) + R_S(\mathbf{n}) + \cdots\right], \tag{2}$$
where $S_k$ accounts for the $k$-th remote light source and $R_S$ is a local specular term. As presented in Fig. 2(b), $R_S$ is usually expressed as a Gaussian (bidirectional) reflectance distribution function [23], i.e.
$$R_S = A\gamma\exp\!\left[\frac{\cos^2(\theta_r - \theta_i)}{m^2}\right], \tag{3}$$
where $\gamma$ is the specular reflectance coefficient, $\theta_r$ is the angle of emergence, i.e. the angle $(\mathbf{n}, \mathbf{V})$, where $\mathbf{V}$ is the viewing direction vector, and $m$ corresponds to the local surface roughness [33]. The challenge of exactly recovering the surface geometry from such an under-determined least-squares optimisation problem, i.e.
$$\min_{\rho,\gamma}\left\|I - R(\mathbf{n})\right\|^2, \tag{4}$$
is the ambiguity problem of shape-from-shading [34], where I is the corresponding pixel intensity captured from a viewing sensor.

Intrinsic image decomposition [32] has recently been applied to SFS by approximating an arbitrary realistic lighting environment using a first-order variation problem to overcome the complexity of Eq. (4), i.e.

$$R(\mathbf{n}) = \rho S(\mathbf{n}) + \beta, \tag{5}$$
where $S(\mathbf{n})$ is the surface normal-related Lambertian shading, $\rho$ is the local surface albedo, $\beta$ accounts for spatially independent local components, including shadows, inter-reflections and specular reflections, and $R$ is the reflectance intensity at a specific pixel position. With the intrinsic decomposition, an intensity image of an object can be decomposed into an intrinsic part $\rho S$ and an extrinsic add-in $\beta$.

Compared to the hybrid model in Eq. (2), the intrinsic image model of Eq. (5) is highly simplified. To further reduce the solution ambiguity, adding a penalty term that constrains the local parameters $\rho$ and $\beta$ to be piecewise smooth has been demonstrated to be effective [22,35]. The resulting smooth local lighting parameters can approximate the original intensity image of any geometry under an arbitrarily natural and complex illumination scenario. Figure 3 presents a shading approximation example using the smoothed intrinsic image lighting model, from which we can see that the piecewise smooth albedo and independent components synthetically map the global Lambertian shading to the original locally shadowed image $I$.
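As a sketch of how Eq. (5) recomposes an intensity image from its intrinsic parts (a hypothetical illustration; the entrywise product convention follows the simplification given with Eq. (7) below):

```python
import numpy as np

def intrinsic_recompose(rho, S, beta):
    """Eq. (5): R = rho * S + beta, entrywise, for per-pixel maps rho, S, beta."""
    return rho * S + beta

# With piecewise-smooth rho and beta estimated elsewhere, R approximates the
# measured image I even in the presence of shadows and specular highlights.
```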

Fig. 3 Intrinsic image decomposition-based lighting approximation. (a) Illustrated surface Z; (b) locally shadowed image I; (c) Lambertian shading S; (d) local albedo ρ; (e) independent add-in β; and (f) approximate intensity image R. All grey scales are in the range 0–255.

3. Fusion computation

3.1 Fusion objective function

With the intrinsic image-based lighting model, a surface topography and its corresponding intensity image can be connected. However, if the surface normal in Eq. (5) is obtained from smoothed sample surface data, the intrinsically recomposed intensity image will not completely correspond with the original intensity image. Figure 4(b) presents such a recomposed result, with the surface normal extracted from the smoothed (Gaussian filtered, with a cut-off of five pixels) surface shown in Fig. 4(a). In this case, $(I - R)$ accounts for the high-spatial-frequency components that are not present in the input surface data but are captured by the intensity image from a high-resolution machine vision sensor.

Fig. 4 Intrinsic image recomposition with a smoothed surface input. (a) A smooth surface input Z0; and (b) intrinsically recomposed image R from Z0, with the grey scale in the range 0–255.

Therefore, a minimisation objective function can be designed, as shown in Eq. (6), to recover the high-spatial-frequency topography information for the input smooth surface data, i.e.

$$f(Z) = \left\|I - R(\mathbf{n}_Z)\right\|^2 + \mu_{Z1}\left\|Z - Z_0\right\|^2 + \mu_{Z2}\left\|\Delta Z\right\|^2, \tag{6}$$
where $Z$ is the final reconstructed high-resolution surface map, $\mathbf{n}_Z$ is the surface normal matrix extracted from $Z$, $\Delta$ is the Laplacian operator, and $\mu_{Z1}$ and $\mu_{Z2}$ are respectively the control parameters for reconstruction fidelity and smoothness. This objective function constrains the recovered surface $Z$ to be close to the input smooth surface $Z_0$; thus, the reconstruction ambiguity of the general SFS problem can effectively be avoided.
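A sketch of how the objective in Eq. (6) could be evaluated is shown below; the renderer R(·) is passed in as a function (here a hypothetical callable combining Eqs. (7)-(9)), and scipy's Laplacian stencil stands in for Δ.

```python
import numpy as np
from scipy.ndimage import laplace

def fusion_objective(Z, Z0, I, render, mu_z1, mu_z2):
    """f(Z) = ||I - R(n_Z)||^2 + mu_Z1 ||Z - Z0||^2 + mu_Z2 ||Lap Z||^2 (Eq. (6))."""
    R = render(Z)                                 # predicted intensity from current Z
    fidelity = np.sum((I - R) ** 2)               # data term against the measured image
    closeness = mu_z1 * np.sum((Z - Z0) ** 2)     # stay close to the input map Z0
    smoothness = mu_z2 * np.sum(laplace(Z) ** 2)  # Laplacian smoothness penalty
    return fidelity + closeness + smoothness
```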

Taking into account computational efficiency, we apply the method of Han et al. [21] by simplifying the intrinsic image model to the following form, i.e.

$$R = \alpha S(\mathbf{n}), \tag{7}$$
without significant information loss, where $\alpha$ accounts for all local lighting variations. If $\alpha$ and $S$ are matrix maps instead of scalars, as in Eq. (5), $\alpha S$ denotes the entrywise product of $\alpha$ and $S$. To solve the non-linear problem in Eq. (6), a three-step algorithm (see Fig. 5(a) for the flowchart) is presented in the following section.

Fig. 5 Flowcharts of the resolution enhancement algorithm. (a) The complete algorithm; (b) details of the iteration in step 3.

3.2 Fusion algorithm

3.2.1 Estimation of global lighting

With the input of a surface height map $Z$ and its corresponding intensity image $I$, the global Lambertian shading under a natural lighting condition can be approximated by low-order spherical harmonics or quadratic functions [21,34]. Because global lighting errors can be compensated by the local parameters, we simply use a linear spherical harmonic approximation in this work for efficiency, i.e.

$$S(\mathbf{n}) = \mathbf{m}^{\mathrm{T}}\bar{\mathbf{n}}, \tag{8}$$
where $\mathbf{m}$ is the harmonic coefficient column vector and $\bar{\mathbf{n}} = [\mathbf{n}^{\mathrm{T}}, 1]^{\mathrm{T}}$ is the augmented normal vector; $\mathbf{n}$ can be obtained using the gradient normalisation operator given in Eqs. (9) and (10), i.e.
$$\mathbf{n} = \omega\,[z_x, z_y, 1]^{\mathrm{T}}, \quad \omega = 1\big/\sqrt{1 + \|\nabla z\|^2}, \tag{9}$$
where $\nabla z$ is the gradient of each height map element $z$ along the $x$ and $y$ directions:
$$\nabla z = [z_x, z_y]. \tag{10}$$

By concatenating the pixelated normal vectors into an $N$-by-4 matrix, estimation of the global lighting parameters $\mathbf{m}$ can be carried out by solving the following linear least-squares problem:

$$\min_{\mathbf{m}}\left\|I - \mathbf{m}^{\mathrm{T}}\bar{\mathbf{n}}\right\|^2. \tag{11}$$
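A minimal sketch of this step (Eqs. (8)-(11)) follows, assuming the height map and the intensity image are sampled on the same pixel grid:

```python
import numpy as np

def estimate_global_lighting(Z, I):
    """Solve min_m ||I - m^T n_bar||^2 over all pixels (Eq. (11))."""
    zy, zx = np.gradient(Z)
    omega = 1.0 / np.sqrt(1.0 + zx**2 + zy**2)     # Eq. (9)
    n_bar = np.column_stack([(zx * omega).ravel(),  # N-by-4 matrix of
                             (zy * omega).ravel(),  # augmented normals n_bar
                             omega.ravel(),
                             np.ones(Z.size)])
    m, *_ = np.linalg.lstsq(n_bar, I.ravel(), rcond=None)
    S = (n_bar @ m).reshape(Z.shape)                # global shading S(n), Eq. (8)
    return m, S
```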

Given the low-resolution map in Fig. 4(a) and its high-resolution intensity image in Fig. 3(b), the global lighting estimation process leads to a global shading result S as shown in Fig. 6(a). With the global lighting only, the recomposed shading image lacks local variations, e.g. the circular shadow shown in the bottom left of Fig. 3(b) is missing in Fig. 6(a).

Fig. 6 Demonstration of computational results of (a) global lighting S, (b) local lighting αS and (c) the algorithm output.

3.2.2 Estimation of local lighting variations

With the global shading $S$ obtained, the local lighting parameters can be estimated by solving the least-squares problem in Eq. (12), i.e.

$$\min_{\alpha}\left\|I - \alpha S\right\|^2. \tag{12}$$

To avoid over-fitting, we apply assumptions commonly used in intrinsic image decomposition [21,22,35] and constrain the local variations to a piecewise smooth function. Thus, an adaptive piecewise-smooth penalty term is designed for general homogeneous-material measurement, in the form:

$$\sum_{k\in\Omega_\alpha} w_k\left\|\alpha - \alpha_k\right\|^2, \tag{13}$$
where $w_k$ is the intensity-similarity weighting function, which controls the smoothness of the local lighting $\alpha$ with respect to its $\Omega_\alpha$ neighbours, i.e.
$$w_k = \begin{cases} 0 & \text{if } \left\|I - I_k\right\|^2 > \tau, \\ \exp\!\left(-\dfrac{\left\|I - I_k\right\|^2}{2\sigma^2}\right) & \text{otherwise,} \end{cases} \tag{14}$$
where $I$ and $I_k$ respectively represent the image intensity corresponding to the local lighting $\alpha$ and its $k$-th intensity neighbour, $\sigma^2$ controls the smoothness of the weighting parameters, and $\tau$ is a threshold controlling the degree of fragmentation in the piecewise local lighting segmentation.

The following regularised least-squares minimisation problem is defined, i.e.

$$\min_{\alpha}\left\|I - \alpha S(\mathbf{n})\right\|^2 + \mu_\alpha\sum_{k\in N} w_k\left\|\alpha - \alpha_k\right\|^2, \tag{15}$$
where $\mu_\alpha$ controls the trade-off between estimation accuracy and smoothness. This regularised least-squares problem can be solved using standard matrix calculus. With the local lighting estimated, the recomposed shading result $\alpha S$ is shown in Fig. 6(b); it shows good similarity to the input intensity image in Fig. 3(b), but with some local blurring.

Under controlled lighting conditions, e.g. with simple remote lighting and limited local specularity, shadows or inter-reflections, a uniform local lighting with α = 1 can be used to accelerate the algorithm. In this case, the fusion algorithm simplifies to a combination of global lighting estimation and high-resolution surface map refinement. Our experiments in section 4 show that this acceleration provides significant reductions in computing cost. Such a simplification can be used effectively in high-speed in-line measurement, where the accuracy requirements are lower.

3.2.3 High-resolution surface reconstruction

With the global and local lighting conditions determined, refinement of the input surface map $Z_0$ can be conducted by minimising the fusion objective function in Eq. (6). This is a non-linear problem, because the surface normal estimation in Eqs. (9) and (10) involves a non-linear normalisation operation. To solve the minimisation in a linear manner, we rewrite the recomposition term $R$ in Eq. (6) as

$$R = \alpha\left(\omega m_1 z_x + \omega m_2 z_y + \omega m_3 + m_4\right). \tag{16}$$

If the non-linear coefficient $\omega$ is fixed, the intensity image prediction in Eq. (16) becomes linear with respect to $Z$, and Eq. (6) becomes a linear least-squares problem. Therefore, we iteratively calculate the non-linear term $\omega^k$ and estimate the refined height map $Z^{k+1}$ with a linear least-squares method, as applied elsewhere [22], until a specified tolerance is achieved. The iterative algorithm is described in Fig. 5(b). Figure 6(c) shows an output refined height map, which exhibits slightly more detail than the input map in Fig. 4(a).
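The structure of this iteration (Fig. 5(b)) can be sketched as follows; solve_linear_z is a hypothetical placeholder for the linear least-squares solve of Eq. (6) with ω frozen:

```python
import numpy as np

def refine_height_map(Z0, I, alpha, m, solve_linear_z, tol=1e-4, max_iter=20):
    """Alternate between freezing omega and solving the linearised Eq. (6)."""
    Z = Z0.copy()
    for _ in range(max_iter):
        zy, zx = np.gradient(Z)
        omega = 1.0 / np.sqrt(1.0 + zx**2 + zy**2)      # freeze the non-linearity
        Z_new = solve_linear_z(Z0, I, alpha, m, omega)  # linear LSQ in Z, Eq. (16)
        if np.max(np.abs(Z_new - Z)) < tol:             # specified tolerance met
            return Z_new
        Z = Z_new
    return Z
```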

4. Experiments

4.1 Measurement of structured surface with simulated lighting

4.1.1 MEMS surface

First, we simulated an ideal remote point-source lighting environment (altitude: π/4, azimuth: π/4) to validate the proposed algorithm. In this simulation, the input low-resolution surface map was obtained from a low-pass filtered (Gaussian, with a cut-off of 2.5 μm) version of a high-resolution surface map measured by CSI [36]. The corresponding input high-resolution intensity image was simulated using Eq. (2), combining Lambertian ($A = \rho = 1$) and Gaussian ($\gamma = 1$, $m^2 = 0.05$) reflectance terms. For a MEMS pressure sensor surface, the input data are presented in Figs. 7(b) and 7(c).

Fig. 7 Resolution enhancement results for a MEMS surface with simulated oblique lighting. (a) Ground-truth MEMS surface for the test; (b) smoothed low-resolution height map input Z0 (RMS error: 0.20 μm); (c) corresponding high-resolution image input I; (d) intrinsically recomposed intensity image R = αS; (e) refined high-resolution surface map Zk (RMS error: 0.15 μm). All grey scales are in the range 0–255. All the pseudo-colour height maps share the same colour bar.

By assigning the fusion parameters as $\tau = 0.05$, $\sigma^2 = 0.05^2$, $\mu_\alpha = 0.1$, $\mu_{Z1} = 0.004$ and $\mu_{Z2} = 0.00075$ (in our calculation, the input low-resolution height map and its corresponding high-resolution image were normalised to [0, 1]), and applying the fusion algorithm of section 3.2, we obtain the intrinsically recomposed intensity image $R$ shown in Fig. 7(d). From the information difference between $R$ and the input high-resolution image $I$, a refined surface height map can be calculated, as shown in Fig. 7(e). Examination of the cross-sectional profiles in Fig. 8 shows that the refined map agrees better with the ground-truth map than the low-resolution input. A root-mean-square (RMS) height error is defined here as [25,37]

$$\varepsilon_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{n}\left(z_{1,n} - z_{2,n}\right)^2}, \tag{17}$$
where $z_{1,n}$ and $z_{2,n}$ represent respectively the $n$-th discrete value of the two height maps being compared, and $N$ is the total number of samples. With respect to the ground-truth data, the RMS height error shows that the resolution enhancement leads to a 35% error reduction in this simulated MEMS surface case.
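In NumPy, Eq. (17) is a one-line computation (assuming the two maps share the same sampling grid):

```python
import numpy as np

def rms_height_error(z1, z2):
    """Eq. (17): RMS height difference between two co-sampled height maps."""
    return np.sqrt(np.mean((np.asarray(z1) - np.asarray(z2)) ** 2))
```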

Fig. 8 Magnified cross-section profiles of the output map at x = 300 μm.

4.1.2 Star pattern

As the star pattern is a popular resolution artefact for the calibration of surface topography measuring instruments [10], a further demonstration on a star pattern was carried out. In this test, the source surface topography data were obtained using a lateral distortion-corrected CSI [38], and the spatial resolution enhancement was evaluated with the lateral period limit (LPL) defined in the draft ISO standard [39,40], i.e. the spatial period at which the height response of an instrument falls to 50% (see [10] for details of how to calculate the LPL using a star pattern).

Similar to the case in section 4.1.1, the input low-resolution star map shown in Fig. 9(a) was initially filtered using a low-pass Gaussian filter with a cut-off of 0.8 μm. With the fusion parameters assigned as $\tau = 0.05$, $\sigma^2 = 0.05^2$, $\mu_\alpha = 0.1$, $\mu_{Z1} = 0.4$ and $\mu_{Z2} = 0.00075$, our experiment under a simulated oblique lighting (altitude: π/4, azimuth: π/4) environment shows that the topography spatial resolution can be improved from 23.3 μm to 7.4 μm (using the LPL as the evaluation metric) by fusing with a high-resolution intensity image. Intermediate and final results of the fusion process are presented in Figs. 9 and 10. Note that the height response curves in Fig. 10 were extracted by subtracting two radial cross-section profiles, respectively extracted from the lower and upper plateaus of the star along the X direction (±5° to the X-axis). It should be pointed out that our refined fusion result in Fig. 10 conforms with the ground-truth data in the central area (within the range of the red arrows). However, away from the centre, e.g. within 45 μm to 70 μm of the abscissa, the fusion result presents lower reconstruction accuracy than the low-resolution input. This could be caused by the ambiguity issue of SFS: SFS leads to ambiguous reconstruction of a step surface because the upper and lower plateaus of a step have the same slope, and thus the same intensity.

Fig. 9 Resolution enhancement results for a star pattern with simulated oblique lighting. (a) Input low-resolution height map Z0; (b) input high-resolution image I; (c) recomposed intensity image R; (d) output refined high-resolution height map Zk; (e) top view of the input map; and (f) top view of the output map. All grey scales are in the range 0–255. All the pseudo-colour height maps share the same colour bar.

Fig. 10 Analysis of height response curves and LPLs along the X direction of the star map.

4.2 Coin surface measurement with a vision-assisted CMM

An experiment with a vision-assisted coordinate measuring machine (CMM) was conducted to verify the performance of the proposed algorithm. In this experiment, the surface of a one-Chinese-Yuan coin was measured with a 1 mm diameter touch probe and a vision camera with partial ring lighting, as shown in Fig. 11. The touch probe provided the low-resolution height map input, while the vision system provided the high-resolution intensity image input for the resolution enhancement computation. The touch probe and vision sensor were first calibrated in terms of their relative position and orientation using a calibrated ring gauge [29,41].

Fig. 11 Setup of the vision-assisted CMM measurement system.

In this experiment, a high-resolution intensity image of the coin surface $I$, with 444 × 444 pixels, was first captured, as shown in Fig. 12(b). For the low-resolution height data, a 50 × 50 grid of sample points was captured over the same measurement area as the high-resolution image (see the pink points in Fig. 12(a)). To match the resolution of the low-resolution data to the high-resolution image, Delaunay linear interpolation was conducted and a low-resolution height map $Z_0$ with the same sample size as the high-resolution image was obtained, as shown in Fig. 12(a).
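A sketch of this resolution-matching step is shown below; scipy's griddata with method='linear' performs Delaunay-based linear interpolation, and the 444 × 444 grid size is taken from the experiment above:

```python
import numpy as np
from scipy.interpolate import griddata

def upsample_cmm_points(x, y, z, shape=(444, 444)):
    """Interpolate scattered CMM samples (x, y, z) onto the image pixel grid."""
    gx, gy = np.meshgrid(np.linspace(x.min(), x.max(), shape[1]),
                         np.linspace(y.min(), y.max(), shape[0]))
    return griddata((x, y), z, (gx, gy), method='linear')  # Delaunay linear
```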

Fig. 12 Resolution enhancement experiment results for a coin surface measured with a vision-assisted CMM. (a) Low-resolution height map Z0 (RMS error = 12.81 μm) and the 50 × 50 CMM sample points; (b) input high-resolution image I; (c) recomposed intensity image R; (d) output refined high-resolution height map Zk (RMS error = 12.28 μm); (e) top view of the input map; and (f) top view of the output map. All grey scales are in the range 0–255. All the pseudo-colour height maps share the same colour bar.

By assigning the fusion parameters as $\tau = 0.05$, $\sigma^2 = 0.05^2$, $\mu_\alpha = 0.1$, $\mu_{Z1} = 0.0018$ and $\mu_{Z2} = 0.005$, and following the intrinsic image decomposition-based lighting estimation shown in Fig. 12(c) and the fusion computation, a refined high-resolution height map was obtained, as presented in Fig. 12(d). The accuracy was evaluated by comparing the fusion results with those from a contact stylus surface topography measuring instrument. By matching the CMM data to the stylus data using scale-invariant feature transform (SIFT)-based coarse matching followed by iterative closest point (ICP) fine matching, i.e. the SIFT-ICP algorithm [42,43], the height difference of corresponding points between the stylus data (assumed here to be the ground truth) and the CMM data was calculated. The RMS height errors show that the fusion output reduces the measurement error by about 4.4%.

A further comparison of the top views, cross-section views and fast Fourier transform (FFT) analyses of the input low-resolution map and the output high-resolution map is shown in Figs. 12(e), 12(f) and 13, from which we can clearly see that the periodic surface discontinuities of the low-resolution height map, caused by the low sampling rate and linear interpolation, are removed by the proposed resolution enhancement algorithm. The output high-resolution map shows uniform surface continuity and refined detail. The FFT analysis in Fig. 13 shows that the input low-resolution and output high-resolution maps have coincident spatial frequency components at low spatial frequencies, but at high spatial frequencies, the resolution-enhanced height map contains more information than the low-resolution input map.

Fig. 13 Coin surface cross-section profiles (top) at X = 0 and their FFTs (bottom).

4.3 Analysis of accuracy and time cost

Accuracy and computational time are important factors for in-line measurement algorithms [44]. In this section, we compare the fusion accuracy and computational efficiency of our proposed algorithm with the state-of-the-art real-time algorithm proposed by Or-El et al. [22], using the three specimen surfaces described above. We selected Or-El's algorithm for comparison because it has been used successfully in real-time computer vision, with a processing rate of approximately ten frames per second (for 640 × 480 images) using the C language and GPU acceleration.

We have compared the resolution enhancement results of our method (as shown in Figs. 7-13) and Or-El’s method, and found that the difference is negligible because the same objective function in Eq. (6) is used in both methods. Therefore, the results of Or-El’s method are not presented here.

The difference in our testing environment is that we used an Intel Core i5 2.4 GHz CPU instead of GPU acceleration, and the coding language was MATLAB. The 444 × 444 pixel coin surface case was analysed with serial computation, but for the MEMS and star cases, which each have a million sample points, computation parallelised over twelve workers with 128 × 128 block partitioning was used for acceleration (Or-El's algorithm was accelerated in the same way in this test), considering that the required matrix inversions demand high complexity [45]. Statistics of the algorithm performance over ten repeated numerical runs are summarised in Table 1.

Table 1. Statistics of time cost and data accuracy of different algorithms*

For the accuracy analysis, the RMS error (RMSE), as defined in Eq. (17) [25,27,37], and the recently highlighted universal quality index (UQI) [46] are used as the evaluation measures, i.e.

$$\varepsilon_{\mathrm{UQI}} = \frac{4\sigma_{12}\,\bar{z}_1\bar{z}_2}{\left(\sigma_1^2 + \sigma_2^2\right)\left(\bar{z}_1^2 + \bar{z}_2^2\right)}, \tag{18}$$
where $\bar{z}_k$ is the mean height of the $k$-th map, and $\sigma_1^2$, $\sigma_2^2$ and $\sigma_{12}$ represent respectively the variances of the two height maps and their covariance. RMSE is a height-value-related parameter, while UQI is a hybrid parameter designed to be more sensitive to lateral correlation and distortion. UQI is normalised within [−1, 1], and the best case, UQI = 1, occurs only when $Z_1$ and $Z_2$ are exactly equal.
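A sketch of Eq. (18) in NumPy, with means, variances and covariance computed over all pixels:

```python
import numpy as np

def uqi(z1, z2):
    """Eq. (18): universal quality index between two co-sampled height maps."""
    z1, z2 = np.ravel(z1), np.ravel(z2)
    m1, m2 = z1.mean(), z2.mean()
    v1, v2 = z1.var(), z2.var()           # variances sigma_1^2, sigma_2^2
    cov = np.mean((z1 - m1) * (z2 - m2))  # covariance sigma_12
    return 4.0 * cov * m1 * m2 / ((v1 + v2) * (m1**2 + m2**2))
```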

From Table 1, we can see that the proposed algorithm reduces the computational cost by 35% compared to Or-El's algorithm [22], with a similar level of accuracy. Our accelerated version of the algorithm with α = 1 (see the last column of Table 1) shows that the computational cost can be reduced by approximately 80% compared to Or-El's algorithm [22]. These statistics are consistent with the time complexity of each algorithm. Given an input image with $N$ pixels, Or-El's algorithm has complexity $O(M(N)) + 3O(N^3)$, where $M$ accounts for the complexity of the neighbourhood weight estimation with Eq. (14), and the three $O(N^3)$ terms are related to the two steps of the local lighting estimation and the final reconstruction, respectively. Our algorithm and its fast version simplify the local lighting estimation and have complexities $O(M(N)) + 2O(N^3)$ and $O(N^3)$, respectively. This improvement indicates that the proposed algorithm could be used for real-time resolution enhancement in surface measurement applications at a video rate of approximately fifty frames per second, if the C language and GPU acceleration were used.

The RMSE and UQI analyses show that, for the simulation cases, the proposed algorithm leads to significantly resolution-enhanced results similar to those of Or-El's algorithm: the RMSE is reduced by 23% from that of the original data, and the UQI is improved by 0.2 to 0.5 (range normalised to [−1, 1]). For these simulation cases, the accelerated algorithm with α = 1 shows both reduced time cost and improved fusion accuracy. However, it should be noted that this occurs only in simulations or under well-controlled lighting conditions.

For the coin measurement, the results show that the accuracy of the proposed algorithm is similar to that of Or-El's algorithm. However, both fusion methods show limited quality enhancement: the RMSE is reduced by only 4.4% and the UQI is improved by only 0.046. This is probably due to the real lighting conditions: a 1/4 partial ring light has more complex local lighting variations than the simulated cases (a remote point source), and these local complexities degrade the fusion accuracy in local detail refinement. This effect can also be seen in the accelerated algorithm test, in which the uniform local lighting, α = 1, significantly reduces the computing time but also degrades the fusion accuracy.

5. Conclusion

A resolution enhancement solution and algorithm are proposed in this paper, based on fusing a low-resolution height map with its corresponding high-resolution intensity image under an arbitrary natural lighting condition (with the exception of co-axial lighting). Simulations and experiments have demonstrated the performance of the proposed method in terms of fusion accuracy and computational time. Compared to state-of-the-art algorithms, the proposed algorithm can be significantly faster (35% to 80% faster) while retaining a similar level of accuracy. More testing, both in simulation and experiment, is required to establish the sensitivity of the algorithm to different lighting conditions. In future work, we expect to apply this technique to in-line high-dynamic-range measurement of large-area micro-structured surfaces and to case-study measurement scenarios, e.g. fusion of 3D surface topography data with scanning electron microscope or differential interference contrast microscope images.

Funding

National Natural Science Foundation of China (NSFC) (51705178, 51875227); European Union’s Horizon 2020 Research and Innovation Programme (734174); Key Grant Project of Science and Technology Innovation Programme of Hubei Province (2017AAA001).

References

1. X. J. Jiang and D. J. Whitehouse, “Technological shifts in surface metrology,” CIRP. Ann. Manuf. Technol. 61(2), 815–836 (2012). [CrossRef]  

2. C. J. Evans and J. B. Bryan, ““Structured”, “Textured” or “Engineered” Surfaces,” CIRP. Ann. Manuf. Technol. 48(2), 541–556 (1999). [CrossRef]  

3. A. Malshe, K. Rajurkar, A. Samant, H. N. Hansen, S. Bapat, and W. Jiang, “Bio-inspired functional surfaces for advanced applications,” CIRP. Ann. Manuf. Technol. 62(2), 607–628 (2013). [CrossRef]  

4. C. A. Brown, H. N. Hansen, X. J. Jiang, F. Blateyron, J. Berglund, N. Senin, T. Bartkowiak, B. Dixon, G. Le Goïc, Y. Quinsat, W. J. Stemp, M. K. Thompson, P. S. Ungar, and E. H. Zahouani, “Multiscale analyses and characterizations of surface topographies,” CIRP. Ann. Manuf. Technol. 67(2), 839–862 (2018). [CrossRef]  

5. C. F. Cheung, M. Liu, R. K. Leach, X. Feng, and C. Zhao, “Hierarchical-information-based characterization of multiscale structured surfaces,” CIRP. Ann. Manuf. Technol. 67(1), 539–542 (2018). [CrossRef]  

6. R. K. Leach, C. W. Jones, B. Sherlock, and A. Krysinski, “The high dynamic range surface metrology challenge,” in Proceedings of the 28th Annual Meeting of the American Society for Precision Engineering (ASPE, 2013), pp.149–152.

7. R. K. Leach and B. Sherlock, “Applications of super-resolution imaging in the field of surface topography measurement,” Surf. Topogr.- Metrol. Prop. 2(2), 023001 (2013). [CrossRef]  

8. R. K. Leach, C. Evans, L. He, A. Davies, A. Duparré, A. Henning, C. W. Jones, and D. O’Connor, “Open questions in surface topography measurement: a roadmap,” Surf. Topogr.- Metrol. Prop. 3(1), 013001 (2015). [CrossRef]  

9. C. Chen, J. Wang, X. Liu, W. Lu, H. Zhu, and X. J. Jiang, “Influence of sample surface height for evaluation of peak extraction algorithms in confocal microscopy,” Appl. Opt. 57(22), 6516–6526 (2018). [CrossRef]   [PubMed]  

10. C. L. Giusca and R. K. Leach, “Calibration of the scales of areal surface topography measuring instruments: part 3. Resolution,” Meas. Sci. Technol. 24(10), 105010 (2013). [CrossRef]  

11. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011). [CrossRef]   [PubMed]  

12. M. Bates, S. A. Jones, and X. Zhuang, “Stochastic optical reconstruction microscopy (STORM): a method for superresolution fluorescence imaging,” Cold Spring Harb. Protoc. 2013(6), 75143 (2013). [CrossRef]   [PubMed]  

13. L. Yan, X. Wang, L. Zheng, X. Zeng, H. Hu, and X. Zhang, “Experimental study on subaperture testing with iterative triangulation algorithm,” Opt. Express 21(19), 22628–22644 (2013). [CrossRef]   [PubMed]  

14. Z. Lei, X. Liu, L. Zhao, L. Chen, Q. Li, T. Yuan, and W. Lu, “A novel 3D stitching method for WLI based large range surface topography measurement,” Opt. Commun. 359, 435–447 (2016). [CrossRef]  

15. M. Y. Liu, C. F. Cheung, C. H. Cheng, R. Su, and R. K. Leach, “A Gaussian process and image registration based stitching method for high dynamic range measurement of precision surfaces,” Precis. Eng. 50, 99–106 (2017). [CrossRef]  

16. J. Wang, L. Pagani, L. Zhou, X. Liu, W. Lu, R. K. Leach, and X. Jiang, “Uncertainty-guided intelligent sampling strategy for high-efficiency surface measurement via free-knot B-spline regression modelling,” Precis. Eng. (in press), doi: 10.1016/j.precisioneng.2018.09.002 (2018).

17. S. Preibisch, S. Saalfeld, and P. Tomancak, “Globally optimal stitching of tiled 3D microscopic image acquisitions,” Bioinformatics 25(11), 1463–1465 (2009). [CrossRef]   [PubMed]  

18. Y. Yu and H. Peng, “Automated high speed stitching of large 3D microscopic images,” in Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (IEEE, 2011), pp. 238–241. [CrossRef]  

19. A. Bria and G. Iannello, “TeraStitcher - A tool for fast automatic 3D-stitching of teravoxel-sized microscopy images,” BMC Bioinformatics 13(1), 316 (2012). [CrossRef]   [PubMed]  

20. J.-D. Durou, M. Falcone, and M. Sagona, “Numerical methods for shape-from-shading: A new survey with benchmarks,” Comput. Vis. Image Underst. 109(1), 22–43 (2008). [CrossRef]  

21. Y. Han, J. Y. Lee, and I. S. Kweon, “High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 1617–1624. [CrossRef]  

22. R. Or-El, G. Rosman, A. Wetzler, R. Kimmel, and A. M. Bruckstein, “RGBD-fusion: Real-time high precision depth recovery,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 5407–5416.

23. R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah, “Shape from Shading: A Survey,” IEEE Trans. Pattern Anal. Mach. Intell. 21(8), 690–706 (1999). [CrossRef]  

24. R. K. Leach, D. Sims-Waterhouse, F. Medeossi, E. Savio, S. Carmignato, and R. Su, “Fusion of photogrammetry and coherence scanning interferometry data for all-optical coordinate measurement,” CIRP. Ann. Manuf. Technol. 67(1), 599–602 (2018). [CrossRef]  

25. Y. Yin, M. J. Ren, and L. Sun, “Dependant Gaussian processes regression for intelligent sampling of freeform and structured surfaces,” CIRP. Ann. Manuf. Technol. 66(1), 511–514 (2017). [CrossRef]  

26. B. M. Colosimo, M. Pacella, and N. Senin, “Multisensor data fusion via Gaussian process models for dimensional and geometric verification,” Precis. Eng. 40, 199–213 (2015). [CrossRef]  

27. J. Wang, L. Pagani, R. K. Leach, W. Zeng, B. M. Colosimo, and L. Zhou, “Study of weighted fusion methods for the measurement of surface geometry,” Precis. Eng. 47, 111–121 (2017). [CrossRef]  

28. S. K. Ramasamy and J. Raja, “Performance evaluation of multi-scale data fusion methods for surface metrology domain,” J. Manuf. Syst. 32(4), 514–522 (2013). [CrossRef]  

29. A. Weckenmann, X. Jiang, K. D. Sommer, U. Neuschaefer-Rube, J. Seewig, L. Shaw, and T. Estler, “Multisensor data fusion in dimensional metrology,” CIRP. Ann. Manuf. Technol. 58(2), 701–721 (2009). [CrossRef]  

30. J. Wang, R. K. Leach, and X. Jiang, “Review of the mathematical foundations of data fusion techniques in surface metrology,” Surf. Topogr.- Metrol. Prop. 3(2), 023001 (2015). [CrossRef]  

31. B. K. P. Horn, Shape from shading: a method for obtaining the shape of a smooth opaque object from one view (Massachusetts Institute of Technology, 1970).

32. R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman, “Ground truth dataset and baseline evaluations for intrinsic image algorithms,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2009), pp. 2335–2342. [CrossRef]  

33. A. S. Glassner, An Introduction to Ray Tracing (Academic, 1989).

34. M. K. Johnson and E. H. Adelson, “Shape estimation in natural illumination,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 2553–2560.

35. J. T. Barron and J. Malik, “Shape, albedo, and illumination from a single image of an unknown object,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 334–341. [CrossRef]  

36. C. Hu, X. Liu, W. Yang, W. Lu, N. Yu, and S. Chang, “Improved zero-order fringe positioning algorithms in white light interference based atomic force microscopy,” Opt. Lasers Eng. 100, 71–76 (2018). [CrossRef]  

37. L. Sun, M. Ren, and Y. Yin, “Domain-specific Gaussian process-based intelligent sampling for inspection planning of complex surfaces,” Int. J. Prod. Res. 55(19), 5564 (2017). [CrossRef]  

38. P. Ekberg, R. Su, and R. Leach, “High-precision lateral distortion measurement and correction in coherence scanning interferometry using an arbitrary surface,” Opt. Express 25(16), 18703–18712 (2017). [CrossRef]   [PubMed]  

39. ISO/FDIS 25178–600, Geometrical product specifications (GPS)–Surface texture: Areal–Part 600: Metrological characteristics for areal-topography measuring methods (International Organization for Standardization, 2017).

40. R. K. Leach, C. L. Giusca, H. Haitjema, C. Evans, and X. Jiang, “Calibration and verification of areal surface texture measuring instruments,” CIRP. Ann. Manuf. Technol. 64(2), 797–813 (2015). [CrossRef]  

41. Y. Huang, X. Qian, and S. Chen, “Multi-sensor calibration through iterative registration and fusion,” Comput. Aided Des. 41(4), 240–255 (2009). [CrossRef]  

42. X. Jiang, X. Zhang, and P. J. Scott, “Template matching of freeform surfaces based on orthogonal distance fitting for precision metrology,” Meas. Sci. Technol. 21(4), 045101 (2010). [CrossRef]  

43. M. Li, W. Lu, J. Wang, B. Lu, P. Xiao, and Y. Wang, “A feature-clustering-based three-dimensional registration algorithm for structured surface measurement,” Surf. Topogr.- Metrol. Prop. (under review, 2018).

44. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

45. G. H. Golub and C. F. Van Loan, Matrix Computations (4th Ed.) (Johns Hopkins University, 2013).

46. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett. 9(3), 81–84 (2002). [CrossRef]  
