
Distortion spot correction and center location based on deep neural network and MBAS in measuring large curvature aspheric optical element


Abstract

Large curvature aspheric optical elements are widely used in visual systems, but their surface-shape measurement is difficult because the required accuracy is very high. When the self-developed multi-beam angle sensor (MBAS) is used to measure large curvature aspheric optical elements, accuracy is degraded by spot distortion. We therefore propose a scheme combining a distorted spot correction neural network (DSCNet) with the Gaussian fitting method to improve the detection accuracy of the distorted spot center. We develop a spot discrimination method to determine the spot regions in multi-spot images; the spot discrimination threshold is obtained from the quantitative distribution of pixels in the connected domains. We design a DSCNet, which corrects the distorted spot to a Gaussian spot, to extract the central information of distorted spot images through multiple pooling operations. The experimental results demonstrate that the DSCNet effectively corrects distorted spots and extracts spot centers at the sub-pixel level, which improves the measurement accuracy of the MBAS. The standard deviations for plano-convex lenses with curvature radii of 500 mm, 700 mm and 1000 mm measured with the proposed method are 0.0112 µm, 0.0086 µm and 0.0074 µm, respectively.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

At present, shape measuring machines are widely used in high-precision optical element detection, especially in the machining of optical components with different curvatures, where shape information detected in real time is fed back to the system so that the next machining pass can be adjusted [1–3]. In previous studies, we independently developed a multi-beam angle sensor (MBAS) for measuring three-dimensional topography and used it to measure roundness [4] and flatness [5], as well as large curvature aspheric optical elements [6,7]. In earlier experiments, we used the MBAS to measure a cylindrical surface with a curvature radius of 519 mm, with a standard deviation of 0.037 µm. However, when measuring aspheric optical elements with large curvature variation, the center of the distorted spot cannot be extracted accurately, which corrupts the angle difference. Consequently, the 3D shape cannot be precisely reconstructed, and the measurement range is also limited.

Many methods have been proposed for spot center extraction, including the centroid method (CM), the sub-pixel grayscale centroid extraction method (SPGCEM), the circle fitting method (CFM), the ellipse fitting method (EFM) and the Gaussian fitting method (GFM) [8–15]. The CM is simple to implement, but its precision is low [8]. The SPGCEM is efficient and widely used, but its accuracy is also limited [9]; to address this, Dong et al. proposed an image denoising method based on low-rank and sparse decomposition (LRSD) to improve its accuracy [10]. The CFM and EFM fit the edge of the spot, so their accuracy is strongly affected by edge quality [11,12]. In contrast, the GFM makes full use of the energy distribution of the spot, enabling high accuracy in center extraction, but it is only suitable for spots with a Gaussian distribution [13,14]. None of these methods can handle irregular, severely distorted spots, which results in a decrease in MBAS measurement accuracy.

Image distortion arises in many vision applications, such as biomedical imaging [15], and is mostly caused by the imaging lens. In our system, however, the distortion of the spot depends on the curvature of the measured object and changes as the object changes, so we cannot design a simple one-to-one model to correct the spot.

To solve the low accuracy of the MBAS in measuring large curvature aspheric optical elements caused by spot distortion, we design a high-precision center location method that corrects distorted spots with the DSCNet. In this method, a spot discrimination method is proposed to determine the spot regions in multi-spot images, with the discrimination threshold acquired from the quantitative distribution of pixels in the connected domains. The proposed DSCNet uses multiple average pooling operations to expand the effective receptive field; it removes redundant edge information while retaining the center position information of the spot, thereby correcting the distorted spot into a Gaussian spot. First, the required spots in the multi-spot image acquired by the MBAS are identified and divided into several independent target areas. Then the DSCNet is applied to correct each distorted spot into a Gaussian spot. Finally, the Gaussian fitting method is applied to extract the center coordinates of the corrected spot. For distorted spots, the proposed method has much higher extraction precision than the CM, CFM and GFM, and achieves center coordinate extraction at the sub-pixel level. The experimental results show that the proposed method enables the MBAS to measure plano-convex lenses with different curvatures and improves its measurement accuracy.

2. Principle

2.1 System framework and process

Figure 1 shows the structure and optical path of the MBAS. First, the laser beam passes through the condenser lens (CL) and a pinhole and is collimated by a collimator. The beam is then reflected by the beam splitter (BS) and projected onto the workpiece surface through a cylindrical lens. The beam reflected from the workpiece surface passes through the BS and is focused on a microlens array, which divides it into several beamlets. The spot image is recorded by a CMOS camera mounted on the vertical axis. The angle difference between two points on the workpiece surface can be calculated from the distance between the light spots.

Fig. 1. The structure and optical path of the MBAS.

According to the relationship between the Fourier coefficients of the angle difference and those of the workpiece surface profile, the three-dimensional contour of the workpiece surface can be reconstructed directly [7]. However, when the curvature of the workpiece surface changes sharply, the spot collected by the MBAS is distorted and the spot-center positioning error grows. This positioning error degrades the angle difference and hence the accuracy of the 3D reconstruction, so accurate extraction of the distorted spot center is essential.

The flowchart of the proposed method is shown in Fig. 2. During preprocessing, the array of spots in the images collected by the CMOS camera is located preliminarily, and each spot and its neighborhood are cut into a square sub-image containing exactly one spot. In addition, a group of simulated distorted spots paired with Gaussian spots is generated with different levels of added noise. During model training, the Gaussian spot images and noisy distorted spot images are fed into the DSCNet model, optimized with the Adam optimizer, and a reliable and stable model is obtained by cross-validation. During spot center extraction, all the spot sub-images obtained by segmentation are input to the trained DSCNet model, yielding a set of corrected Gaussian spot images. Finally, the GFM is used to calculate the center of the spot in each sub-image, from which the central coordinates of the spot in the original image are obtained.

Fig. 2. The flowchart of the proposed method.

2.2 Spot area recognition and segmentation

First, the Otsu method is used to find an appropriate threshold to binarize the image [16]. Then all eight-connected domains are found [17]. The minimum pixel-count threshold for a spot-connected domain is determined as follows. The number of pixels in each connected domain is counted and sorted in descending order; the descending histogram of pixel counts in the connected domains is shown in Fig. 3(a). The backward difference of the histogram is then calculated by

$$T_k = L(\Omega_k) - L(\Omega_{k+1}),$$
where $L(\Omega_k)$ represents the pixel count of the $k$-th connected domain ($\Omega_k$).

Fig. 3. Pixel number in the connected domain: (a) Descending histogram, (b) Differential histogram.

Figure 3(b) shows the difference histogram. The maximum difference value ($T_m$) is used to calculate the threshold ($\delta$) of the spot-connected domain, which determines whether a connected domain is a spot:

$$\delta = \frac{L(\Omega_m) + L(\Omega_{m+1})}{2}.$$

A connected domain with pixel count $L(\Omega_k)$ greater than $\delta$ is considered a spot, and its estimated center coordinates are given by

$$\hat{x}_k = \frac{1}{L(\Omega_k)}\sum_{n=1}^{L(\Omega_k)} x_n(\Omega_k), \qquad \hat{y}_k = \frac{1}{L(\Omega_k)}\sum_{n=1}^{L(\Omega_k)} y_n(\Omega_k),$$

where $(\hat{x}_k, \hat{y}_k)$ denotes the estimated center coordinate of the $k$-th spot-connected domain, and $x_n(\Omega_k)$ and $y_n(\Omega_k)$ respectively represent the abscissa and ordinate of the $n$-th pixel of the $k$-th spot-connected domain.
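For concreteness, the following is a minimal NumPy/SciPy sketch of this discrimination step (Eqs. (1)–(3)); the function name and the use of scikit-image's Otsu implementation are our own choices, not the authors' code.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def discriminate_spots(img):
    """Sketch of Section 2.2: Otsu binarization, 8-connected labeling,
    and the backward-difference threshold of Eqs. (1)-(2)."""
    binary = img > threshold_otsu(img)                    # Otsu binarization [16]
    labels, num = ndimage.label(binary, np.ones((3, 3)))  # 8-connected domains [17]
    sizes = np.bincount(labels.ravel())[1:]               # L(Omega) per domain
    order = np.argsort(sizes)[::-1]                       # descending sort, Fig. 3(a)
    L = sizes[order].astype(float)
    T = L[:-1] - L[1:]                                    # Eq. (1): backward difference
    m = int(np.argmax(T))                                 # largest jump T_m
    delta = (L[m] + L[m + 1]) / 2.0                       # Eq. (2): spot threshold
    centers = []
    for lab in order[: m + 1] + 1:                        # domains with L > delta
        ys, xs = np.nonzero(labels == lab)
        centers.append((xs.mean(), ys.mean()))            # Eq. (3): pixel-mean center
    return delta, centers
```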

Spot segmentation is based on the estimated center of each connected domain, and all single-spot sub-images are cut out. As displayed in Fig. 4, the cyan box is the dividing line, and each required spot is divided into an independent sub-image. In this way, the multi-spot center extraction problem is transformed into the center extraction problem of a single spot, which greatly reduces the complexity of spot correction and precise localization.
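Segmentation then reduces to cropping a fixed window around each estimated center; a short sketch follows (the 128-pixel window matches the sub-image resolution reported in Section 4.2, and the helper name is hypothetical).

```python
def crop_spot_subimages(img, centers, size=128):
    """Cut a size x size sub-image around each estimated center so that
    each sub-image contains exactly one spot (the cyan boxes in Fig. 4)."""
    half = size // 2
    subs = []
    for cx, cy in centers:
        x0 = min(max(int(round(cx)) - half, 0), img.shape[1] - size)
        y0 = min(max(int(round(cy)) - half, 0), img.shape[0] - size)
        # keep the (x0, y0) offset so a center found in the sub-image can be
        # mapped back to coordinates in the original multi-spot image
        subs.append((img[y0:y0 + size, x0:x0 + size], (x0, y0)))
    return subs
```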

Fig. 4. Segmentation of multi-spot image.

2.3 Spot correction based on DSCNet

Since the distortion is related to the topography of the measured object, the shape and degree of spot distortion vary greatly between objects. Therefore, we cannot establish a simple one-to-one model between distorted and Gaussian spots. To solve this problem, we propose a deep learning model whose mathematical form can be expressed as an implicit function

$$\hat{T} = G(D, \Psi),$$

where $\Psi$ is the parameter set of the model, $\hat{T}$ denotes the output image, $D$ is the input distorted spot image, and $G(\cdot)$ represents an implicit function that maps the input distorted spot to the output Gaussian spot. We propose a DSCNet to learn this mapping. The network is trained on a simulated dataset $\{(D_k, S_k)\}_{k=1}^{K}$, where $K$ represents the number of training samples, $D_k$ is the simulated distorted spot (DS), and $S_k$ denotes the Gaussian spot (GS) with the same central coordinates as $D_k$.

The architecture of the proposed DSCNet is shown in Fig. 5. It comprises convolutional, pooling and upsampling layers. The first layer is the input layer, which takes the distorted spot image. The second and third layers are convolutional layers; all convolutional layers use a small $3 \times 3$ kernel with stride 1, and multiple $3 \times 3$ convolutional layers are stacked instead of a single layer with a large kernel to expand the effective receptive field [18]. The fourth layer performs average pooling over a $2 \times 2$ pixel window with stride 2, which removes redundant information from the spot image and compresses image features; batch normalization is added after the pooling to speed up the convergence of the loss function. The fifth to fifteenth layers are structurally similar to the second to fourth layers and jointly realize the extraction of the spot center. The sixteenth to eighteenth layers are transition convolutional layers. The nineteenth to thirty-first layers consist of upsampling and convolutional layers, which restore the target image and reduce the channel count. The last two convolutional layers output a Gaussian image with the same center as the input spot. The trainable parameters are mainly the weights and biases of the convolutional layers, totaling 2.26 M for the whole network.
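A PyTorch sketch of an encoder-decoder in the spirit of Fig. 5 is given below; the channel widths and block counts are illustrative assumptions, since only the layer types, $3 \times 3$ kernels, $2 \times 2$ average pooling and the 2.26 M parameter total are specified above.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two stacked 3x3 convolutions (stride 1) followed by ReLU, instead of
    # one large kernel, to expand the effective receptive field [18].
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class DSCNetSketch(nn.Module):
    """Encoder-decoder sketch of the DSCNet in Fig. 5 (channel widths and
    depth are illustrative, not the authors' exact configuration)."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.down1 = nn.Sequential(nn.AvgPool2d(2), nn.BatchNorm2d(32))  # AP + BN
        self.enc2 = conv_block(32, 64)
        self.down2 = nn.Sequential(nn.AvgPool2d(2), nn.BatchNorm2d(64))
        self.mid = conv_block(64, 64)          # transition convolutions
        self.up1 = nn.Upsample(scale_factor=2)
        self.dec1 = conv_block(64, 32)
        self.up2 = nn.Upsample(scale_factor=2)
        self.dec2 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 3, padding=1)  # corrected Gaussian spot

    def forward(self, d):
        x = self.down1(self.enc1(d))
        x = self.down2(self.enc2(x))
        x = self.mid(x)
        x = self.dec1(self.up1(x))
        x = self.dec2(self.up2(x))
        return self.out(x)
```

For a $128 \times 128$ input sub-image, this sketch outputs a $128 \times 128$ image, matching the spot sub-image size used later in Section 4.2.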

Fig. 5. The proposed DSCNet architecture. Cv 3 × 3, convolution with filter size 3 × 3; RL, rectified linear unit; AP, average pooling, stride (2, 2); BN, batch normalization; US, upsample, factor 2.

2.4 Gaussian fitting center extraction

After the proposed DSCNet correction, the shape of the spot is close to a Gaussian spot. Since the network does not itself filter noise, median filtering [19] is applied to remove the Gaussian noise in the image. This leaves the spot edge relatively smooth, which benefits the Gaussian fitting accuracy.

The GFM [20] achieves high accuracy in locating Gaussian spot centers; its model is

$$f(x,y) = E \cdot \exp\left\{-\frac{1}{2\sigma^2}\left[(x - x_0)^2 + (y - y_0)^2\right]\right\},$$

where $E$ represents the total energy of the spot, $\sigma$ is the standard deviation of the Gaussian function, and $(x_0, y_0)$ denotes the central coordinates of the spot. The least-squares method is used to obtain the parameters that minimize the mean square error, yielding the exact center coordinates $(x_0, y_0)$ of the spot $S_k$.
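A minimal SciPy sketch of this step, combining the median filter with a least-squares fit of Eq. (5), might look as follows (the initialization choices are ours):

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.optimize import least_squares

def gaussian_fit_center(sub):
    """Median-filter the corrected spot [19], then fit Eq. (5) by least
    squares to recover the sub-pixel center (x0, y0)."""
    img = median_filter(sub.astype(float), size=3)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]

    def residuals(p):
        E, x0, y0, sigma = p
        model = E * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
        return (model - img).ravel()

    # Initialize from the brightest pixel and a rough width guess.
    y_init, x_init = np.unravel_index(np.argmax(img), img.shape)
    p0 = [img.max(), float(x_init), float(y_init), 5.0]
    fit = least_squares(residuals, p0)
    E, x0, y0, sigma = fit.x
    return x0, y0
```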

3. Simulation results and analysis

3.1 Training data generation and model building

In the experiment, we use a dataset $\{(D_k, S_k)\}_{k=1}^{K}$ of $K = 5000$ DS-GS pairs to train the DSCNet. The distorted spots $D_k$ and Gaussian spots $S_k$ are simulated on the MATLAB platform; different levels of noise are added to the simulated distorted spots before the pairs are fed into the network for training. We adopt a cross-validation training strategy with a 9:1 ratio of training set to test set, and train the DSCNet in PyCharm with PyTorch. The optimizer used in training is Adam, which helps the objective function escape local optima. The loss function is defined as

$$L_{oss}(\Psi) = \frac{1}{2K}\sum_{k=1}^{K} \left\| G(D_k, \Psi) - S_k \right\|^2.$$
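A training-loop sketch consistent with Eq. (6), the Adam optimizer and the piecewise learning-rate schedule described in the next paragraph is shown below; the data-loader interface and batching details are assumptions.

```python
import torch

def train_dscnet(model, loader, epochs=100, device="cuda"):
    """Training sketch: mean-squared loss per Eq. (6), Adam optimizer,
    and the piecewise learning-rate schedule given in the text."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(1, epochs + 1):
        if epoch > 60:
            lr = 1e-6        # epochs > 60
        elif epoch > 50:
            lr = 1e-4        # epochs 51-60
        else:
            lr = 1e-3        # initial learning rate
        for g in opt.param_groups:
            g["lr"] = lr
        for d_k, s_k in loader:            # (distorted spot, Gaussian target)
            d_k, s_k = d_k.to(device), s_k.to(device)
            out = model(d_k)
            # batch version of Eq. (6): half the mean squared L2 error
            loss = 0.5 * ((out - s_k) ** 2).flatten(1).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```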

The training runs for 100 epochs with an initial learning rate of $10^{-3}$. For epochs greater than 50 and up to 60, the learning rate is reduced to $10^{-4}$; beyond epoch 60 it drops to $10^{-6}$. Because the number of trainable parameters is relatively small compared with other models, training the proposed network takes only about 5 hours. Figure 6 shows the average error and signal-to-noise ratio (SNR) as functions of the training epoch: as training proceeds, the error decreases and the SNR increases, indicating that the corrected spot approaches the concentric Gaussian spot. Next, we analyze the performance of this model and use it to predict the spot centers in the experimental images.

Fig. 6. Variation curve of (a) the average error and (b) SNR during training.

3.2 Accuracy and stability analysis of the proposed method

To test the positioning accuracy of the proposed method, ten groups of DS-GS pair images are simulated in MATLAB. The central position of each distorted spot is obtained using the CM, CFM, GFM and the proposed method. The positioning error is defined as

$$E_a = \sqrt{(x - x_0)^2 + (y - y_0)^2},$$
where $(x, y)$ is the calculated center coordinate of the spot and $(x_0, y_0)$ denotes the true center coordinate. The results of the four center positioning methods are shown in Fig. 7; the center positions obtained by the proposed method are clearly more accurate than those of the other methods. Table 1 lists the center coordinates of the ten spots calculated by each method. As shown in Table 1, the CFM has the largest extraction error for distorted spot centers: by Eq. (7), its average error is 0.9255 pixels, caused by the non-circular contour of the distorted spot. The CM and GFM both use the information of all pixels, so their positioning accuracies are similar, but distortion keeps them low, with average errors of 0.7090 and 0.6883 pixels, respectively. Unlike the CM, CFM and GFM, the proposed method uses the DSCNet to correct the distorted spot before extracting the center, and its average positioning error is only 0.0494 pixels, i.e., sub-pixel accuracy. This shows that the DSCNet corrects the distorted spot into a Gaussian spot while faithfully retaining the center position information of the original spot image, and that the proposed method can improve the measurement accuracy of the MBAS, especially when the spots acquired by the MBAS are severely distorted.

Fig. 7. The results of spot center extraction by four methods.

Table 1. Precision comparison of four center positioning methods. Unit: pixel.

Notably, the spot-center extraction error of the Gaussian fitting method without distortion correction is more than 13 times that obtained with DSCNet correction, which shows that the distorted spot correction neural network plays an essential role in spot extraction.

To analyze the stability of the presented method, Gaussian white noise of different levels is added to the simulated distorted spot, and the spot center is extracted at each noise level using the CM, CFM, GFM and the proposed method. The error curves of the four methods as the noise level increases are shown in Fig. 8. The CFM is the most affected because noise disturbs the edge features of the spot, making the method unstable. The CM, GFM and the proposed method all remain stable when the noise is below 9 dB, but the proposed method performs best. The difference between the GFM and the proposed method is the ability to correct spot distortion, which makes the error of the GFM more than 4 times that of the proposed method for noise within 15 dB. When the noise is below 20 dB, the error of the proposed method stays within 0.33 pixels, showing good stability. The proposed method is therefore suitable for spot center extraction under strong environmental interference, indicating that the DSCNet has strong anti-interference ability and enables the MBAS to retain high measurement accuracy even in harsh environments.

Fig. 8. Comparison of positioning errors of four methods with the increase of noise level.

In short, the CM, CFM and GFM all produce large errors when extracting the centers of distorted spots and are strongly affected by noise, which seriously reduces the measurement accuracy of the MBAS. The proposed method has small errors and good stability. When the MBAS measures a workpiece with large curvature in a harsh environment, the spots are severely distorted and noisy, making it difficult to reconstruct the 3D shape of the workpiece; the proposed method suppresses the noise and extracts accurate centers of the distorted spots, improving the measurement accuracy of the MBAS.

4. Experimental results and analysis

4.1 Configuration of the experiment

Figure 9 shows the MBAS measurement system, built according to the framework of Fig. 1. The main components are the MBAS, an XY platform, a rotary platform and a tilt platform. The light source of the MBAS is a semiconductor laser (HL6501MG99) with a wavelength of 658 nm. From the light source to the workpiece, the laser passes through a focusing lens (Edmund #87-161), a pinhole, a collimating lens (Edmund #63-708), an aperture, a BS (SIGMAKOKI RPB-15-4M) and a cylindrical lens (Edmund #47-766). The light reflected from the workpiece passes through the BS to the CMOS camera (Edmund EO-5012M).

Fig. 9. Experimental setup of the MBAS.

4.2 Extraction of spot centers when measuring plano-convex lenses with different curvatures

The DSCNet-based spot center extraction model trained on simulation data is used to predict the spot center positions in the experimental multi-spot images. We use plano-convex lenses with curvature radii of 500 mm, 700 mm and 1000 mm as workpieces. The MBAS performs circular scanning and a series of multi-spot images is obtained. The resolution of the original multi-spot images is $2560 \times 1920$ pixels; after spot recognition and segmentation, each spot sub-image is $128 \times 128$ pixels. The spot sub-images are input into the model to obtain corrected Gaussian spots, and the exact spot centers are then obtained by the GFM. Figures 10, 11 and 12 show the spot center extraction results for the plano-convex lenses with curvature radii of 500 mm, 700 mm and 1000 mm, respectively. The spot distribution and brightness differ noticeably between workpieces with different curvature radii, so identifying and segmenting the spots in the multi-spot images is essential. The segmented sub-images also show that the degree of spot distortion depends on the curvature radius. Nevertheless, after DSCNet-based correction, these distorted spots are well corrected into Gaussian spots, whose centers are then accurately extracted by the GFM. The proposed method can thus correct spots with different degrees of distortion and extract precise center coordinates.

Fig. 10. The results of spot center extraction of a plano-convex lens with a curvature radius of 500 mm.

Fig. 11. The results of spot center extraction of a plano-convex lens with a curvature radius of 700 mm.

Fig. 12. The results of spot center extraction of a plano-convex lens with a curvature radius of 1000 mm.

4.3 Surface reconstruction of the plano-convex lens

We use the extracted spot centers to reconstruct the 3D shapes of the plano-convex lenses with different curvatures. Figures 13(a)-(c) show reconstructed 3D plots of the lens surfaces over 10 different measurement radii, for lenses with curvature radii of 500 mm, 700 mm and 1000 mm, respectively. The lens surface is accurately determined at each position on the circle; repeating this cyclic scan at different radii yields the overall surface shape and reconstructs the surface topography. The least-squares analyses of the experimental data are shown in Figs. 13(d)-(f): the measurement deviation is very small near the center of the circumferential scan, with larger deviations occurring only at the edge. The calculated curvature radii of the lenses are 500.1069 mm, 705.0331 mm and 1001.6000 mm, respectively, with small deviations.
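As an illustration of how a curvature radius can be recovered from the reconstructed points, below is a least-squares sphere-fit sketch; this is our reading of the analysis behind Figs. 13(d)-(f), not the authors' exact procedure.

```python
import numpy as np

def fit_sphere(x, y, z):
    """Least-squares sphere fit to reconstructed surface points, using the
    linear form x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d, where (a, b, c) is
    the sphere center and d = r^2 - a^2 - b^2 - c^2."""
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    f = x ** 2 + y ** 2 + z ** 2
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    radius = np.sqrt(a ** 2 + b ** 2 + c ** 2 + d)
    # deviation of each measured point from the fitted sphere
    dev = np.sqrt((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2) - radius
    return radius, dev
```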

Fig. 13. Plano-convex lens surface shapes (measured radii range from 0.3254 mm to 2.2651 mm) and deviations. (a)-(c) are measured surface data of the plano-convex lenses with the radii of curvature of 500 mm, 700 mm and 1000 mm. (d)-(f) are deviations based on measurement of the plano-convex lenses with the radii of curvature of 500 mm, 700 mm and 1000 mm, as calculated via the least-squares method.

We also study the effect of spot center extraction with and without the DSCNet on 3D shape reconstruction. Figure 14 shows the standard deviation of the 3D reconstruction with and without the DSCNet when measuring plano-convex lenses with different radii of curvature. The curves show that the smaller the radius of curvature, the more severe the curvature variation and the larger the reconstruction deviation. Spot center extraction with the DSCNet effectively reduces this deviation, and the more severe the curvature variation, the more pronounced the reduction. The standard deviations for the plano-convex lenses with curvature radii of 500 mm, 700 mm and 1000 mm reconstructed with the proposed distortion spot correction and center location method are 0.0112 µm, 0.0086 µm and 0.0074 µm, respectively.

Fig. 14. Comparison of standard deviation of 3D shape reconstruction with and without DSCNet when measuring plano-convex lenses with different radii of curvature.

5. Conclusion

We have experimentally demonstrated a new method for correcting distorted spots and extracting their centers, which improves the measurement accuracy of the MBAS. A spot discrimination method is developed to determine the spot locations, and the proposed DSCNet corrects the distorted spot directly into a Gaussian spot, whose center is then obtained by the Gaussian fitting method. Using the CM, CFM, GFM and the proposed method to extract the spot center yields average errors of 0.7090, 0.9255, 0.6883 and 0.0494 pixels, respectively; the errors of the CM, CFM and GFM are each more than 13 times that of the proposed method. When the noise is below 20 dB, the extraction error of the proposed method stays within 0.33 pixels, demonstrating strong anti-interference ability. The experimental results show that the standard deviations of the proposed method are only 0.0112 µm, 0.0086 µm and 0.0074 µm for MBAS measurements of plano-convex lenses with curvature radii of 500 mm, 700 mm and 1000 mm. In the future, we will extend the proposed method to measurements of other ultra-large curvature workpieces.

Funding

National Natural Science Foundation of China (62173098); Natural Science Foundation of Guangdong Province (2021A1515011817, 2022A1515010005, 2022A1515011636).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but can be obtained from the authors upon reasonable request.

References

1. C. Jiang, T. Bell, and S. Zhang, "High dynamic range real-time 3D shape measurement," Opt. Express 24(7), 7337–7346 (2016).

2. W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, "High-speed 3D shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system," Opt. Express 27(3), 2411–2431 (2019).

3. M. Schaffer, M. Grosse, B. Harendt, and R. Kowarschik, "High-speed three-dimensional shape measurements of objects with laser speckles and acousto-optical deflection," Opt. Lett. 36(16), 3097–3099 (2011).

4. M. Chen, S. Takahashi, and K. Takamasu, "Development of high-precision micro-roundness measuring machine using a high-sensitivity and compact multi-beam angle sensor," Precis. Eng. 42, 276–282 (2015).

5. M. Chen, S. Takahashi, and K. Takamasu, "Multi-beam angle sensor for flatness measurement of mirror using circumferential scan technology," Int. J. Precis. Eng. Manuf. 17(9), 1093–1099 (2016).

6. M. Chen, S. Takahashi, and K. Takamasu, "Calibration for the sensitivity of multi-beam angle sensor using cylindrical plano-convex lens," Precis. Eng. 46, 254–262 (2016).

7. M. Chen, S. Xie, H. Wu, S. Takahashi, and K. Takamasu, "Three-dimensional surface profile measurement of a cylindrical surface using a multi-beam angle sensor," Precis. Eng. 62, 62–70 (2020).

8. B. F. Alexander and K. C. Ng, "Elimination of systematic error in subpixel accuracy centroid estimation," Opt. Eng. 30(9), 1320–1332 (1991).

9. Y. Li, J. Zhou, F. Huang, and L. Liu, "Sub-pixel extraction of laser stripe center using an improved gray-gravity method," Sensors 17(4), 814 (2017).

10. Z. Dong, X. Sun, F. Xu, and W. Liu, "A low-rank and sparse decomposition-based method of improving the accuracy of sub-pixel grayscale centroid extraction for spot images," IEEE Sens. J. 20(11), 5845–5854 (2020).

11. J. Zhu, Z. Xu, D. Fu, and C. Hu, "Laser spot center detection and comparison test," Photonic Sens. 9(1), 49–52 (2019).

12. J. H. Xing, X. W. Li, and L. T. Zhao, "Research on roadbed settlement measurement with laser spot detection based on ellipse fitting," Trans Tech Publ., 113–116 (2014).

13. C. C. Liebe, "Accuracy performance of star trackers—a tutorial," IEEE Trans. Aerosp. Electron. Syst. 38(2), 587–599 (2002).

14. H. Dong and L. Wang, "Non-iterative spot center location algorithm based on Gaussian for fish-eye imaging laser warning system," Optik 123(23), 2148–2153 (2012).

15. K. Kalluri, N. Bhusal, D. Shumilov, A. Konik, and J. Dey, "Multi-pinhole cardiac SPECT performance with hemi-ellipsoid detectors for two geometries," (2015).

16. X. Xu, S. Xu, L. Jin, and E. Song, "Characteristic analysis of Otsu threshold and its applications," Pattern Recogn. Lett. 32(7), 956–961 (2011).

17. L. Sun, Z. Dong, R. Zhang, R. Fan, and D. Chen, "Waveform LiDAR signal denoising based on connected domains," Front. Optoelectron. 10(4), 388–394 (2017).

18. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556 (2014).

19. W. U. Jin, "Wavelet domain denoising method based on multistage median filtering," J. China Univ. Posts Telecommun. 20(2), 113–119 (2013).

20. S. M. Anthony and S. Granick, "Image analysis with rapid and accurate two-dimensional Gaussian fitting," Langmuir 25(14), 8152–8160 (2009).
