
High-precision autocollimation method based on a multiscale convolution neural network for angle measurement


Abstract

A high-precision autocollimation method based on a multiscale convolutional neural network (MSCNN) for angle measurement is proposed. The MSCNN is integrated with the traditional measurement model. Using the multiscale representation learning ability of the MSCNN, the relationship between the spot shape (large-scale feature), the gray distribution (small-scale feature), and the influence of aberration and assembly errors in the collimating optical path is extracted. The resulting accurate nonlinear measurement model directly reduces the uncertainty of angle measurement. Experiments demonstrate that the expanded uncertainty reaches 0.29 arcsec (k = 2), approximately 7 times better than with the traditional measurement principle, and that the method removes the nonlinear error caused by aberration and assembly errors in the autocollimation system. Additionally, the method has good universality and can be applied to other autocollimation systems.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Precision small-angle measurement is widely used in metrology for precision motion control [1,2], surface metrology [3–5], precision manufacturing [6], and scientific research [7–11]. The autocollimator is traditionally used to realize precision small-angle measurement [12–14] because of its advantages of non-contact operation, high precision, and a simple structure. However, in recent years, the need for high-precision small-angle measurement has increased [15]. For example, the angular motion errors of a wafer stage in a lithography machine need to be measured precisely, to the micro-/nanoradian order, to improve manufacturing accuracy [16].

Extensive research has been conducted worldwide to improve autocollimation system accuracy. The first strategy was to create new structures. Gao et al. introduced a grating reflector to generate diffraction beams for detecting the 3D angle [17]. The measurement of the third dimension avoids the nonlinearity caused by inverse trigonometric functions; within a range of 40 arcsec, the measurement error was 2.11 arcsec. Zhu et al. proposed a laser autocollimation technology based on common-path beam drift measurement and synchronous compensation [18,19], with an accuracy of up to 0.013 arcsec over a range of 30 arcsec. The second strategy involved building more accurate measurement models. Konyakhin et al. implemented a ray-tracing model to predict the illuminance distribution and calculate vignetting errors [20]; the error caused by vignetting was 3 arcsec at a working distance of 5 m. Huang et al. established a model of the autocollimation system based on vector operations to analyze the coupling relationship between the two-dimensional angles [21]; experiments indicated that the accuracy improved from 1.71 to 1.55 arcsec after compensation. On the one hand, innovative structures effectively enhance the accuracy, but this strategy significantly increases cost and complexity; on the other hand, existing autocollimation models can analyze the influence of only one or a few factors on measurement accuracy, so the problem of nonlinear error persists. Therefore, there is an urgent need for a high-precision autocollimation method that can solve the nonlinear errors without increasing the system complexity.

In this paper, a high-precision autocollimation method based on a multiscale convolutional neural network (MSCNN) for angle measurement is designed to solve the problem of nonlinear error. Because the MSCNN has strong nonlinear fitting [22–24] and multiscale representation learning [25–27] abilities, this method uses supervised learning to construct an accurate MSCNN-assisted autocollimation model. Ultra-precision measurement accuracy is thus achieved purely in software; the absence of additional structures reduces the installation and adjustment costs. Furthermore, the approach can be applied to other autocollimation systems, indicating its universality and thereby improving the accuracy of autocollimation techniques.

2. Measurement principle

2.1 Traditional measurement model of autocollimation system

The schematic of the autocollimation system for angle measurement is shown in Fig. 1. A light beam emitted from a laser diode (LD) passes through a diaphragm (D), a beam splitter (BS), and a collimating lens (L). The laser beam collimated by L is then projected onto a plane mirror (M) along the z-axis. If M stays still, the beam is reflected along its incoming path and finally focused on the center of the CMOS detector. When M is rotated by a pitch angle α, the deflection of the reflected beam is twice the rotation angle of M, and the light-spot on the CMOS is displaced from its original position. The displacement Δx of the light-spot can be calculated as follows:

$$\Delta x = f\tan (2\alpha )$$
where f is the focal length of L.

Fig. 1. Schematic of autocollimation system.

Therefore, the relationship between the displacements of light-spot and angles of M can be expressed as follows:

$$\begin{array}{l} \alpha = \frac{1}{2}\arctan (\frac{x}{f})\\ \beta = \frac{1}{2}\arctan (\frac{y}{f}) \end{array}$$
where f is the focal length of L; x and y denote the x- and y-directional displacements of the light-spot on the CMOS; α and β are the pitch and yaw rotation angles of M, respectively.
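For intuition (using the experimental parameters given later in Section 3.1, f = 500 mm and a 3.45 µm pixel pitch), a mirror rotation of α = 1 arcsec ≈ 4.85 µrad displaces the spot by

$$\Delta x = f\tan (2\alpha ) \approx 2f\alpha \approx 4.85\,\mathrm{\mu m},$$

i.e., only about 1.4 pixels, which is why sub-pixel spot localization is decisive for sub-arcsecond accuracy.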

2.2 Error analysis of traditional measurement model

According to the above model, the nonlinear error in autocollimation systems mainly comprises aberration and assembly errors.

  • (1) Aberration of the collimating lens:
When the plane mirror stays still, the spherical aberration affects the focusing position of the light-spot and, ultimately, the effective focal length f in the measurement model.

When the plane mirror rotates, the reflected light forms an angle with the z-axis, and the light-spot is affected by aberrations such as coma, astigmatism, and field curvature. These aberrations change the shape and energy distribution of the light-spot, finally affecting the displacements x and y obtained by the CMOS.

  • (2) Assembly error:
The defocus, translation, and inclination of the optical fiber connector, collimating lens, CMOS, and beam splitter affect the distance f and the displacements x and y in the actual optical path. Moreover, the assembly errors and spherical aberration combine to affect the displacements x and y of the autocollimation system.

These errors make it difficult to establish an accurate measurement model. In addition, they are nonlinear, which makes further improving the measurement accuracy challenging.

2.3 High-precision autocollimation method based on MSCNN for angle measurement

A high-precision autocollimation method based on MSCNN (MCAM) for angle measurement is proposed to eliminate the nonlinear aberration and assembly errors. The schematic of this method is shown in Fig. 2. First, the uncalibrated autocollimation system and a standard instrument (Möller-Wedel Elcomat 3000, Germany) simultaneously measure the angle of M in real time; the standard instrument provides a set of angle labels for each CMOS image output by the autocollimation system. Second, based on supervised learning, an MSCNN is trained for accurate spot localization using the collected sample images and labels. Finally, the MCAM is formed by combining the image preprocessing algorithms, the MSCNN, and the angle measurement model, as shown in Fig. 3, which effectively removes the nonlinear error and significantly relaxes the installation and adjustment requirements.

Fig. 2. Schematic of the MCAM in this paper.

Fig. 3. Flowchart of the MCAM.

The input image preprocessing algorithm improves the signal-to-noise ratio, filters out high-frequency photoelectric sensor noise, and reduces the training difficulty of the MSCNN. First, the spot in the image is roughly located by the centroid localization algorithm; the rough coordinates of the spot center, RPX and RPY, are computed as follows:

$$\begin{array}{l} RPX = \frac{{\sum {{x_{ij}}{I_{ij}}} }}{{\sum {{I_{ij}}} }}\\ RPY = \frac{{\sum {{y_{ij}}{I_{ij}}} }}{{\sum {{I_{ij}}} }} \end{array}$$
where Iij is the gray value of the pixel in row i and column j of the CMOS image; xij and yij represent the x- and y-directional coordinates of the pixel in row i and column j, respectively.
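As an illustration, Eq. (3) can be implemented in a few lines. The sketch below assumes the CMOS frame is available as a NumPy array; the function name and array handling are ours, not from the paper:

```python
import numpy as np

def rough_centroid(image: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted centroid of a grayscale frame, per Eq. (3).

    Returns (RPX, RPY), the rough x- and y-coordinates of the spot center.
    """
    img = image.astype(np.float64)
    total = img.sum()
    cols = np.arange(img.shape[1])                 # x coordinates (columns)
    rows = np.arange(img.shape[0])                 # y coordinates (rows)
    rpx = (img.sum(axis=0) * cols).sum() / total   # sum(x_ij * I_ij) / sum(I_ij)
    rpy = (img.sum(axis=1) * rows).sum() / total   # sum(y_ij * I_ij) / sum(I_ij)
    return rpx, rpy
```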

The light-spot diameter is approximately 32 pixels, while the image collected by the CMOS is 2048×1536 pixels. If the original image were used directly as the input of the MSCNN, the signal-to-noise ratio would be too low, significantly increasing the training difficulty. Therefore, the threshold segmentation algorithm crops a 64×64 pixel region from the original image, taking (RPX, RPY) as the image center. Second, a Butterworth filter removes the high-frequency photoelectric sensor noise, and the filtered 64×64 pixel image is input to the MSCNN.
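The paper specifies neither the segmentation threshold nor the Butterworth filter parameters, so the following sketch fills those gaps with assumed values (the cutoff and order are illustrative only):

```python
def crop_window(image: np.ndarray, rpx: float, rpy: float,
                size: int = 64) -> np.ndarray:
    """Cut a size x size region centered on the rough spot position
    (boundary handling near the image edges is omitted for brevity)."""
    cx, cy = int(round(rpx)), int(round(rpy))
    return image[cy - size // 2: cy + size // 2,
                 cx - size // 2: cx + size // 2]

def butterworth_lowpass(window: np.ndarray, cutoff: float = 0.15,
                        order: int = 2) -> np.ndarray:
    """Frequency-domain 2D Butterworth low-pass filter; suppresses
    high-frequency sensor noise while preserving the spot edge."""
    h, w = window.shape
    u = np.fft.fftfreq(h)[:, None]                   # vertical frequencies
    v = np.fft.fftfreq(w)[None, :]                   # horizontal frequencies
    d = np.hypot(u, v)                               # radial frequency
    H = 1.0 / (1.0 + (d / cutoff) ** (2 * order))    # Butterworth response
    return np.real(np.fft.ifft2(np.fft.fft2(window) * H))
```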

The MSCNN consists of 3 scales and 9 layers, as shown in Fig. 4. It locates the spot accurately and resolves the nonlinear errors of the autocollimation system.

Fig. 4. Schematic of the MSCNN structure.

The network input is a 64×64×1 volume (width × height × channels; 1 channel corresponds to an 8-bit grayscale image). The three scales contain 3 (C11, C12, C13), 2 (C21, C22), and 1 (C31) convolutional layers, with kernel sizes of 32×32, 16×16, and 4×4 and 8, 16, and 24 kernels, respectively; each convolutional layer is followed by a ReLU activation function. With the help of the multiscale convolution kernels, the MSCNN identifies the spot gray distribution (small-scale feature) and the spot shape (large-scale feature), extracts their relationship with the nonlinear aberration and assembly errors, and builds an accurate measurement model that directly reduces the measurement uncertainty. In this measurement situation, the effective information in the image occupies approximately 32×32 pixels, and the grayscale information at the spot edge is important for the positioning accuracy. Therefore, convolution kernels with short strides should be selected and pooling layers avoided, to retain more grayscale information at the spot edge.

The fully connected part of the network also has three layers (F41, F42, F43) with 1920, 120, and 64 neurons, respectively. The flattened outputs of the last convolutional layer of each scale (C13, C22, C31) together constitute the 1920 inputs of F41. The final outputs are the coordinates X and Y of the spot relative to the center of the input image.
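The paper reports the layer counts, kernel sizes, kernel numbers, and neuron counts, but not the strides or padding, and names only Python as the implementation language. The PyTorch sketch below is therefore a reconstruction under assumed strides and padding (the framework itself is also an assumption); nn.LazyLinear infers the size of the concatenated features (1920 in the paper's configuration) at the first forward pass:

```python
import torch
import torch.nn as nn

class MSCNN(nn.Module):
    """Three-scale CNN after Fig. 4; strides/padding are assumptions."""

    def __init__(self):
        super().__init__()
        # Scale 1: three layers of 8 kernels, 32x32 each (C11-C13)
        self.scale1 = nn.Sequential(
            nn.Conv2d(1, 8, 32, stride=2, padding=15), nn.ReLU(),
            nn.Conv2d(8, 8, 32, stride=2, padding=15), nn.ReLU(),
            nn.Conv2d(8, 8, 32, stride=2, padding=15), nn.ReLU(),
        )
        # Scale 2: two layers of 16 kernels, 16x16 each (C21-C22)
        self.scale2 = nn.Sequential(
            nn.Conv2d(1, 16, 16, stride=4, padding=7), nn.ReLU(),
            nn.Conv2d(16, 16, 16, stride=2, padding=7), nn.ReLU(),
        )
        # Scale 3: one layer of 24 kernels, 4x4 (C31)
        self.scale3 = nn.Sequential(
            nn.Conv2d(1, 24, 4, stride=8, padding=1), nn.ReLU(),
        )
        # The concatenated flattened branch outputs form F41; F42 and F43
        # follow, then a 2-neuron output for the sub-pixel coordinates.
        self.head = nn.Sequential(
            nn.LazyLinear(120), nn.ReLU(),   # F41 -> F42
            nn.Linear(120, 64), nn.ReLU(),   # F42 -> F43
            nn.Linear(64, 2),                # F43 -> (X, Y)
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:  # img: (N, 1, 64, 64)
        feats = [branch(img).flatten(1)
                 for branch in (self.scale1, self.scale2, self.scale3)]
        return self.head(torch.cat(feats, dim=1))
```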

The labels X and Y required for training are computed from the outputs of the standard instrument. The Adam algorithm is selected as the network optimizer because it adaptively adjusts the learning rate and improves the training efficiency.
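A minimal training loop under these choices might look as follows. Only Adam and the epoch count (reported later in Section 3.2) come from the paper; the learning rate, the MSE loss, and a train_loader yielding (image, label) batches are our assumptions:

```python
model = MSCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive learning rate
loss_fn = nn.MSELoss()  # regression loss on the (X, Y) labels

for epoch in range(600):                  # 600 epochs, per Section 3.2
    for images, labels in train_loader:   # labels derived from the Elcomat 3000
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```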

The angle measurement model converts the outputs X and Y of the MSCNN into the angles α and β output by the autocollimation system. First, the x- and y-directional displacements of the light-spot on the CMOS are obtained as follows:

$$\begin{array}{l} \Delta X = X + RPX - 1024\\ \Delta Y = Y + RPY - 768 \end{array}$$
where ΔX and ΔY are the x- and y-directional displacements of the light-spot on the CMOS, respectively; X and Y represent the outputs of the MSCNN; RPX and RPY are the rough coordinates of the spot center; and 1024 and 768 are the x- and y-coordinates of the center of the 2048×1536 CMOS image.

Finally, the angles α and β are calculated using Eq. (2) from Section 2.1.
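Putting Eq. (4) and Eq. (2) together, the conversion from network outputs to angles is straightforward. The sketch below uses the Section 3.1 parameters (3.45 µm pixels, f = 500 mm) and converts the displacements to metric units before applying Eq. (2); the function name and unit choices are ours:

```python
import numpy as np

PIXEL = 3.45e-6   # CMOS pixel size [m], Section 3.1
F = 0.5           # collimating-lens focal length [m], Section 3.1

def outputs_to_angles(X, Y, RPX, RPY):
    """Apply Eq. (4) then Eq. (2); returns (pitch, yaw) in arcsec.
    (1024, 768) is the center of the 2048x1536 CMOS image."""
    dx = (X + RPX - 1024) * PIXEL    # Eq. (4), converted to meters
    dy = (Y + RPY - 768) * PIXEL
    alpha = 0.5 * np.arctan(dx / F)  # pitch [rad], Eq. (2)
    beta = 0.5 * np.arctan(dy / F)   # yaw [rad], Eq. (2)
    to_arcsec = 180 / np.pi * 3600
    return alpha * to_arcsec, beta * to_arcsec
```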

In the MCAM, the input image preprocessing (the first part) and the angle measurement model (the third part) are white-box components, while the MSCNN (the second part) focuses on solving the nonlinear system errors. For an uncalibrated autocollimation system, the above method can be used to build an accurate measurement model: the digital image signal of the angle measurement system is directly converted into the standard instrument's outputs through the MCAM, enhancing the measurement accuracy and achieving ultra-precision measurement with a low-cost device.

3. Experiments and results

3.1 Experimental setup

To investigate the feasibility and performance of the MCAM, an experimental setup, shown in Fig. 5, was designed and constructed in a clean room. The LD was a fiber-coupled laser diode with a central wavelength of 532 nm and an output power of 10 mW. The CMOS detector was a BFS-U3-32S4M (FLIR, USA) with a sensitive area of 7.07 mm × 5.30 mm (2048×1536 pixels) and a pixel size of 3.45 µm. The focal length and diameter of the collimating lens (L) were 500 mm and 50 mm, respectively. The plane mirror (M) was carried by a 2D angle generator (2D-AG), a New Focus 8824-AC (Newport, USA), which produced the 2D angles during the experiments.

Fig. 5. Photograph of the autocollimation system based on the proposed method.

3.2 Accuracy test

According to Section 2.3, the proposed method can improve the measurement accuracy by solving the nonlinear errors. To verify this claim, the accuracy of the autocollimation system was tested with and without the MCAM.

As shown in Fig. 6, a standard instrument and the autocollimation system were simultaneously employed to measure the 2D angles of the plane mirror. The accuracy test used the experimental data of the standard instrument as the true value and compared the measurement error before and after applying the proposed method. The results without the MCAM were given by the centroid localization algorithm and Eq. (2). The standard instrument was a calibrated 2-DOF autocollimator (Möller-Wedel Elcomat 3000, Germany) with a resolution of 0.05 arcsec and an accuracy of ±0.1 arcsec.

Fig. 6. Setup for the accuracy test.

In this experiment, the controller first makes the 2D-AG drive the plane mirror through a range of angles. Within an angle range of ±2.5 arcsec, the autocollimation system and the standard instrument collect 11000 samples together, of which 10000 are used as the network training set and 1000 evenly distributed samples are used as the network test set. The MSCNN is written in Python on the PyCharm platform, and GPU-accelerated training is carried out on an NVIDIA GeForce GTX 1050 Ti. As shown in Fig. 7, after 600 epochs of training, the losses on the training and test sets are 0.01 and 0.04, respectively. The network converges rapidly within 200 epochs and achieves good results.

Fig. 7. Losses of training and test sets after 600 epochs.

The accuracy results without the MCAM are shown in black in Figs. 8(a) and 8(b). The expanded uncertainties of the pitch and yaw angles are 1.94 arcsec and 1.75 arcsec (k = 2), respectively. With the MCAM, the expanded uncertainties of the pitch and yaw angles improve to 0.29 arcsec and 0.18 arcsec (k = 2), respectively, as shown in red in Figs. 8(a) and 8(b). Thus, the measurement uncertainty is reduced from 1.94 arcsec to 0.29 arcsec when using the MCAM, an approximately sevenfold improvement.
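The paper does not state how the expanded uncertainty is evaluated. Under the common convention of taking k times the standard deviation of the residuals against the standard instrument, it could be computed as in the sketch below (our assumption, not the authors' stated procedure):

```python
def expanded_uncertainty(measured, reference, k=2):
    """k times the sample standard deviation of the residual errors [arcsec]."""
    errors = np.asarray(measured) - np.asarray(reference)
    return k * errors.std(ddof=1)
```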

Fig. 8. Results of the accuracy test.

3.3 Stability test

To verify that the proposed method has no adverse effect on the system stability, stability tests were conducted with and without the MCAM.

During the stability test, the 2D-AG did not produce any angle, and the autocollimation system measured the plane mirror for 15 min. The stability results without the MCAM are shown in Fig. 9: the stabilities of the pitch and yaw angles are 0.20 arcsec and 0.10 arcsec, respectively. The stability results with the MCAM are shown in Fig. 10: the stabilities of the pitch and yaw angles are 0.25 arcsec and 0.14 arcsec, respectively. The results verify that the proposed method maintains the system stability.

Fig. 9. Results of the stability test without using MCAM.

Fig. 10. Results of the stability test using MCAM.

3.4 Resolution test

Resolution tests were carried out with and without the MCAM to verify that the method has no effect on the system resolution. As shown in Fig. 6, the autocollimation system and the Elcomat 3000 were simultaneously employed to measure the 2D small angles of the plane mirror.

During the resolution test, the 2D-AG stepped the angle by 0.05 arcsec every 3 s, and the pitch and yaw angles were generated independently. The resolution results without the MCAM are shown in Fig. 11: the pitch and yaw angle resolutions are 0.05 arcsec. The resolution results with the MCAM are shown in Fig. 12: the pitch and yaw angle resolutions again reach 0.05 arcsec. The results show that the MCAM maintains the system resolution.

Fig. 11. Results of the resolution test without using MCAM.

Fig. 12. Results of the resolution test using MCAM.

3.5 Other autocollimation systems using MCAM

A new autocollimation system, with a light source in a different wavelength band and a collimating lens of different focal length from those in Section 3.1, was built to test the universality of the proposed method.

The new experimental setup is shown in Fig. 13. The light source, an M265F1 (Thorlabs, USA), is a fiber-coupled LED with a central wavelength of 625 nm and an output power of 5.7 mW. The focal length and diameter of the collimating lens (L) are 125 mm and 25.4 mm, respectively. The expanded uncertainty of this new autocollimation system, shown in Fig. 14, is reduced from 9.24 arcsec to 0.78 arcsec. The results indicate that the proposed method has good universality and can be used in other autocollimation systems.

Fig. 13. Setup of the new autocollimation system.

Fig. 14. Results of the accuracy test on the new autocollimation system.

4. Conclusion

Aiming at the problem of nonlinear errors in autocollimation systems, a high-precision autocollimation method based on an MSCNN for angle measurement is proposed in this paper. The method utilizes the MSCNN to construct an accurate nonlinear measurement model, so that an uncalibrated autocollimation system can directly output the measurement values of a standard instrument. Consequently, this strategy directly improves the measurement accuracy while reducing the installation cost. An autocollimation system based on the method was constructed, and experiments illustrate that the MCAM achieves an expanded angular uncertainty of 0.29 arcsec (k = 2) without affecting the system resolution and stability. Additionally, the technique can be applied to other autocollimation systems, which demonstrates its good universality.

The MCAM provides researchers with a software method to achieve ultra-precision measurement accuracy with low-cost sensors. In future studies, we plan to establish an accurate analytical expression of the autocollimation system in order to reduce the training cost of this method.

Funding

National Natural Science Foundation of China (51405107, 51775149).

Acknowledgments

J. S. thanks the National Natural Science Foundation for supporting this work.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Shimizu, S. L. Tan, D. Murata, T. Maruyama, S. Ito, Y. Chen, and W. Gao, “Ultra-sensitive angle sensor based on laser autocollimation for measurement of stage tilt motions,” Opt. Express 24(3), 2788–2805 (2016).

2. J. Xue, Z. Qiu, L. Fang, Y. Lu, and W. Hu, “Angular Measurement of High Precision Reducer for Industrial Robot,” IEEE Trans. Instrum. Meas. 70, 1–10 (2021).

3. J. Yellowhair and J. H. Burge, “Analysis of a scanning pentaprism system for measurements of large flat mirrors,” Appl. Opt. 46(35), 8466–8474 (2007).

4. F. Siewert, J. Buchheim, S. Boutet, G. J. Williams, P. A. Montanez, J. Krzywinski, and R. Signorato, “Ultra-precise characterization of LCLS hard X-ray focusing mirrors by high resolution slope measuring deflectometry,” Opt. Express 20(4), 4525–4536 (2012).

5. L. Huang, J. Nicolas, and M. Idir, “Repeatability analysis of one-dimensional angular-measurement-based stitching interferometry,” Opt. Express 26(16), 20192–20202 (2018).

6. P. Huang, Y. Li, H. Wei, L. Ren, and S. Zhao, “Five-degrees-of-freedom measurement system based on a monolithic prism and phase-sensitive detection technique,” Appl. Opt. 52(26), 6607–6615 (2013).

7. S. Zhou, V. Le, S. Xiong, Y. Yang, K. Ni, Q. Zhou, and G. Wu, “Dual-comb spectroscopy resolved three-degree-of-freedom sensing,” Photonics Res. 9(2), 243–251 (2021).

8. G. J. Bergues, C. Schurrer, and N. Brambilla, “Uncertainty Determination of the Set Nikon 6B Autocollimator plus Visual Interface,” IEEE Trans. Instrum. Meas. 67(5), 1058–1064 (2018).

9. Y. Chen, Y. Shimizu, Y. Kudo, S. Ito, and W. Gao, “Mode-locked laser autocollimator with an expanded measurement range,” Opt. Express 24(14), 15554–15569 (2016).

10. Y. Chen, Y. Shimizu, J. Tamada, Y. Kudo, S. Madokoro, K. Nakamura, and W. Gao, “Optical frequency domain angle measurement in a femtosecond laser autocollimator,” Opt. Express 25(14), 16725–16738 (2017).

11. X. Tan, F. Zhu, C. Wang, Y. Yu, J. Shi, X. Qi, F. Yuan, and J. Tan, “Two-Dimensional Micro-/Nanoradian Angle Generator with High Resolution and Repeatability Based on Piezo-Driven Double-Axis Flexure Hinge and Three Capacitive Sensors,” Sensors 17(11), 2672 (2017).

12. R. Li, M. Zhou, G. Konyakhin, K. Di, Y. Lu, Q. Guo, and Y. Liu, “Cube-corner autocollimator with expanded measurement range,” Opt. Express 27(5), 6389–6403 (2019).

13. J. Wang, C. Liu, S. Qin, G. Zhu, Y. Shao, S. Fu, and D. Liu, “Double-grating with multiple diffractions enabled small angle measurement,” Opt. Express 27(4), 5289–5296 (2019).

14. I. L. Lovchy, “Modeling a broad-band single-coordinate autocollimator with an extended mark and a detector in the form of a linear-array camera,” J. Opt. Technol. 88(11), 654–660 (2021).

15. C. Peng, H. Gong, Z. Gao, G. Wang, X. Liang, Y. He, X. Dong, and J. Wang, “New type of autocollimator based on normal tracing method and Risley prisms,” Appl. Opt. 60(32), 10114–10119 (2021).

16. P. Hu, D. Chang, J. Tan, R. Yang, H. Yang, and H. Fu, “Displacement measuring grating interferometer: a review,” Front. Inform. Technol. Electron. Eng. 20(5), 631–654 (2019).

17. W. Gao, Y. Saito, H. Muto, Y. Arai, and Y. Shimizu, “A three-axis autocollimator for detection of angular error motions of a precision stage,” CIRP Ann. 60(1), 515–518 (2011).

18. F. Zhu, J. Tan, and J. Cui, “Common-path design criteria for laser datum based measurement of small angle deviations and laser autocollimation method in compliance with the criteria with high accuracy and stability,” Opt. Express 21(9), 11391–11403 (2013).

19. F. Zhu, J. Tan, and J. Cui, “Beam splitting target reflector based compensation for angular drift of laser beam in laser autocollimation of measuring small angle deviations,” Rev. Sci. Instrum. 84(6), 065116 (2013).

20. I. A. Konyakhin and A. Smekhov, “Survey of illuminance distribution of vignetted image at autocollimation systems by computer simulation,” Proc. SPIE 8759, 87593F (2013).

21. Y. Huang, Y. Lin, W. Wang, and M. Zhao, “Research on the model of photoelectric auto-collimating system based on vector operation,” Laser Infrared 39, 1086–1090 (2009).

22. L. Tan, Y. Cao, J. Ma, and K. Li, “Optical image centroid prediction based on machine learning for laser satellite communication,” Opt. Express 27(19), 26615–26638 (2019).

23. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).

24. L. V. Nguyen, C. C. Nguyen, G. Carneiro, H. Ebendorff-Heidepriem, and S. C. Warren-Smith, “Sensing in the presence of strong noise by deep learning of dynamic multimode fiber interference,” Photonics Res. 9(4), B109–B118 (2021).

25. G. Jiang, H. He, J. Yan, and P. Xie, “Multiscale Convolutional Neural Networks for Fault Diagnosis of Wind Turbine Gearbox,” IEEE Trans. Ind. Electron. 66(4), 3196–3207 (2019).

26. J. Zhu, N. Chen, and W. Peng, “Estimation of Bearing Remaining Useful Life Based on Multiscale Convolutional Neural Network,” IEEE Trans. Ind. Electron. 66(4), 3208–3216 (2019).

27. G. Li and Y. Yu, “Visual Saliency Detection Based on Multiscale Deep CNN Features,” IEEE Trans. Image Process. 25(11), 5012–5024 (2016).
