Abstract
In this work, a detailed analysis of the imaging of objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system is performed by means of geometrical optics theory. It is shown that the fulfillment of the so-called Scheimpflug condition (Scheimpflug rule) does not guarantee a sharp image of the object, as is usually claimed, because the aberrations of real optical systems depend on the object distance and the image therefore becomes blurred. The f-number of a given optical system also varies with the object distance. The influence of these effects on the accuracy of laser triangulation sensor measurements is shown. A detailed analysis of laser triangulation sensors, based on geometrical optics theory, is performed, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.
©2013 Optical Society of America
1. Introduction
In photographic practice one is often faced with the problem of imaging objects that lie in a plane tilted by some angle with respect to the optical axis of a rotationally symmetrical optical system, i.e. in a plane that is not perpendicular to the optical axis. A typical situation of an object lying in such a tilted plane is, for example, taking pictures of tall buildings. In the field of metrology this situation occurs, for example, in the case of laser triangulation sensors. This problem was first investigated by Jules Carpentier [1]; however, he did not describe the problem mathematically. A detailed analysis was performed later by Theodor Scheimpflug (1865-1911) [2], who proved mathematically that if the image of an object lying in a plane tilted with respect to the optical axis of a photographic lens is to be sharp, the plane of the film (photographic plate, detector) has to be tilted with respect to the optical axis in such a way that it intersects the image principal plane of the lens at the same height as the object plane intersects the object principal plane, and that it passes through the image of the axial point of the object. This condition is called the Scheimpflug condition (Scheimpflug rule) [2–4]. Several companies manufacture professional photographic cameras that make it possible to use the Scheimpflug condition, and various tilt-and-shift lenses are available for classical cameras [5,6].
Another field where one meets the problem of imaging objects that lie in planes tilted with respect to the optical axis of a rotationally symmetrical optical system is that of laser triangulation sensors for distance or surface topography measurements [7–18]. Because Theodor Scheimpflug treated the photographic lens as an ideal optical system in his analysis, the results he obtained are not completely accurate for a real optical system. A more detailed analysis of this type of imagery for a real optical system shows that for different object points the optical system has different f-numbers and different aberrations. Due to these two effects the image generated by a real optical system will not be sharp. To reduce the negative influence of these effects as much as possible, one can reduce the aperture of the optical system, for example by setting the f-number higher than 11. To our knowledge, this analysis of Scheimpflug imaging with respect to aberrations has not yet been described in the literature. In the following text we focus on a detailed analysis of laser triangulation sensors based on geometrical optics theory, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.
2. Imaging of objects lying in the plane tilted with respect to optical axis of rotationally symmetrical optical system
Let us now focus on the derivation of an equation for sharp imaging of points lying on a straight line tilted with respect to the optical axis of an ideal rotationally symmetrical optical system, i.e. an optical system that images a point as a point, a line as a line, and a plane as a plane.
Assume imaging of two different points by the optical system shown in Fig. 1. The first point A lies on the optical axis of the optical system at the axial distance qA from the object focal point F; the second point C lies at the perpendicular distance y from the optical axis and at the axial distance qB from the object focal point F. F′ is the image focal point of the optical system. The image of the point A is the point A′, and the image of the point C is the point C′, which lies at the perpendicular distance y′ from the optical axis. Points P and P′ are the principal points of the optical system. For imaging by such an optical system in air it holds [19,20]
m = y′/y = f′/q = −q′/f′,   (1)

q q′ = −f′²,   (2)

where m is the transverse magnification, f′ is the focal length of the given optical system, and q and q′ are the distances of the object and image points from the object and image focal points, respectively. The meaning of other symbols is evident from Fig. 1. The straight line going in the object space through points A and C makes an angle α with the optical axis of the optical system (α is negative), for which we can write

tan α = y/(qB − qA).   (3)

Analogously, the straight line going in the image space through points A′ and C′ makes an angle β with the optical axis of the optical system (β is positive). It holds

tan β = y′/(q′B − q′A) = (qA/f′) tan α = tan α/m,   (4)

where we used Eq. (2) and m = f′/qA is the transverse magnification for the axial point A. If we denote h and h′ the heights at which these two lines intersect the object and image principal planes, then according to Fig. 1 it holds

h = (f′ − qA) tan α,   h′ = −(f′ + q′A) tan β.   (5)

Using Eqs. (4) and (5) one obtains

h′ = −(f′ − f′²/qA)(qA/f′) tan α = (f′ − qA) tan α,   (6)

and therefore

h′ = h.   (7)

Thus we can express the following lemma (Scheimpflug condition): “If the optical system images points lying on a straight line that is tilted with respect to the optical axis of this optical system and goes through the axial object point, then the images of these points will lie on the straight line going through the image of the axial object point, and both straight lines will intersect the principal planes at the same heights.”

Let us now apply the previous equations to the problem of laser triangulation sensors [7–18]. Assume we have the detector (e.g. a CCD matrix sensor) in the image plane, which is tilted by the angle β with respect to the optical axis of the optical system of the sensor and which enables us to measure the distance d (the sign conventions are evident from Fig. 1). Now, if we want to determine the distance t in the object space which corresponds to the value d measured by the detector, we proceed in the following way. Denoting qB = qA + t cos α the axial coordinate of the measured point and measuring d along the detector from the point A′, we obtain from Eq. (4)

d = f′² t cos α/[qA (qA + t cos α) cos β],   (8)
and thus

t = d qA² cos β/[cos α (f′² − d qA cos β)].   (9)

This equation is the fundamental equation of the laser triangulation method. The quantities f′, qA, α, and β are known (they are given by the construction parameters of the measuring device). By measuring the quantity d, i.e. the distance between points C′ and A′, and using the previous equation, one can calculate the distance t. Using Eq. (9) we obtain for the distance d the following formula

d = (f′²/cos β)(1/qA − 1/qB).   (10)

By differentiation of Eq. (10) we have

Δd = f′² ΔqB/(qB² cos β).   (11)

Equation (11) enables us to calculate the change Δd of the position of the spot on the detector corresponding to the change of the quantity qB by a small value ΔqB. The quantity Δt = ΔqB/cos α is the measurement error of the triangulation sensor. If we can find the position of the spot on the detector with the accuracy Δd, then the measurement error is, according to Eq. (11), given by

Δt = qB² cos β Δd/(f′² cos α).   (12)

The calculation of the parameters of the laser triangulation sensor can be performed in the following way. We choose the values α, β, d, and t. For the focal length of the objective lens we then obtain, using Eqs. (4) and (9), the formula

f′ = t d (tan β/tan α) cos α cos β/[t cos α − d (tan β/tan α)² cos β].   (13)

Let us now assume imaging by a real optical system. Figure 2 shows the situation, where ζ and ζ′ denote the entrance and exit pupils of the optical system. The diameter of the entrance pupil is D and the diameter of the exit pupil is D′. The meaning of other symbols is clear from Fig. 2. Assume now that the optical system is diffraction limited (aberrations are zero). For the angle between the edge meridional rays that emerge from the point C and go through the edge of the entrance pupil, and for the angle between the two edge meridional rays that go through the edge of the exit pupil and converge into the point C′, one can derive the corresponding relations.
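The ideal-system relations of this section can be checked numerically. The short sketch below images points of a tilted object line with the Newtonian equations, verifies that the image points lie on a line through A′ at the angle β with tan β = tan α/m, and inverts the detector coordinate d back to the object-line distance t. All parameter values are illustrative assumptions, not data from the paper.

```python
import math

f, qA = 100.0, -50.0                     # focal length and axial object point (mm), assumed
alpha = math.radians(-30.0)              # object-line tilt (negative, as in the text)
m = f / qA                               # transverse magnification of the axial point A
beta = math.atan(math.tan(alpha) / m)    # Scheimpflug: tan(beta) = tan(alpha)/m
qpA = -f * f / qA                        # Newton's equation q*q' = -f'^2

def image(q, y):
    """Newtonian image of the point (q, y); q measured from the object focal point F."""
    return -f * f / q, (f / q) * y

def spot(t):
    """Detector coordinate d of the image of the object point at line distance t."""
    qB = qA + t * math.cos(alpha)
    qpB, yp = image(qB, t * math.sin(alpha))
    # the image point must lie on the line through A' at the angle beta
    assert abs(yp / (qpB - qpA) - math.tan(beta)) < 1e-9
    return (qpB - qpA) / math.cos(beta)

def distance(d):
    """Invert the triangulation: object-line distance t from the spot position d."""
    return d * qA**2 * math.cos(beta) / (math.cos(alpha) * (f * f - d * qA * math.cos(beta)))

for t in (-2.0, -5.0, -12.0):
    assert abs(distance(spot(t)) - t) < 1e-9
print("round trip OK")
```

The round trip confirms that the forward imaging and the closed-form inversion are mutually consistent for this sign convention.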
For the sagittal rays one can write analogous relations. Denoting ω0 the angle between the two meridional rays emerging from the axial point A and going through the edge of the entrance pupil, and ω0′ the angle between the two meridional rays converging to the axial point A′ and going through the edge of the exit pupil, one can again derive the corresponding relations.
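For orientation, the image-side f-number and the corresponding diffraction-limited spot size can be estimated with a thin-lens sketch. The pupil is assumed at the lens with pupil magnification 1, so the working f-number is F = (1 − m) f′/D; the 2.44λF Airy-disc diameter is the standard textbook form. All numerical values are assumptions.

```python
import math

f, D = 100.0, 12.5        # focal length and entrance-pupil diameter (mm): an f/8 lens (assumed)
lam = 0.6328e-3           # He-Ne laser wavelength (mm)

def working_f_number(m):
    """Image-side f-number at magnification m (thin lens in air, pupil magnification 1)."""
    return (1.0 - m) * f / D

def airy_diameter(F):
    """Diameter of the Airy disc of a diffraction-limited system with circular pupil."""
    return 2.44 * lam * F

for m in (0.0, -0.5, -2.0):
    F = working_f_number(m)
    print(m, F, airy_diameter(F))
```

Note that the effective f-number, and hence the diffraction spot size, grows as the object moves from infinity (m = 0) to finite conjugates, which is one of the object-distance effects discussed in the text.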
Denoting u the aperture angle of the ray bundle in the object space and u′ the aperture angle of the ray bundle in the image space, i.e. the angle between the ray that goes through the edge of the pupil and the central ray of the bundle (the ray that halves the angle ω0 in the object space and ω0′ in the image space), one can write the corresponding relations. The f-number in the image space for the optical system in air is given by F = 1/(2 tan u′). As is known from the theory of optical imaging, the image of a point is not a point but some energy distribution called the point spread function (PSF). In the case of a diffraction-limited optical system with a circular pupil, the diameter of the central part of the PSF (the Airy disc) for imaging of the axial object point is given by [20] dA = 2.44 λ F, where λ is the wavelength of light.

3. Change in aberrations with object position
Consider now the influence of the change in the object position on the imaging properties of a general rotationally symmetrical optical system. The problem will be analyzed using the theory of third-order aberrations [19,21–28], which makes it possible to obtain the solution in a simple analytical form. The aberration properties for light of a specific wavelength are given by the third-order aberration coefficients SI, SII, SIII, SIV, SV, and SVI, where SI is the coefficient of spherical aberration, SII is the coefficient of coma, SIII is the coefficient of astigmatism, SIV is the Petzval sum, SV is the coefficient of distortion, and SVI is the coefficient of spherical aberration in pupils. Within the validity of the third-order aberration theory we obtain the following equations [21,22] for the transverse ray aberrations
where δx, δy are the transverse ray aberrations in the object plane and δx′, δy′ are the transverse ray aberrations in the image plane. The object can be represented in a general case as the image created by a preceding optical system; the values δx and δy then describe the transverse ray aberrations of the preceding optical system in the object plane of the optical system under consideration. We can set δx = 0 and δy = 0 if no optical system is located in front of the considered optical system. The coefficients in the previous formulas depend on the quantities m, g, gP, p1, xP1, yP1, y, n, and n′, where m is the transverse magnification of the optical system, g is the angular magnification of the optical system, gP is the angular magnification of the optical system in pupils, p1 is the distance from the object to the entrance pupil, xP1, yP1 are the coordinates of the intersection of the ray with the plane of the entrance pupil, y is the size of the object, and n, n′ are the indices of refraction of the object and image space. The case when the object and image surfaces are not planar is described e.g. in [26,27]. The coefficients SI, SII, SIII, SIV, SV, which describe the imaging properties of the optical system for an arbitrary position of the object (arbitrary transverse magnification m), can be expressed using the third-order aberration coefficients S0I, S0II, S0III, S0IV, S0V, which characterize the imaging properties of the optical system for imaging of an object at infinity (transverse magnification m = 0), together with the coefficient of spherical aberration in pupils S0VI. The formulas for the aberration coefficients SI, SII, SIII, SIV, SV [19,21–28] can be rewritten, after a tedious derivation, into a matrix form [21] in terms of a matrix B. From these relations it is evident that if the aberration coefficients of the optical system are known for one value of magnification, then we can calculate the aberration coefficients of the optical system for any other value of magnification. One can see from Eqs. (22)–(24) the advantage of the matrix form of the formulas for aberration coefficients.
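The central claim of this section, that aberrations change with the object position, can also be demonstrated without the coefficient formalism by an exact meridional ray trace through a simple biconvex singlet. The lens data and object distances below are illustrative assumptions; the code computes the longitudinal spherical aberration (paraxial minus marginal axis crossing) for two object positions and shows that it differs.

```python
import math

def refract(p, d, vz, R, n1, n2):
    """Exact meridional refraction at a spherical surface with vertex at z = vz
    and center at z = vz + R; p = (z, y) is a point on the ray, d a unit direction."""
    cz = vz + R
    oz, oy = p[0] - cz, p[1]
    b = oz * d[0] + oy * d[1]
    # nearer/farther sphere intersection depending on the sign of R
    s = -b - math.copysign(math.sqrt(b * b - (oz * oz + oy * oy - R * R)), R)
    z, y = p[0] + s * d[0], p[1] + s * d[1]
    nz, ny = (z - cz) / R, y / R                 # surface normal
    cos1 = -(d[0] * nz + d[1] * ny)
    if cos1 < 0.0:                               # orient the normal against the ray
        nz, ny, cos1 = -nz, -ny, -cos1
    mu = n1 / n2
    cos2 = math.sqrt(1.0 - mu * mu * (1.0 - cos1 * cos1))   # Snell's law
    g = mu * cos1 - cos2
    return (z, y), (mu * d[0] + g * nz, mu * d[1] + g * ny)

def axis_crossing(z_obj, h):
    """Trace a ray from the axial object point at z_obj aimed at height h on the
    first surface through a biconvex singlet; return where it recrosses the axis."""
    L = math.hypot(z_obj, h)
    p, d = (z_obj, 0.0), (-z_obj / L, h / L)
    p, d = refract(p, d, 0.0, 100.0, 1.0, 1.5)    # front surface, R = +100 mm
    p, d = refract(p, d, 5.0, -100.0, 1.5, 1.0)   # back surface,  R = -100 mm
    return p[0] - p[1] * d[0] / d[1]

# longitudinal spherical aberration (paraxial minus marginal focus) for two conjugates
lsa_far = axis_crossing(-300.0, 0.01) - axis_crossing(-300.0, 10.0)
lsa_near = axis_crossing(-150.0, 0.01) - axis_crossing(-150.0, 10.0)
print(lsa_far, lsa_near)
```

The two values differ, i.e. the spherical aberration of the same lens at the same aperture height depends on the object distance, in agreement with the coefficient analysis above.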
The matrix B has to be calculated only once, and one can then use it for different values of magnification. The matrix form is also very useful for zoom lens design. It can be shown that the previous formulas are generally valid (within the validity of the third-order aberration theory) and do not depend on the composition of the optical system [21–28]. The properties of the optical system are then fully specified by its focal length f′ and the third-order aberration coefficients S0I, S0II, S0III, S0IV, S0V, S0VI, which characterize the imaging properties of the optical system for imaging of an object at infinity. The aberration coefficients SI, SII, SIII, SIV, SV are calculated for the following input values
and the aberration coefficients S0I, S0II, S0III, S0IV, S0V are determined for the following input values, where h1 and σ1 are the paraxial incidence height and angle of the aperture ray (first auxiliary ray), hP1 and σP1 are the paraxial incidence height and angle of the principal ray (second auxiliary ray) at the first surface of the optical system, s1 is the distance from the first surface of the optical system to the object plane, and sP1 is the distance from the first surface of the optical system to the entrance pupil. We can clearly see from the previous equations that the aberrations of the optical system change with varying object position. The optical system is called aberration-free for a given value of magnification (object position) if all aberration coefficients are zero for this magnification (object position), i.e. in our case SI = SII = SIII = SIV = SV = 0. Assume now that for the laser triangulation sensor we use an optical system (objective lens) that has all third-order aberrations corrected for the object at infinity, so that the third-order aberration coefficients have zero values: S0I = S0II = S0III = S0IV = S0V = 0. The matrix B then simplifies to the form of Eq. (27).
Assume that both the object and image space is air, n = n′ = 1 (the most common situation in practice). Using Eqs. (22)–(24) and Eq. (27) we can rewrite Eq. (20) in a form where NA is the numerical aperture in the object space, φ is the angle in the entrance pupil plane, w is the angle of the field of view (tan w = y/p1), and the upper index “0” denotes the aberration coefficients of the optical system which has zero aberration coefficients for the object at infinity. The mean value of the transverse ray aberrations (the coordinates of the centroid of the spot diagram) and the diameter dc of the circle of confusion in the paraxial image plane can be expressed by formulas [21,22] in which F0 is the f-number of the optical system for the object at infinity and w is the angle between the chief ray and the optical axis in the object space. The diameter dc of the circle of confusion gives the size of the area where almost all energy of the ray bundle is concentrated. Using the previous equations we obtain the “aberration-induced error” of measurement [21,22], where m is the transverse magnification of the optical system (m = 1/g), mP is the transverse magnification in the pupils of the optical system (mP = 1/gP), and y′ is the image size. A detailed discussion of this problem can be found in [21,22] and therefore we will not repeat it here. Using Eq. (4) we then obtain Eq. (33).
Inserting the corresponding quantities into Eq. (33), we obtain the resulting relations. Figure 3 shows the imaging of points lying in the object plane η, tilted with respect to the optical axis, by a real optical system OS with aberrations. The point A′ is the paraxial image of the point A, and the point C′ lying in the paraxial image plane η′ is the paraxial image of the point C (with the paraxial image height y′). The planes η and η′ satisfy the Scheimpflug condition. The homocentric bundle of rays emerging from the point C goes through the entrance pupil ζ and is transformed by the optical system OS into a nonhomocentric bundle of rays whose energetic centre is located at the point C″, shifted from the paraxial point C′. The chief ray of this bundle intersects the plane η′ at the point D′. According to Fig. 3 we obtain for this shift the relations of Eq. (36); the third expression in Eq. (36) is sufficiently accurate for practical cases. Equation (36) enables us to calculate the shift of the energetic centre of the spot at the detector (lying in the plane η′) with respect to the position of this energetic centre in the case of an ideal optical system without aberrations. As one can see, if we use an optical system corrected for the object at infinity (e.g. a photographic lens) in the laser triangulation sensor, then the measurement of objects at a finite distance is affected by a measurement error. The distance measurement error caused by the dependence of the aberrations of the optical system on the object distance can then be obtained according to Eq. (11).
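Besides aberrations, speckle produced by the roughness of the measured surface sets a lower bound on how precisely the spot centroid can be located. A frequently used estimate of this limit (cf. Refs. [7,29]) is δ ≈ Cλ/(2π sin u), where u is the observation aperture; the sketch below uses this form with assumed values and should be taken as an order-of-magnitude model rather than the paper's exact expression.

```python
import math

def speckle_error(C, lam, sin_u):
    """Speckle-limited centroid uncertainty, delta = C*lambda/(2*pi*sin u)."""
    return C * lam / (2.0 * math.pi * sin_u)

lam = 0.6328e-3                      # He-Ne wavelength (mm), assumed
for sin_u in (0.1, 0.05, 0.025):     # stopping the lens down increases this error
    print(sin_u, speckle_error(1.0, lam, sin_u))
```

Note the trend opposite to the aberration error: reducing the aperture lowers the aberration blur but increases the speckle-limited error, so an intermediate aperture is optimal.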
The accuracy of measurement is also affected by the roughness of the measured object. This problem was investigated in Refs. [7,8,14,29]. The measuring error due to surface roughness is then given by a relation of the form Cλ/(2π sin u0), where C is the speckle contrast (C = 1 for coherent illumination and C < 1 for partially coherent illumination), λ is the wavelength, and u0 is the observation aperture in the object space. The measurement accuracy is further affected by the properties of the photodetector and by the design, fabrication, and material properties of the measuring device. It was not the aim of this work to analyze all these error sources. The reader can find more information in Refs. [30,31], where the properties of photodetectors are described, and in Ref. [32], which is focused on optomechanical properties of optoelectronic devices. By a suitable adjustment, many of the above-mentioned effects can be reduced to an acceptable level. The adjustment process of optical systems and devices is described in detail in the books [33–35].

4. Example
Let us show an example of the calculation of the parameters of the laser triangulation sensor using an objective lens corrected for the object at infinity (S0I = S0II = S0III = S0IV = S0V = 0). We choose, for example, the values of α, β, t, and dmax = 15 mm.
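The design step of Section 2, choosing the tilt angle, the magnification, the measuring range t and the detector excursion d and solving for the focal length, can be sketched numerically. The closed form below is our reconstruction from the Newtonian relations, not necessarily the paper's exact formula, and all numbers are illustrative; as a consistency check, d is first generated from an assumed f′ = 100 mm design and the same focal length is then recovered.

```python
import math

alpha = math.radians(-30.0)               # object-line tilt (assumed)
m = -2.0                                  # magnification of the axial point (assumed)
k = 1.0 / m                               # tan(beta)/tan(alpha) by the Scheimpflug condition
beta = math.atan(k * math.tan(alpha))
t = -5.0                                  # measuring range along the beam (mm, assumed)

# consistency check: generate d from an assumed design with f' = 100 mm
f_true = 100.0
qA = f_true * k                           # qA = f' * tan(beta)/tan(alpha)
qB = qA + t * math.cos(alpha)
d = f_true**2 * t * math.cos(alpha) / (qA * qB * math.cos(beta))

# recover the focal length from (alpha, beta, t, d) alone
fp = (t * d * k * math.cos(alpha) * math.cos(beta)
      / (t * math.cos(alpha) - d * k * k * math.cos(beta)))
print(round(fp, 6))
```

The recovered value equals the assumed focal length, confirming that the design relation is consistent with the forward imaging model.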
The results of the error calculation for different object positions are given in Table 1, where Δt is the error in the determination of the object position, Δd is the shift of the position of the spot on the detector corresponding to the quantity Δt, and dc is the diameter of the circle of confusion in the plane η′ with the center at the point C″. The linear dimensions in Table 1 are given in millimeters. Table 1 presents two cases of objective lenses with different f-numbers. As one can see from Table 1, the change of the aberrations of the objective lens causes an error of the measured distance for the case of the 20 mm measuring range of the sensor. As can also be seen from Table 1, the error caused by the aberrations is higher than the error caused by the surface roughness of the measured object. By decreasing the aperture (increasing the f-number) the aberration error of the objective lens is reduced.
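The trade-off seen in Table 1 can be made explicit with a toy model. Assume the aberration blur scales as a·D³ with the pupil diameter D (third-order spherical aberration), while the speckle error scales as b/D (since sin u is proportional to D); the total error then has a minimum at D* = (b/3a)^(1/4). The coefficients below are illustrative assumptions, not values from the paper.

```python
# Toy model of the aperture trade-off: aberration error ~ a*D^3 grows with the
# pupil diameter D, while the speckle error ~ b/D grows when stopping down.
a, b = 1.0e-5, 2.0e-2                 # illustrative coefficients (mm-based units)

def total_error(D):
    return a * D**3 + b / D

# analytic optimum: d/dD (a*D^3 + b/D) = 3*a*D^2 - b/D^2 = 0
D_opt = (b / (3.0 * a)) ** 0.25

# crude numeric check on a grid of pupil diameters
D_grid = [0.5 + 0.01 * i for i in range(2000)]
D_best = min(D_grid, key=total_error)
print(D_opt, D_best)
```

Both the analytic and the grid minimum agree, illustrating why an intermediate aperture, rather than the smallest possible one, minimizes the combined measurement error.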
5. Conclusion
We performed a detailed theoretical analysis of the so-called Scheimpflug imaging condition, i.e. the problem of imaging objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system. The analysis was performed within the validity of the third-order aberration theory. It was shown that the fulfillment of the Scheimpflug condition does not guarantee a sharp image of the object, as is usually claimed, because due to the dependence of the aberrations of real optical systems on the object distance the image becomes blurred. The analysis of the influence of these effects on the accuracy of laser triangulation sensor measurements was presented, and formulas for the calculation of measurement errors and construction parameters of laser triangulation sensors were derived.
Acknowledgment
This work has been supported by the Czech Science Foundation grant 13-31765S.
References and links
1. J. Carpentier, “Improvements in Enlarging or like Cameras,” British Patent No. 1139 (1901).
2. T. Scheimpflug, “Improved Method and Apparatus for the Systematic Alteration or Distortion of Plane Pictures and Images by Means of Lenses and Mirrors for Photography and for other Purposes,” British Patent No. 1196 (1904).
3. L. Larmore, Introduction to Photographic Principles (Dover Publications, 1965).
4. S. F. Ray, Applied Photographic Optics (Focal Press, 2002).
5. http://www.linhof.de/index-e.html/
6. http://www.micro-epsilon.com/
7. R. G. Dorsch, G. Häusler, and J. M. Herrmann, “Laser triangulation: fundamental uncertainty in distance measurement,” Appl. Opt. 33(7), 1306–1314 (1994). [CrossRef] [PubMed]
8. R. Leach, Optical Measurement of Surface Topography (Springer, 2011).
9. K. Harding, Handbook of Optical Dimensional Metrology (Taylor & Francis, 2013).
10. K. Žbontar, M. Mihelj, B. Podobnik, F. Povše, and M. Munih, “Dynamic symmetrical pattern projection based laser triangulation sensor for precise surface position measurement of various material types,” Appl. Opt. 52(12), 2750–2760 (2013). [CrossRef] [PubMed]
11. H.-Y. Feng, Y. Liu, and F. Xi, “Analysis of digitizing errors of a laser scanning system,” Precis. Eng. 25(3), 185–191 (2001). [CrossRef]
12. R.-T. Lee and F.-J. Shiou, “Multi-beam laser probe for measuring position and orientation of freeform surface,” Measurement 44(1), 1–10 (2011). [CrossRef]
13. J. Liu, L. Tian, and L. Li, “Light power density distribution of image spot of laser triangulation measuring,” Opt. Lasers Eng. 29(6), 457–463 (1998). [CrossRef]
14. L. Shen, D. Li, and F. Luo, “A study on laser speckle correlation method applied in triangulation displacement measurement,” Optik (2013), submitted. [CrossRef]
15. H. Wang, “Long-range optical triangulation utilising collimated probe beam,” Opt. Lasers Eng. 23(1), 41–52 (1995). [CrossRef]
16. V. Lombardo, T. Marzulli, C. Pappalettere, and P. Sforza, “A time-of-scan laser triangulation technique for distance measurements,” Opt. Lasers Eng. 39(2), 247–254 (2003). [CrossRef]
17. G. Wang, B. Zheng, X. Li, Z. Houkes, and P. P. L. Regtien, “Modelling and calibration of the laser beam-scanning triangulation measurement system,” Robot. Auton. Syst. 40(4), 267–277 (2002). [CrossRef]
18. B. Muralikrishnan, W. Ren, D. Everett, E. Stanfield, and T. Doiron, “Performance evaluation experiments on a laser spot triangulation probe,” Measurement 45(3), 333–343 (2012). [CrossRef]
19. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University, 1999).
20. H. Gross, Handbook of Optical Systems: Fundamentals of Technical Optics (Wiley, 2005).
21. A. Miks and J. Novak, “Estimation of accuracy of optical measuring systems with respect to object distance,” Opt. Express 19(15), 14300–14314 (2011). [CrossRef] [PubMed]
22. A. Miks and J. Novak, “Dependence of camera lens induced radial distortion and circle of confusion on object position,” Opt. Laser Technol. 44(4), 1043–1049 (2012). [CrossRef]
23. A. Miks, Applied Optics (Czech Technical University, 2009).
24. H. A. Buchdahl, An Introduction to Hamiltonian Optics (Cambridge University, 1970).
25. W. T. Welford, Aberrations of the Symmetrical Optical Systems (Academic Press, 1974).
26. M. Herzberger, Modern Geometrical Optics (Interscience, 1958).
27. M. Herzberger, Strahlenoptik (Verlag von Julius Springer, Berlin, 1931).
28. C. G. Wynne, “Primary aberrations and conjugate change,” Proc. Phys. Soc. 65B, 429–437 (1952).
29. R. Baribeau and M. Rioux, “Centroid fluctuations of speckled targets,” Appl. Opt. 30(26), 3752–3755 (1991). [CrossRef] [PubMed]
30. F. Träger, Springer Handbook of Lasers and Optics (Springer, 2007).
31. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley & Sons, 2007).
32. P. E. Yoder, Jr., Opto-Mechanical Systems Design (CRC, 2006).
33. M. M. Rusinov, Alignment of Optical Instruments [Yustirovka opticheskikh priborov] (Nedra, 1969), in Russian.
34. J. Picht, Meß- und Prüfmethoden der optischen Fertigung (Akademie-Verlag, 1953).
35. F. Hansen, Justierung (VEB Verlag Technik, 1967).