
Analysis of imaging for laser triangulation sensors under Scheimpflug rule

Open Access

Abstract

In this work, a detailed analysis of the imaging of objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system is performed by means of geometrical optics theory. It is shown that the fulfillment of the so-called Scheimpflug condition (Scheimpflug rule) does not guarantee a sharp image of the object, as is usually claimed, because the aberrations of real optical systems depend on the object distance and the image therefore becomes blurred. The f-number of a given optical system also varies with the object distance. The influence of the above-mentioned effects on the accuracy of laser triangulation sensor measurements is shown. A detailed analysis of laser triangulation sensors, based on geometrical optics theory, is performed and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.

©2013 Optical Society of America

1. Introduction

In photographic practice one is often faced with the problem of imaging objects that lie in a plane tilted by some angle with respect to the optical axis of a rotationally symmetrical optical system, i.e. in a plane that is not perpendicular to the optical axis. A typical situation of an object lying in such a tilted plane is, for example, taking pictures of tall buildings. In the field of metrology this situation occurs, for example, in the case of laser triangulation sensors. The problem was first investigated by Jules Carpentier [1]; however, he did not describe it mathematically. A detailed analysis was performed later by Theodor Scheimpflug (1865-1911) [2], who proved mathematically that if the image of an object lying in a plane tilted with respect to the optical axis of a photographic lens is to be sharp, then the plane of the film (photographic plate, detector) has to be tilted with respect to the optical axis of the lens in such a way that it intersects the image principal plane at the same height as the object plane intersects the object principal plane, and that it passes through the image of the axial object point. This condition is called the Scheimpflug condition (Scheimpflug rule) [2–4]. Several companies manufacture professional photographic cameras that make it possible to use the Scheimpflug condition, and various tilt-and-shift lenses are available for classical cameras [5,6].

Another field where one meets the problem of imaging objects that lie in planes tilted with respect to the optical axis of a rotationally symmetrical optical system is that of laser triangulation sensors for distance or surface topography measurements [7–18]. Because Theodor Scheimpflug assumed the photographic lens to be an ideal optical system in his analysis, the results he obtained are not completely accurate for a real optical system. If one performs a more detailed analysis of the given type of imagery for the case of a real optical system, one finds that for different object points the optical system has different f-numbers and different aberrations. Due to these two effects the image generated by the real optical system will not be sharp. In order to reduce the negative influence of the above-mentioned effects as much as possible, one can reduce the aperture of the optical system, for example by setting the f-number higher than 11. To the best of our knowledge, this problem of Scheimpflug imaging with respect to aberrations has not yet been analyzed and described in the literature. In the following text we focus on a detailed analysis of laser triangulation sensors based on geometrical optics theory, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.

2. Imaging of objects lying in the plane tilted with respect to optical axis of rotationally symmetrical optical system

Let us now focus on the derivation of an equation for sharp imaging of object points lying on a straight line tilted with respect to the optical axis of an ideal rotationally symmetrical optical system, i.e. an optical system that images a point as a point, a line as a line, and a plane as a plane.

Assume imaging of two different points by the optical system shown in Fig. 1. The first point A lies on the optical axis at the axial distance q_A from the object focal point F, and the second point C lies at the perpendicular distance y from the optical axis and at the axial distance q_B from the object focal point F. F' is the image focal point of the system. The image of the point A is the point A', and the image of the point C is the point C', which lies at the perpendicular distance y' from the optical axis. Points P and P' are the principal points of the optical system. For imaging by such an optical system in air it holds [19,20]

\[ m = \frac{y'}{y} = -\frac{q'}{f'} = \frac{f'}{q}, \qquad (1) \]
\[ q\,q' = -f'^2, \qquad (2) \]
where m is the transverse magnification, f' is the focal length of the optical system, and q and q' are the distances of the object and image points from the object and image focal points, respectively. The meaning of the other symbols is evident from Fig. 1. The straight line going in the object space through the points A and C makes an angle α with the optical axis (α is negative), for which we can write
\[ \tan\alpha = \frac{y}{q_A - q_B}. \qquad (3) \]
Analogously, the straight line going in the image space through points A' and C' makes an angle β with the optical axis of the optical system (β is positive). It holds
\[ \tan\beta = \frac{y'}{q'_A - q'_B} = \frac{y'\,q_A q_B}{f'^2\,(q_A - q_B)} = \tan\alpha\,\frac{q_A}{f'} = \frac{\tan\alpha}{m_A}, \qquad (4) \]
where we used Eq. (2). If we denote h = \overline{PH} and h' = \overline{P'H'}, then according to Fig. 1 it holds
\[ \tan\alpha = \frac{h}{q_A - f'}, \qquad \tan\beta = \frac{h'}{q'_A + f'} = \frac{h'\,q_A}{f'(q_A - f')}. \qquad (5) \]
Using Eqs. (4) and (5) one obtains
\[ \frac{h'\,q_A}{f'(q_A - f')} = \left(\frac{h}{q_A - f'}\right)\frac{q_A}{f'} \qquad (6) \]
and therefore
\[ h = h'. \qquad (7) \]
Thus we can state the following lemma (Scheimpflug condition): "If an optical system images points lying on a straight line that is tilted with respect to the optical axis and goes through the axial object point, then the images of these points lie on a straight line going through the image of the axial object point, and both straight lines intersect the corresponding principal planes at the same height."

Fig. 1 Imaging of object in tilted plane by ideal optical system.
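The lemma can be checked numerically with an elementary model. The following sketch is an illustration only: it assumes an ideal thin lens described by the Gaussian imaging equation 1/z' - 1/z = 1/f', and all numerical values (focal length, object distance, tilt) are chosen arbitrarily. It images several points of a tilted object line and verifies that their images are collinear and that the object line and the image line cross the lens (principal) plane at the same height.

```python
import math

f_img = 50.0                 # focal length f' [mm], assumed value
z_A   = -200.0               # axial object point A, 200 mm in front of the lens
alpha = math.radians(20.0)   # tilt of the object line, assumed value

def image_of(z_o, y_o):
    """Image the point (z_o, y_o) using the Gaussian equation 1/z' - 1/z = 1/f'."""
    z_i = 1.0 / (1.0 / f_img + 1.0 / z_o)
    m = z_i / z_o                       # transverse magnification for this point
    return z_i, m * y_o

zA_img, _ = image_of(z_A, 0.0)          # image A' of the axial point A
h_object = -z_A * math.tan(alpha)       # height where the object line meets the lens plane

for s in (-30.0, -10.0, 10.0, 30.0):    # points along the tilted object line
    z_o = z_A + s * math.cos(alpha)
    y_o = s * math.sin(alpha)
    z_i, y_i = image_of(z_o, y_o)
    slope = y_i / (z_i - zA_img)        # image line through A' and this image point
    h_image = -slope * zA_img           # its height at the lens plane z = 0
    print(round(h_object, 6), round(h_image, 6))
```

For the ideal system the two printed heights coincide for every point on the line, which is exactly the statement of the lemma; Section 3 shows why this no longer holds once the aberrations vary with the object distance.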

Let us now apply the previous equations to the problem of laser triangulation sensors [7–18]. Assume we have a detector (e.g. a CCD matrix sensor) in the image plane, which is tilted by the angle β with respect to the optical axis of the optical system of the sensor; this enables us to measure the distance d = \overline{A'C'} = -y'/\sin\beta (d is positive and y' is negative). Now, if we want to determine the distance t = q_A - q_B in the object space, which corresponds to the value d measured by the detector, we proceed in the following way. From Eq. (4) we obtain

\[ y'(q_A - t) = t\,f'\tan\alpha \qquad (8) \]
and thus
\[ t = \frac{y'\,q_A}{f'\tan\alpha + y'} = \frac{d\,q_A\sin\beta}{d\sin\beta - f'\tan\alpha} = \frac{d\,q_A}{d - \dfrac{f'^2}{q_A\cos\beta}}. \qquad (9) \]
This equation is the fundamental equation of the laser triangulation method. The quantities f', α, β and q_A are known (they are given by the construction parameters of the measuring device). By measuring the quantity d, i.e. the distance between the points C' and A', one can calculate the distance t from the previous equation. Using Eq. (9) we obtain for the distance d the following formula
\[ d = \left(\frac{f'^2}{q_A\cos\beta}\right)\left(1 - \frac{q_A}{q_B}\right). \qquad (10) \]
By differentiation of Eq. (10) we have
\[ \delta d = \left(\frac{f'^2}{\cos\beta}\right)\frac{\delta q_B}{q_B^2}. \qquad (11) \]
Equation (11) enables us to calculate the change of the position of the spot on the detector corresponding to a change of the quantity q_B by a small value δq_B. If we can find the position of the spot on the detector with the accuracy δd, then the measurement error of the triangulation sensor is, according to Eq. (11), given by
\[ \delta q_B = q_B^2\left(\frac{\cos\beta}{f'^2}\right)\delta d. \qquad (12) \]
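The measuring-side relations can be summarized in a minimal sketch of Eqs. (4), (10) and (12) as reconstructed above. All numerical parameters below (q_A, f', α, the detector accuracy) are assumed, purely illustrative values, and in a real sensor the signed quantities must follow the sign convention used in the text.

```python
import math

# Illustrative sensor parameters (assumed values, not taken from the paper)
q_A   = 35.0                 # position of the reference object point A [mm]
f_img = 30.0                 # focal length f' of the objective lens [mm]
alpha = math.radians(20.0)   # projection angle

# Eq. (4): detector tilt satisfying the Scheimpflug condition
beta = math.atan(math.tan(alpha) * q_A / f_img)

def spot_position(q_B):
    """Eq. (10): spot coordinate d on the tilted detector for an object at q_B."""
    return (f_img**2 / (q_A * math.cos(beta))) * (1.0 - q_A / q_B)

def distance_error(q_B, delta_d):
    """Eq. (12): object-position error for a spot-localization accuracy delta_d."""
    return q_B**2 * (math.cos(beta) / f_img**2) * delta_d

q_B = 40.0
print(math.degrees(beta))            # required detector tilt [deg]
print(spot_position(q_B))            # spot position d [mm]
print(distance_error(q_B, 1e-3))     # delta q_B for a 1 um spot accuracy [mm]
```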
The calculation of the parameters of the laser triangulation sensor can be performed in the following way. We choose the values q_A, α, d, and t. For the focal length f' of the objective lens we then obtain, using Eqs. (4) and (9), the following formula
\[ f'^4 + (q_A\tan\alpha)^2 f'^2 - (q_A d)^2\left(1 - \frac{q_A}{t}\right)^2 = 0. \qquad (13) \]
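Equation (13) is a quadratic in f'^2, so the focal length follows directly from the chosen design values. A short sketch of this design step, using the reconstructed Eq. (13) and hypothetical input numbers:

```python
import math

def focal_length(q_A, alpha, d, t):
    """Solve Eq. (13), a quadratic in f'^2, for the focal length f' (positive root)."""
    b = (q_A * math.tan(alpha))**2
    c = (q_A * d)**2 * (1.0 - q_A / t)**2
    f_sq = (-b + math.sqrt(b * b + 4.0 * c)) / 2.0
    return math.sqrt(f_sq)

# Hypothetical design inputs: q_A = 50 mm, alpha = 25 deg, d = 10 mm, t = 8 mm
f_img = focal_length(50.0, math.radians(25.0), 10.0, 8.0)
beta  = math.atan(math.tan(math.radians(25.0)) * 50.0 / f_img)   # Eq. (4)
print(f_img, math.degrees(beta))
```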
Let us now consider imaging by a real optical system. Figure 2 shows the situation, where ζ and ζ' denote the entrance and exit pupils of the optical system. The diameter of the entrance pupil is D and the diameter of the exit pupil is D'. The meaning of the other symbols is clear from Fig. 2.

Fig. 2 Imaging of object in tilted plane by diffraction limited optical system.

Assume now that the optical system is diffraction limited (aberrations are zero). For the angle ω_m between the edge meridional rays that emerge from the point C and go through the edges of the entrance pupil, and for the angle ω'_m between the two edge meridional rays that go through the edges of the exit pupil and converge to the point C', one can derive the following relations

\[ \tan\omega_m = \frac{p_C D}{p_C^2 + y^2 - D^2/4}, \qquad \tan\omega'_m = \frac{p'_C D'}{{p'_C}^2 + y'^2 - D'^2/4}. \qquad (14) \]
For the sagittal rays one can write

\[ \tan\frac{\omega_s}{2} = \frac{D}{2\sqrt{p_C^2 + y^2}}, \qquad \tan\frac{\omega'_s}{2} = \frac{D'}{2\sqrt{{p'_C}^2 + y'^2}}. \qquad (15) \]

Denoting by ω_0 the angle between the two meridional rays emerging from the axial point A and going through the edges of the entrance pupil, and by ω'_0 the angle between the two meridional rays converging to the axial point A' and going through the edges of the exit pupil, one can derive the following relations

\[ \tan\frac{\omega_0}{2} = \frac{D}{2 p_A}, \qquad \tan\frac{\omega'_0}{2} = \frac{D'}{2 p'_A}. \qquad (16) \]
Denote the aperture angle of the ray bundle in the object space as θ and the aperture angle of the ray bundle in the image space as θ', i.e. the angle between the ray that goes through the edge of the pupil and the central ray of the bundle (the ray that halves the angle ω in the object space and ω' in the image space). One can then write
\[ \theta_m = \omega_m/2, \quad \theta'_m = \omega'_m/2, \quad \theta_s = \omega_s/2, \quad \theta'_s = \omega'_s/2, \quad \theta_0 = \omega_0/2, \quad \theta'_0 = \omega'_0/2. \qquad (17) \]
The f-number in the image space for the optical system in air is given by
\[ F = \frac{1}{2\sin\theta'}. \qquad (18) \]
As is known from the theory of optical imaging, the image of a point is not a point but some energy distribution called the point spread function (PSF). In the case of a diffraction limited optical system with a circular pupil, the diameter of the central part of the PSF (the Airy disc) for the imaging of the axial object point is given by [20]
\[ d_A = 2.4\,\lambda F, \qquad (19) \]
where λ is the wavelength of light.
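The dependence of the effective f-number and Airy-disc size on the image point can be illustrated with a short sketch of Eqs. (14) and (17)-(19). The exit-pupil diameter, pupil distance and image heights below are assumed, illustrative values only.

```python
import math

D_exit = 10.0          # exit pupil diameter D' [mm], assumed
lam    = 0.65e-3       # wavelength [mm], assumed

def f_number(p_C, y):
    """Eq. (14): full meridional angle omega'_m subtended by the exit pupil from an
    image point at axial distance p'_C and height y'; then theta' = omega'_m / 2
    (Eq. (17)) and F = 1 / (2 sin theta') (Eq. (18))."""
    tan_omega = p_C * D_exit / (p_C**2 + y**2 - D_exit**2 / 4.0)
    theta = math.atan(tan_omega) / 2.0
    return 1.0 / (2.0 * math.sin(theta))

for y in (0.0, 10.0, 20.0):            # image heights along the tilted detector
    F = f_number(100.0, y)
    print(y, F, 2.4 * lam * F)         # Eq. (19): Airy-disc diameter d_A
```

The printed values show that points farther along the tilted image plane see a smaller cone of rays, i.e. a larger working f-number and a larger diffraction spot, which is one of the two effects discussed in the Introduction.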

3. Change in aberrations with object position

Consider the problem of the influence of a change in the object position on the imaging properties of a general rotationally symmetrical optical system. The problem will be analyzed using the theory of third-order aberrations [19,21–28], which makes it possible to obtain the solution in a simple analytical form. The aberration properties for light of a specific wavelength are given by the third-order aberration coefficients S_I, S_II, S_III, S_IV, S_V, and S_VI, where S_I is the coefficient of spherical aberration, S_II is the coefficient of coma, S_III is the coefficient of astigmatism, S_IV is the Petzval sum, S_V is the coefficient of distortion, and S_VI is the coefficient of spherical aberration in pupils. Within the validity of the third-order aberration theory we obtain the following equations [21,22] for the transverse ray aberrations

\[ \delta y' = m\,\delta y + k\left(a_1 g^4 S_I - a_2 g^3 g_P S_{II} + a_3 g^2 g_P^2 S_{III} + a_4 S_{IV} - a_5 g\,g_P^3 S_V\right), \]
\[ \delta x' = m\,\delta x + k\left(b_1 g^4 S_I - b_2 g^3 g_P S_{II} + b_3 g^2 g_P^2 S_{III} + b_4 S_{IV}\right), \qquad (20) \]
where δx, δy are the transverse ray aberrations in the object plane, and δx', δy' are the transverse ray aberrations in the image plane. The object can be represented in a general case as the image created by a preceding optical system. The values δx and δy then describe the transverse ray aberrations of the preceding optical system in the object plane of the optical system under consideration. We can set δx = 0 and δy = 0 if no optical system is located in front of the considered optical system. The coefficients in the previous formulas are given by
\[ k = \frac{1}{2\,n\,g\,p_1^3}, \quad a_1 = y_{P1}(y_{P1}^2 + x_{P1}^2), \quad a_2 = (3 y_{P1}^2 + x_{P1}^2)\,y, \quad a_3 = 3 y_{P1} y^2, \quad a_4 = \frac{n^2 y_{P1} y^2}{p_1^2}, \quad a_5 = y^3, \]
\[ b_1 = x_{P1}(y_{P1}^2 + x_{P1}^2), \quad b_2 = 2 y_{P1} x_{P1} y, \quad b_3 = x_{P1} y^2, \quad b_4 = \frac{n^2 x_{P1} y^2}{p_1^2}, \qquad (21) \]
where m is the transverse magnification of the optical system, g is the angular magnification of the optical system, g_P is the angular magnification of the optical system in pupils, p_1 is the distance from the object to the entrance pupil, x_{P1}, y_{P1} are the coordinates of the intersection of the ray with the plane of the entrance pupil, y is the size of the object, and n, n' are the indices of refraction of the object and image space. The case when the object and image surfaces are not planar is described e.g. in [26,27]. The coefficients S_I, S_II, S_III, S_IV, S_V, which describe the imaging properties of the optical system for an arbitrary position of the object (arbitrary transverse magnification m), can be expressed using the third-order aberration coefficients S̄_I, S̄_II, S̄_III, S̄_IV, S̄_V (denoted here with a bar). These coefficients characterize the imaging properties of the optical system for imaging of an object at infinity (p_1 = ∞, transverse magnification m = 0), and the coefficient of spherical aberration in pupils satisfies S_VI = S̄_VI. The formulas for the aberration coefficients S_I, S_II, S_III, S_IV, S_V [19,21–28] can be rewritten, after a tedious derivation, into the following matrix form [21]
\[ \mathbf{S} = \mathbf{B}\,\mathbf{G}, \qquad (22) \]
where
\[ \mathbf{S} = \begin{pmatrix} g^4 S_I \\ g_P\,g^3 S_{II} \\ g_P^2\,g^2 S_{III} \\ S_{IV} \\ g_P^3\,g\,S_V \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} b_{11} & b_{12} & b_{13} & b_{14} & b_{15} \\ 0 & b_{22} & b_{23} & b_{24} & b_{25} \\ 0 & 0 & b_{33} & b_{34} & b_{35} \\ 0 & 0 & 0 & 0 & b_{45} \\ 0 & 0 & 0 & b_{54} & b_{55} \end{pmatrix}, \quad \mathbf{G} = \begin{pmatrix} g^4 \\ g^3 \\ g^2 \\ g \\ 1 \end{pmatrix}, \qquad (23) \]
\[ b_{11} = \bar{S}_I, \quad b_{12} = 4 g_P(\bar{S}_I + \bar{S}_{II}) - n f', \quad b_{13} = 6 g_P^2(\bar{S}_I - 2\bar{S}_{II} + \bar{S}_{III}) + 2 n^2 f'^2 \bar{S}_{IV}, \]
\[ b_{14} = 4 g_P^3(\bar{S}_I + 3\bar{S}_{II} - 3\bar{S}_{III} + \bar{S}_V) - 4 n^2 f'^2 g_P \bar{S}_{IV} + 3 n f', \]
\[ b_{15} = g_P^4(\bar{S}_I - 4\bar{S}_{II} + 6\bar{S}_{III} - 4\bar{S}_V + \bar{S}_{VI}) + 2 n^2 f'^2 g_P^2 \bar{S}_{IV} + g_P n f'(g_P^2 - 3), \]
\[ b_{22} = g_P \bar{S}_{II}, \quad b_{23} = 3 g_P^2(\bar{S}_{II} + \bar{S}_{III}) + n f'(n f' \bar{S}_{IV} - g_P), \quad b_{24} = 3 g_P^3(\bar{S}_{II} - 2\bar{S}_{III} + \bar{S}_V) - 2 n f'(n f' g_P \bar{S}_{IV} - 1), \]
\[ b_{25} = g_P^4(\bar{S}_{II} + 3\bar{S}_{III} - 3\bar{S}_V + \bar{S}_{VI}) + n^2 f'^2 g_P^2 \bar{S}_{IV} + g_P n f'(g_P^2 - 2), \]
\[ b_{33} = g_P^2 \bar{S}_{III}, \quad b_{34} = 2 g_P^3(\bar{S}_V - \bar{S}_{III}) - n f'(g_P^2 - 1), \quad b_{35} = g_P^4(\bar{S}_{III} + \bar{S}_{VI} - 2\bar{S}_V) + n f' g_P(g_P^2 - 1), \]
\[ b_{45} = \bar{S}_{IV}, \quad b_{54} = g_P^3 \bar{S}_V, \quad b_{55} = g_P^4(\bar{S}_V + \bar{S}_{VI}). \qquad (24) \]
From the previous relations it is evident that if the aberration coefficients of the optical system are known for one value of the magnification, then we can calculate the aberration coefficients of the optical system for any other value of the magnification. One can see from Eqs. (22)–(24) the advantage of the matrix form of the formulas for the aberration coefficients. The matrix B has to be calculated only once, and one can then use it for different values of the magnification. The matrix form is also very useful for zoom lens design.
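The practical content of Eqs. (22)-(24) is that the conjugate dependence of the third-order coefficients reduces to a single matrix-vector product. A minimal sketch of this evaluation step is given below; the matrix B must be filled once from Eq. (24) with the data of the actual lens, and here it is only a zero placeholder of the right shape.

```python
import numpy as np

def conjugate_coefficients(B, g, g_P):
    """Evaluate S = B G (Eq. (22)) and strip the g, g_P powers of Eq. (23)
    to recover S_I ... S_V for the conjugate given by g = 1/m, g_P = 1/m_P."""
    G = np.array([g**4, g**3, g**2, g, 1.0])
    S = B @ G
    return {
        "S_I":   S[0] / g**4,
        "S_II":  S[1] / (g_P * g**3),
        "S_III": S[2] / (g_P**2 * g**2),
        "S_IV":  S[3],
        "S_V":   S[4] / (g_P**3 * g),
    }

# Placeholder B (all infinity coefficients zero); fill from Eq. (24) for a real lens.
B = np.zeros((5, 5))
print(conjugate_coefficients(B, g=-4.0, g_P=1.2))
```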

It can be shown that the previous formulas are generally valid (within the validity of the third-order aberration theory) and do not depend on the composition of the optical system [21–28]. The properties of the optical system are then fully specified by its focal length f' and the third-order aberration coefficients S̄_I, S̄_II, S̄_III, S̄_IV, S̄_V, S̄_VI, which characterize the imaging properties of the optical system for imaging of an object at infinity. The aberration coefficients S_I, S_II, S_III, S_IV, S_V are calculated for the following input values

\[ h_1 = s_1/g, \quad \sigma_1 = 1/g, \quad h_{P1} = s_{P1}/g_P, \quad \sigma_{P1} = 1/g_P, \qquad (25) \]
and the aberration coefficients S̄_I, S̄_II, S̄_III, S̄_IV, S̄_V, S̄_VI are determined for the following input values
\[ h_1 = f', \quad \sigma_1 = 0, \quad h_{P1} = s_{P1}/g_P, \quad \sigma_{P1} = 1/g_P, \qquad (26) \]
where h_1 and σ_1 are the paraxial incidence height and angle of the aperture ray (first auxiliary ray), h_{P1} and σ_{P1} are the paraxial incidence height and angle of the principal ray (second auxiliary ray) at the first surface of the optical system, s_1 is the distance from the first surface of the optical system to the object plane, and s_{P1} is the distance from the first surface of the optical system to the entrance pupil. We can clearly see from the previous equations that the aberrations of the optical system change as the object position varies. The optical system is called aberration-free for a given value of the magnification (object position) if all aberration coefficients are zero for this magnification (object position), i.e. in our case S_I = S_II = S_III = S_IV = S_V = 0.

Assume now that the laser triangulation sensor uses an optical system (objective lens) that has all third-order aberrations corrected for the object at infinity, and therefore the third-order aberration coefficients have zero values: S̄_I = 0, S̄_II = 0, S̄_III = 0, S̄_IV = 0, S̄_V = 0, S̄_VI = 0. The matrix B then simplifies to

\[ \mathbf{B} = f'\begin{pmatrix} 0 & -n & 0 & 3n & n g_P(g_P^2-3) \\ 0 & 0 & -n g_P & 2n & n g_P(g_P^2-2) \\ 0 & 0 & 0 & -n(g_P^2-1) & n g_P(g_P^2-1) \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (27) \]
Assume that both the object and image space is air, n = n' = 1 (the most common situation in practice). Using Eqs. (22)–(24) and Eq. (27) we can rewrite Eq. (20) as
\[ \delta x' = \frac{1}{2g}\left[S_I^0 A^3\cos\varphi - S_{II}^0 A^2\sin 2\varphi\,\mathrm{tg}\,w + S_{III}^0 A\cos\varphi\,\mathrm{tg}^2 w\right], \]
\[ \delta y' = \frac{1}{2g}\left[S_I^0 A^3\sin\varphi - S_{II}^0 A^2(1 + 2\sin^2\varphi)\,\mathrm{tg}\,w + 3 S_{III}^0 A\sin\varphi\,\mathrm{tg}^2 w\right], \qquad (28) \]
where A = \sqrt{x_{P1}^2 + y_{P1}^2}/p_1 is the numerical aperture in the object space, φ is the angle in the entrance pupil plane, w is the angle of the field of view (tg w = y/p_1), and
\[ S_I^0 = f'\left[g^3 - 3g - g_P(g_P^2 - 3)\right], \quad S_{II}^0 = f'\left[g^2 g_P - 2g - g_P(g_P^2 - 2)\right], \quad S_{III}^0 = f'\left[g(g_P^2 - 1) - g_P(g_P^2 - 1)\right]. \qquad (29) \]
The upper index "0" in Eq. (29) denotes the aberration coefficients of an optical system that has zero aberration coefficients for the object at infinity. The mean values of the transverse ray aberrations (the coordinates of the centroid of the spot diagram) \overline{\delta x'}, \overline{\delta y'} and the diameter d_c of the circle of confusion in the paraxial image plane can be expressed by the following formulas [21,22]
\[ \overline{\delta x'} = 0, \qquad \overline{\delta y'} = \frac{1}{2g}\,A_M^2\,\mathrm{tg}\,w\;S_{II}^0, \qquad (30) \]
\[ d_c = \frac{A_M}{6g}\sqrt{9 A_M^4 (S_I^0)^2 + 48 A_M^2 S_I^0 S_{III}^0\,\mathrm{tg}^2 w + 24 A_M^2 (S_{II}^0)^2\,\mathrm{tg}^2 w + 90 (S_{III}^0)^2\,\mathrm{tg}^4 w}, \qquad (31) \]
where
\[ A_M = \frac{1}{2 F_0 (g - g_P)}, \qquad \mathrm{tg}\,w = \frac{y}{f'(g_P - g)}. \qquad (32) \]
F_0 is the f-number of the optical system for the object at infinity and w is the angle between the chief ray and the optical axis in the object space. The diameter d_c of the circle of confusion gives the size of the area where almost all energy of the ray bundle is concentrated. Using the previous equations we obtain for the "aberration-induced error" of measurement [21,22]
\[ \overline{\delta y'} = y'\,\frac{m\,m_P + m^2(1 - 2 m_P^2)}{8 F_0^2 (m_P - m)^2}, \qquad (33) \]
where m is the transverse magnification of the optical system (m = 1/g), m_P is the transverse magnification in pupils of the optical system (m_P = 1/g_P), and y' is the image size. A detailed discussion of this problem can be found in [21,22] and therefore we will not deal with it here.
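For a lens corrected for the object at infinity, Eq. (33) gives the centroid shift directly from m, m_P and F_0. The following sketch evaluates the formula as reconstructed above; the magnifications, image height and f-numbers are assumed, illustrative values only.

```python
def aberration_induced_error(y_im, m, m_P, F0):
    """Eq. (33) as reconstructed: mean lateral shift of the spot centroid caused by
    the conjugate-dependent aberrations of a lens corrected for the object at infinity."""
    return y_im * (m * m_P + m**2 * (1.0 - 2.0 * m_P**2)) / (8.0 * F0**2 * (m_P - m)**2)

# Assumed values: image height 5 mm, m = -0.2, pupil magnification m_P = 0.9
print(aberration_induced_error(5.0, -0.2, 0.9, F0=5.0))
print(aberration_induced_error(5.0, -0.2, 0.9, F0=8.0))   # stopping down reduces the shift
```

The 1/F_0^2 dependence visible in the two printed values is the reason why increasing the f-number reduces the aberration-induced measurement error, as noted in the Introduction.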

Using Eq. (4) we obtain

\[ y' = f'(m_B - m_A)\tan\beta = f'\left(\frac{m_B}{m_A} - 1\right)\tan\alpha. \qquad (34) \]
If we set m = m_B = f'/q_B in Eq. (33), we obtain
\[ \delta y'_B = y'\,\frac{m_B m_P + m_B^2(1 - 2 m_P^2)}{8 F_0^2 (m_P - m_B)^2}. \qquad (35) \]
Figure 3 shows the imaging of points lying in the object plane η, tilted with respect to the optical axis, by a real optical system OS with aberrations. The point A' is the paraxial image of the point A and the point C', lying in the paraxial image plane η', is the paraxial image of the point C. The planes η and η' satisfy the Scheimpflug condition. The paraxial image height is y' = \overline{B'C'}. The homocentric bundle of rays emerging from the point C goes through the entrance pupil ζ and is transformed by the optical system OS into a nonhomocentric bundle of rays, whose energetic centre is located at the point C'' at the distance δy'_B = \overline{C'C''} from the paraxial point C'. The chief ray \overline{E'C''} of this bundle intersects the plane η' at the point D', which is the intersection of the straight lines \overline{A'C'} and \overline{E'C''}. According to Fig. 3 we obtain for the distance δd_B = \overline{C'D'}
\[ \delta d_B = \frac{\delta y'_B\sqrt{1+\tan^2\beta}}{\tan w' - \tan\beta} = \frac{\delta y'_B\sqrt{1+\tan^2\beta}}{\dfrac{\tan w}{m_P\left(1 + \delta y'_B/y'\right)} - \tan\beta} \approx \frac{\delta y'_B\sqrt{1+\tan^2\beta}}{\dfrac{\tan w}{m_P} - \tan\beta}. \qquad (36) \]
Here w' denotes the angle of the chief ray with the optical axis in the image space. Because δy'_B/y' << 1, the third expression in Eq. (36) is sufficiently accurate for practical cases. Equation (36) enables us to calculate the shift δd_B of the energetic centre of the spot at the detector (lying in the plane η') with respect to the position of this energetic centre in the case of an ideal optical system without aberrations. As one can see, if an optical system corrected for the object at infinity (e.g. a photographic lens) is used in the laser triangulation sensor, then the measurement of objects at a finite distance is affected by this measurement error.

Fig. 3 Imaging of object in tilted plane by real optical system with aberrations.

According to Eq. (11), we can then write for the distance measurement error caused by the dependence of the aberrations of the optical system on the object distance

\[ \delta q_B = q_B^2\left(\frac{\cos\beta}{f'^2}\right)\delta d_B. \qquad (37) \]
The accuracy of measurement is affected by the roughness of the measured object. This problem was investigated in Refs [7,8,14,29]. The measuring error due to surface roughness is then given by
\[ \delta q_{\mathrm{speckle}} = \frac{C}{2\pi}\,\frac{\lambda}{\sin\alpha\,\sin u}, \qquad (38) \]
where C is the speckle contrast (C = 1 for coherent illumination and C < 1 for partially coherent illumination), λ is the wavelength, and sin u is the observation aperture in the object space. The measurement accuracy is also affected by the properties of the photodetector and by the design, fabrication, and material properties of the measuring device. It was not the aim of this work to analyze all these error sources. The reader can find more information in Refs [30,31], where the properties of photodetectors are described, and in Ref [32], which is focused on optomechanical properties of optoelectronic devices. By a suitable adjustment, many of the above-mentioned effects can be reduced to an acceptable level. The adjustment process of optical systems and devices is described in detail in the books [33–35].
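The two error contributions can be compared with a rough sketch of Eqs. (37) and (38). All numbers below are assumed and purely illustrative; in particular the spot shift δd_B is supplied as an input here rather than computed from the aberration data of a real lens.

```python
import math

def aberration_error(q_B, f_im, beta, delta_d_B):
    """Eq. (37): object-distance error from the aberration-induced spot shift delta_d_B."""
    return q_B**2 * math.cos(beta) / f_im**2 * delta_d_B

def speckle_error(wavelength, alpha, sin_u, C=1.0):
    """Eq. (38): speckle-limited measurement uncertainty (Dorsch et al. [7])."""
    return (C / (2.0 * math.pi)) * wavelength / (abs(math.sin(alpha)) * sin_u)

# Assumed, illustrative numbers only (not the values of Table 1):
dq_aber = aberration_error(q_B=45.0, f_im=35.0, beta=math.radians(20.0), delta_d_B=0.01)
dq_spec = speckle_error(wavelength=0.65e-3, alpha=math.radians(20.0), sin_u=0.05)
print(dq_aber, dq_spec)   # compare the two error contributions [mm]
```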

4. Example

Let us show an example of the calculation of the parameters of a laser triangulation sensor using an objective lens corrected for the object at infinity (S̄_I = 0, S̄_II = 0, S̄_III = 0, S̄_IV = 0, S̄_V = 0, S̄_VI = 0). We choose, for example, q_A = 35 mm, α = 20°, d_max = 15 mm, and t = 10 mm.

The results of the error calculation for different object positions q_B are given in Table 1, where δq_B is the error in the determination of the object position, δd_B is the shift of the position of the spot on the detector corresponding to the quantity δq_B, and d_c is the diameter of the circle of confusion in the plane ξ' with the center at the point C''. The linear dimensions in Table 1 are given in millimeters. Table 1 presents two cases of objective lenses with different f-numbers, F_0 = 5 and F_0 = 8. As one can see from Table 1, the change of the aberrations of the objective lens with the f-number F_0 = 5 causes an error of the measured distance of δq_B ≈ 0.034 mm for the 20 mm measuring range of the sensor. As can also be seen from Table 1, the error caused by the aberrations is higher than the error δq_speckle caused by the surface roughness of the measured object. By decreasing the aperture (increasing the f-number) the error of the objective lens is reduced.

Table 1. Triangulation Sensor Example
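The construction-parameter part of this example can be retraced with the relations of Section 2. The following sketch applies the reconstructed Eqs. (13), (4) and (12) to the chosen input values q_A = 35 mm, α = 20°, d_max = 15 mm and t = 10 mm; it yields a focal length, a detector tilt, and the detector-resolution-limited error over a range of object positions. It does not reproduce the aberration-induced errors δq_B of Table 1, which additionally require the aberration model of Eqs. (29)-(37) and the pupil data of the actual objective lens; the assumed 1 µm spot-localization accuracy is likewise only illustrative.

```python
import math

q_A, alpha, d_max, t = 35.0, math.radians(20.0), 15.0, 10.0   # example inputs

# Eq. (13): quadratic in f'^2 -> focal length of the objective lens
b = (q_A * math.tan(alpha))**2
c = (q_A * d_max)**2 * (1.0 - q_A / t)**2
f_img = math.sqrt((-b + math.sqrt(b * b + 4.0 * c)) / 2.0)

# Eq. (4): detector tilt satisfying the Scheimpflug condition
beta = math.atan(math.tan(alpha) * q_A / f_img)
print("f' =", round(f_img, 2), "mm, beta =", round(math.degrees(beta), 2), "deg")

# Eq. (12): object-position error for an assumed spot-localization accuracy delta_d
delta_d = 1e-3                                  # assumed 1 um centroiding accuracy [mm]
for q_B in (25.0, 30.0, 35.0, 40.0, 45.0):      # object positions over the range [mm]
    dq_B = q_B**2 * (math.cos(beta) / f_img**2) * delta_d
    print(q_B, round(dq_B, 5))
```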

5. Conclusion

It was shown that the fulfillment of the so-called Scheimpflug condition does not guarantee a sharp image of the object, as is usually claimed, because the aberrations of real optical systems depend on the object distance and the image therefore becomes blurred. We performed a detailed theoretical analysis of the so-called Scheimpflug imaging condition, i.e. the problem of imaging of objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system. The analysis was performed within the validity of the third-order aberration theory. The influence of the mentioned effects on the accuracy of laser triangulation sensor measurements was analyzed. Formulas for the calculation of measurement errors and construction parameters of laser triangulation sensors were derived.

Acknowledgment

This work has been supported by the Czech Science Foundation grant 13-31765S.

References and links

1. J. Carpentier, Improvements in Enlarging or like Cameras, British Patent No. 1139 (1901).

2. T. Scheimpflug, Improved Method and Apparatus for the Systematic Alteration or Distortion of Plane Pictures and Images by Means of Lenses and Mirrors for Photography and for other Purposes, British Patent No. 1196 (1904).

3. L. Larmore, Introduction to Photographic Principles (Dover Publications, 1965).

4. S. F. Ray, Applied Photographic Optics (Focal Press, 2002).

5. http://www.linhof.de/index-e.html/

6. http://www.micro-epsilon.com/

7. R. G. Dorsch, G. Häusler, and J. M. Herrmann, "Laser triangulation: fundamental uncertainty in distance measurement," Appl. Opt. 33(7), 1306–1314 (1994).

8. R. Leach, Optical Measurement of Surface Topography (Springer, 2011).

9. K. Harding, Handbook of Optical Dimensional Metrology (Taylor & Francis, 2013).

10. K. Žbontar, M. Mihelj, B. Podobnik, F. Povše, and M. Munih, "Dynamic symmetrical pattern projection based laser triangulation sensor for precise surface position measurement of various material types," Appl. Opt. 52(12), 2750–2760 (2013).

11. H.-Y. Feng, Y. Liu, and F. Xi, "Analysis of digitizing errors of a laser scanning system," Precis. Eng. 25(3), 185–191 (2001).

12. R.-T. Lee and F.-J. Shiou, "Multi-beam laser probe for measuring position and orientation of freeform surface," Measurement 44(1), 1–10 (2011).

13. J. Liu, L. Tian, and L. Li, "Light power density distribution of image spot of laser triangulation measuring," Opt. Lasers Eng. 29(6), 457–463 (1998).

14. L. Shen, D. Li, and F. Luo, "A study on laser speckle correlation method applied in triangulation displacement measurement," Optik, submitted (2013).

15. H. Wang, "Long-range optical triangulation utilising collimated probe beam," Opt. Lasers Eng. 23(1), 41–52 (1995).

16. V. Lombardo, T. Marzulli, C. Pappalettere, and P. Sforza, "A time-of-scan laser triangulation technique for distance measurements," Opt. Lasers Eng. 39(2), 247–254 (2003).

17. G. Wang, B. Zheng, X. Li, Z. Houkes, and P. P. L. Regtien, "Modelling and calibration of the laser beam-scanning triangulation measurement system," Robot. Auton. Syst. 40(4), 267–277 (2002).

18. B. Muralikrishnan, W. Ren, D. Everett, E. Stanfield, and T. Doiron, "Performance evaluation experiments on a laser spot triangulation probe," Measurement 45(3), 333–343 (2012).

19. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University, 1999).

20. H. Gross, Handbook of Optical Systems: Fundamentals of Technical Optics (Wiley, 2005).

21. A. Miks and J. Novak, "Estimation of accuracy of optical measuring systems with respect to object distance," Opt. Express 19(15), 14300–14314 (2011).

22. A. Miks and J. Novak, "Dependence of camera lens induced radial distortion and circle of confusion on object position," Opt. Laser Technol. 44(4), 1043–1049 (2012).

23. A. Miks, Applied Optics (Czech Technical University, 2009).

24. H. A. Buchdahl, An Introduction to Hamiltonian Optics (Cambridge University, 1970).

25. W. T. Welford, Aberrations of the Symmetrical Optical Systems (Academic Press, 1974).

26. M. Herzberger, Modern Geometrical Optics (Interscience, 1958).

27. M. Herzberger, Strahlenoptik (Verlag von Julius Springer, Berlin, 1931).

28. C. G. Wynne, "Primary aberrations and conjugate change," Proc. Phys. Soc. 65B, 429–437 (1952).

29. R. Baribeau and M. Rioux, "Centroid fluctuations of speckled targets," Appl. Opt. 30(26), 3752–3755 (1991).

30. F. Träger, Handbook of Lasers and Optics (Springer, 2007).

31. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley & Sons, 2007).

32. P. R. Yoder, Jr., Opto-Mechanical Systems Design (CRC, 2006).

33. M. M. Rusinov, Юстировка оптических приборов [Adjustment of Optical Instruments] (Недра, 1969).

34. J. Picht, Meß- und Prüfmethoden der optischen Fertigung (Akademie-Verlag, 1953).

35. F. Hansen, Justierung (VEB Verlag Technik, 1967).
