Optica Publishing Group

Geometrical camera calibration with diffractive optical elements

Open Access

Abstract

Traditional methods for geometrical camera calibration are based on calibration grids or on single-pixel illumination by collimated light. A new method for geometrical sensor calibration by means of diffractive optical elements (DOEs) in combination with laser beam equipment is presented. This method is especially suited to 2D sensor array systems but can in principle also be applied to line scanners.

©2008 Optical Society of America

1. Introduction

In order to use camera-based measurements in machine vision, high-accuracy geometric camera calibration is essential. The objective is to determine the interior camera parameters needed for mapping 3D world coordinates to 2D image coordinates. A common approach is photogrammetric calibration using predefined calibration grids [1, 2]. Several observations with different orientations are needed to estimate the camera parameters by minimizing a nonlinear error function. Due to the restricted grid size, this technique is more or less limited to close-range camera calibration. Another method, eligible for far-field camera calibration, uses collimator-goniometer arrangements to illuminate a set of n×m single pixels. Knowing the directions of the collimated light, it is possible to estimate the camera parameters [3]. A more comprehensive summary of key developments in camera calibration is provided in [4].

The calibration procedure reported here combines the particular advantages of calibration grid arrangements and single-pixel illumination. By using diffractive optical elements as beam splitters, only one image with n×m diffraction points is needed to estimate the interior camera parameters.

2. Camera calibration with diffractive optical elements

Diffractive optical elements (DOEs) can be used to split an incoming laser beam of wavelength λ into a number of beams with well-known propagation directions. As the image on the sensor is a Fraunhofer diffraction pattern, each projected image point represents a point at infinity, denoted in 3D projective space P³ by the homogeneous coordinate d = [X, Y, Z, 0]^T with

$$\mathbf{d} = \left[\, \lambda f_x,\ \lambda f_y,\ \bigl(1 - \lambda^2 (f_x^2 + f_y^2)\bigr)^{1/2},\ 0 \,\right]^T \tag{1}$$

where f = [f_x, f_y] denotes a spatial frequency encoded in the DOE. With suitable computational algorithms [5] it is possible to encode spatially aperiodic DOEs with arbitrary spatial frequencies, so that the propagation directions can be chosen freely. As they are easier to design for the large aperture diameters needed, spatially periodic DOEs were used here. Their spatial frequencies are given by f_{x,y} = n_{x,y}/g_{x,y}, with grating constants [g_x, g_y] and [n_x, n_y] denoting the particular diffraction orders. The grating vectors define the x- and y-axes of the DOE coordinate frame.
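As an illustration, Eq. (1) can be evaluated directly. The following sketch (function and variable names are ours, not from the paper) computes the unit propagation direction of a given diffraction order of a periodic DOE under perpendicular incidence:

```python
import numpy as np

def diffraction_direction(wavelength, nx, ny, gx, gy):
    """Unit propagation direction of diffraction order (nx, ny) of a crossed
    grating with grating constants gx, gy under perpendicular plane-wave
    incidence (Eq. (1)); wavelength and grating constants in the same unit."""
    fx, fy = nx / gx, ny / gy              # spatial frequencies encoded in the DOE
    sx, sy = wavelength * fx, wavelength * fy
    s2 = sx**2 + sy**2
    if s2 > 1.0:
        raise ValueError("evanescent order: no propagating beam")
    return np.array([sx, sy, np.sqrt(1.0 - s2)])

# Illustrative numbers: order (10, 0) of a 10 µm grating at 676.4 nm
d = diffraction_direction(676.4e-9, 10, 0, 10e-6, 10e-6)
```

The returned vector is a unit vector by construction, since the z-component is chosen so that the squared components sum to one.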

However, Eq. (1) is only valid if the incident light wave is a plane wave with uniform intensity distribution, perfectly perpendicular to the DOE surface. In a real setup, the beam is finite in extent and often has a non-uniform, typically Gaussian, intensity profile. Moreover, a slight tilt of the DOE with respect to the incident beam is hard to avoid.

The deviations of the real beam profile from a plane wave cause the diffraction spots in the far field to have a finite size, which can be estimated from the convolution theorem of Fourier optics [6]. For a more detailed analysis, a laser beam can be expressed by its angular spectrum. The resulting propagation directions are determined with the diffraction formula for non-perpendicular incidence on the DOE, which has to be applied in our analysis anyway because of the practically unavoidable tilt of the DOE with respect to the incident laser beam. For the following analysis, the DOE coordinate system will be used, in which the incident beam is given by

$$\mathbf{r} = \left[\, \sin(\beta),\ \sin(\alpha)\cos(\beta),\ \cos(\alpha)\cos(\beta) \,\right]^T \tag{2}$$

with the Euler angles α and β rotating the x- and y-axes of the DOE coordinate frame in terms of the collimator coordinate frame. The directions of the diffracted beams are then obtained as follows [9]:

$$\mathbf{d} = \left[\, \lambda f_x + r_x,\ \lambda f_y + r_y,\ \bigl(1 - (\lambda f_x + r_x)^2 - (\lambda f_y + r_y)^2\bigr)^{1/2},\ 0 \,\right]^T \tag{3}$$

It is straightforward to calculate the diffracted beam directions in the DOE coordinate frame by simple matrix operations; we therefore omit the somewhat lengthy resulting expressions.
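The tilted-incidence formula above reduces to the perpendicular case when α = β = 0. A minimal sketch (our function names) of both the incident-beam vector and the resulting diffracted direction:

```python
import numpy as np

def incident_direction(alpha, beta):
    # Unit vector of the incident beam in the DOE frame, angles in radians
    return np.array([np.sin(beta),
                     np.sin(alpha) * np.cos(beta),
                     np.cos(alpha) * np.cos(beta)])

def diffraction_direction_tilted(wavelength, fx, fy, alpha, beta):
    # Homogeneous direction of a diffracted order for tilted incidence;
    # the trailing 0 marks a point at infinity in projective space
    r = incident_direction(alpha, beta)
    sx = wavelength * fx + r[0]
    sy = wavelength * fy + r[1]
    sz2 = 1.0 - sx**2 - sy**2
    if sz2 < 0.0:
        raise ValueError("evanescent order")
    return np.array([sx, sy, np.sqrt(sz2), 0.0])
```

For α = β = 0 the incident vector is [0, 0, 1], so the first two components become λf_x and λf_y, recovering Eq. (1).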

In order to transform the beam directions into the camera coordinate frame, the exterior orientation of the camera in terms of the DOE coordinate frame has to be considered:

$$\mathbf{d}' = \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^T & 1 \end{bmatrix} \mathbf{d} \tag{4}$$

where R is a 3×3 rotation matrix defining the camera orientation and t is the translation vector for the camera position. Equation (4) shows that the mapping of ideal points at infinity is invariant against translation, which is a necessary condition for the following steps. It is also a great advantage compared to classical calibration grids, since just one image is sufficient for calibration and therefore fewer parameters have to be estimated.
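The translation invariance can be verified numerically: for a homogeneous direction with last coordinate zero, the translation column of the transform has no effect. A short sketch (names are ours):

```python
import numpy as np

def transform_to_camera_frame(d, R, t):
    """Apply the 4x4 rigid transform built from R and t to a homogeneous
    coordinate d, as in the mapping between DOE and camera frames."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T @ d

# A point at infinity (w = 0) is unaffected by the translation part:
d = np.array([0.1, 0.2, np.sqrt(1 - 0.05), 0.0])
R = np.eye(3)
out1 = transform_to_camera_frame(d, R, np.zeros(3))
out2 = transform_to_camera_frame(d, R, np.array([5.0, -3.0, 100.0]))
# out1 and out2 are identical: the camera position drops out of the model
```

This is exactly why a single image suffices and the translation t never needs to be estimated.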

2.1. DOE Design and Fabrication

In the design of the DOEs, the task is to distribute the incident laser power evenly among the diffraction orders chosen for the production of calibration spots. Because we intend to use large diffraction angles, an exact simulation of the powers of the diffracted orders would require a method based on exact electromagnetic diffraction theory, such as the rigorous coupled-wave analysis [7], which can be used to simulate crossed gratings.

However, such an exact simulation is computationally demanding and would take quite long, especially in our case where many diffraction orders are needed. From earlier experience we know that scalar diffraction theory can still deliver valuable predictions as long as the accuracy requirements are not too high [8]. This is exactly the case in our application: we are much more interested in an exact prediction of the diffraction angles and can tolerate non-ideal performance with respect to diffraction efficiency or uniformity error.

For our application, the choice of the grating constants g_x and g_y is very important, as it determines the angular spacing between the created waves. The range of orders to be created defines the overall angular range accessible in the calibration procedure. Because a large angular range can only be obtained by using sufficiently small pixels in the design, restrictions on the angular range can apply. The angular spacing between two adjacent diffraction orders [n_x, n_y] and [n_x+1, n_y] increases with both n_x and n_y. In the pattern created by the first designed and fabricated ‘71×71’ DOE (Table 1) it was found that for this reason the spot density decreased significantly in the outer sensor regions. Therefore a different design approach was used for the ‘29×29’ DOE, where suitable orders were selected in order to keep the spot density at a constant level over the sensor. Also, the angular spacing was increased, because the spot density of the earlier ‘71×71’ DOE was too high for cameras with smaller focal lengths.
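The growth of the angular spacing toward high orders follows directly from the grating equation sin θ = nλ/g. A small sketch with illustrative (not the paper's) grating parameters:

```python
import numpy as np

def order_angle(wavelength, n, g):
    # 1D diffraction angle of order n under perpendicular incidence,
    # from sin(theta) = n * wavelength / g; result in degrees
    return np.degrees(np.arcsin(n * wavelength / g))

# Angular spacing between adjacent orders grows toward high orders,
# which thins out the spot density at the sensor edges:
lam, g = 676.4e-9, 20e-6                    # hypothetical values
spacing_center = order_angle(lam, 1, g) - order_angle(lam, 0, g)
spacing_edge = order_angle(lam, 25, g) - order_angle(lam, 24, g)
# spacing_edge exceeds spacing_center
```

This is the effect that motivated the non-uniform order selection of the ‘29×29’ design.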

The DOEs were fabricated by e-beam lithography and subsequent etching of the chromium layer, so that the DOEs used were binary amplitude gratings. The undiffracted (‘zero-order’) beam has a power of 25% for such a grating and is therefore by far the strongest spot in the diffraction pattern, but this was tolerable for our experiments. If this turned out to be a limitation, fabrication of a binary phase-only element would still be possible by reactive ion etching into the fused silica substrate. In that case the diffracted orders would gain power at the expense of the zero order.

The crucial point in the fabrication is the need for high resolution in order to access a certain angular range, while on the other hand a large aperture is required to calibrate the whole lens system of the camera. The sequential e-beam writing process imposes limitations here, because the feasible writing time is restricted by the necessity to preserve the grating constants over the whole aperture. Thus, stitching and positioning errors, which are more likely to be introduced with increasing writing time, need to be avoided by finding a compromise between the size of the writing area and the pixel resolution. Table 1 summarizes the important parameters of the two different DOEs used in the experiments.

The accuracy of the diffraction angles depends on the accuracy of both the wavelength and the grating constants, as can be seen from Eq. (1). Therefore gas lasers emitting precisely defined wavelengths in the visible were used, rather than diode lasers, which can easily drift in wavelength. The angular accuracy was checked with a collimator-goniometer arrangement, which revealed only minor deviations from the computed values of less than 0.001°.
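The sensitivity of the diffraction angle to a wavelength drift can be estimated by differentiating sin θ = nλ/g. The sketch below uses illustrative numbers of our own choosing, not values from the paper, to show that even a sub-nanometre drift on a high order can exceed the 0.001° angular accuracy quoted above:

```python
import numpy as np

def angle_drift_deg(wavelength, dlam, n, g):
    # From sin(theta) = n*lam/g:  d(theta) = (n/g) * d(lam) / cos(theta)
    s = n * wavelength / g
    dtheta = (n / g) * dlam / np.sqrt(1.0 - s**2)   # radians
    return np.degrees(dtheta)

# e.g. a 0.1 nm drift on order 25 of a 20 µm grating (hypothetical values)
err = angle_drift_deg(676.4e-9, 0.1e-9, 25, 20e-6)
```

A drift of this size already produces an angular error above 0.001°, which is why wavelength-stable gas lasers were preferred.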

2.2. Calibration setup

The basic scheme for geometrical sensor calibration is illustrated in Fig. 1. A mixed-gas argon/krypton ion laser (Melles Griot 643 series) was used, which offers a selection of wavelengths in the visible spectral range. The beam was collimated and enlarged with a beam expander to a diameter of 78 mm. The enlarged beam was then diffracted by a DOE located directly in front of the camera optics. The diameter of the incident laser beam and that of the DOE active area should at least equal the aperture diameter of the camera lens. Each of the diffracted beams is focused within the image plane of the camera. In order to obtain spots over the whole camera sensor area, the maximum diffraction angle of the DOE should be larger than the field of view of the camera. No further alignment steps are necessary, because firstly the mapping of the diffraction points is invariant against translation, and secondly the rotation of the DOE in terms of the collimation system as well as the exterior orientation of the camera is modeled and can thus be determined, as will be shown in Section 3.

Fig. 1. Scheme of camera calibration with DOE

2.3. Camera model

The mapping of 3D world coordinates to 2D image coordinates is performed by central projection. Ideal beam directions d from Eq. (4) are projected onto the plane Z = 1:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} X/Z \\ Y/Z \\ 1 \end{bmatrix} \tag{5}$$

with [x, y]^T representing the ideal normalized image coordinates. Applying the camera matrix K of the pinhole model, we obtain the ideal pixel image coordinates [u, v]^T:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{K} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{6}$$

with:

$$\mathbf{K} = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{7}$$

where f is the focal length in pixel dimensions and [u_0, v_0]^T is the principal point in pixel coordinates. Because the pinhole model does not consider lens distortion, the model is extended by a position error δ:

$$\begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix} + \delta(x, y) \tag{8}$$

Several different distortion models are available. The most common is the radial distortion model by Brown [1], covering pincushion or barrel distortion, which is expressed as follows:

$$\delta(x, y) = \begin{bmatrix} x \\ y \end{bmatrix} \left( k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots \right) \tag{9}$$

with

$$r^2 = x^2 + y^2 \tag{10}$$

The complete mapping of ideal points to distorted image coordinates [û, v̂]^T is thus:

$$\begin{bmatrix} x \\ y \end{bmatrix} \mapsto \begin{bmatrix} \hat{u} \\ \hat{v} \end{bmatrix} = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + f \begin{bmatrix} x \\ y \end{bmatrix} \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots \right) \tag{11}$$
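The complete forward mapping, from an ideal direction at infinity to distorted pixel coordinates, can be sketched in a few lines (function and parameter names are ours; the distortion series is truncated at k₃ as in the model above):

```python
import numpy as np

def project(d, f, u0, v0, k1, k2, k3):
    """Map an ideal homogeneous direction d = [X, Y, Z, 0] to distorted
    pixel coordinates: central projection onto Z = 1, then radial
    distortion, then the pinhole parameters f, u0, v0."""
    x, y = d[0] / d[2], d[1] / d[2]          # normalized image coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    u = u0 + f * x * radial
    v = v0 + f * y * radial
    return np.array([u, v])
```

The on-axis direction [0, 0, 1, 0] maps to the principal point regardless of the distortion coefficients, since r² = 0 there.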

Given a set of corresponding points d ↔ [û, v̂]^T, we seek to minimize the cost function

$$\min_{\mathbf{m}} \sum \left\| \begin{bmatrix} \hat{u} - u_0 \\ \hat{v} - v_0 \end{bmatrix} - f \begin{bmatrix} x \\ y \end{bmatrix} \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots \right) \right\|^2 \tag{12}$$

where m = [f, u_0, v_0, k_1, k_2, k_3, ω, φ, κ, α, β]^T describes the interior and exterior orientation of the camera and a possible rotation (α, β) of the DOE in terms of the collimation coordinate frame. Since the mapping is invariant against translation, the exterior orientation consists only of the rotation matrix R, expressed by the Euler angles ω, φ, κ. Corresponding points are found by an iterative approach that constantly refines the model parameters. The results are improved by calculating the centroid of the diffraction points, which gives subpixel accuracy.
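The least-squares minimization over m can be sketched with a standard solver. The example below is a deliberately reduced model (our names; single radial term k₁, no rotation parameters) fitted to synthetic, noiseless data, just to illustrate the structure of the residual function; the paper's full parameter vector additionally contains k₂, k₃, the Euler angles of R, and the DOE tilt (α, β):

```python
import numpy as np
from scipy.optimize import least_squares

def project_simple(dirs, f, u0, v0, k1):
    # Reduced forward model: central projection plus one radial term
    x, y = dirs[:, 0] / dirs[:, 2], dirs[:, 1] / dirs[:, 2]
    radial = 1.0 + k1 * (x**2 + y**2)
    return np.stack([u0 + f * x * radial, v0 + f * y * radial], axis=1)

# Synthetic experiment: generate measurements from known ground truth
rng = np.random.default_rng(0)
xy = rng.uniform(-0.3, 0.3, size=(50, 2))
dirs = np.column_stack([xy, np.sqrt(1.0 - (xy**2).sum(axis=1))])
true = np.array([1200.0, 512.0, 480.0, -0.08])
uv_meas = project_simple(dirs, *true)

def residuals(m):
    # Stacked pixel residuals between model prediction and measurement
    return (project_simple(dirs, *m) - uv_meas).ravel()

fit = least_squares(residuals, x0=[1000.0, 500.0, 470.0, 0.0])
# fit.x recovers the ground-truth parameters on this noiseless data
```

With real images, the measurements uv_meas would come from the spot centroids, and the residual function would use the tilted-incidence direction model.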

3. Experimental results and discussion

The experiments were conducted with a Dalsa 1M28-SA, a monochrome CMOS camera, and the semiprofessional digital single-lens reflex camera Nikon D2X. Both cameras were calibrated with a wavelength of 676.4 nm, allowing a maximum diffraction angle of 59.96°, as needed to calibrate wide-angle lenses. After aligning the DOE to the collimator system within 200″, the cameras are initially (#1) aligned to the DOE system by direct lens reflections, which determined the principal point at [521, 481]^T for the Dalsa and [2153, 1430]^T for the Nikon. This method allows an accuracy of about 3–4 pixels. The achieved results are given in Tables 3 and 4, with interior orientations (u_0, v_0, f) stated in pixel dimensions. Exterior camera orientation (ω, φ, κ) and DOE tilt (α, β) are given in terms of the collimator coordinate frame and stated in degrees. The number of points used for calibration is denoted by n.

Table 2. Camera parameters

Dalsa 1M28-SA. Due to the low resolution of the camera and a short-focal-length wide-angle lens, the ‘29×29’ DOE was chosen for calibration. To prove that the interior and exterior orientation parameters are independent and separable, images with different exterior orientations (datasets #2 and #3) were taken. Furthermore, the camera was calibrated with a classic photogrammetric chessboard pattern calibration [10]. The achieved results are shown in Table 3.

Table 3. Calibration results for the Dalsa 1M28-SA

The standard deviation of the residuals between model and measurement points is less than 0.2 pixel (<2 µm), with a maximum error of 1 pixel for each dataset. Applying the parameters of interior orientation from one dataset to another, minimizing only for exterior orientation, leads to similar residuals. One major error source is the uncertainty in locating the centroid of each diffraction point at subpixel level, which is even more challenging when the projected points are rather small. Additionally, the distortion of the 4.8 mm wide-angle lens is very strong (see Fig. 4), so the distortion model used is only barely adequate in this case. A correction for radial distortion is shown in Figs. 2 and 3.
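A common way to obtain the subpixel spot position mentioned above is an intensity-weighted centroid over a small patch around each detected spot. A minimal sketch (our names; the paper does not specify its exact centroid algorithm):

```python
import numpy as np

def spot_centroid(patch, threshold=0.0):
    """Intensity-weighted centroid of a diffraction spot in a small image
    patch; returns subpixel (row, col) coordinates within the patch."""
    w = np.clip(patch.astype(float) - threshold, 0.0, None)
    total = w.sum()
    if total == 0.0:
        raise ValueError("no signal above threshold")
    rows, cols = np.indices(patch.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total

# A symmetric spot centered between pixels yields a half-pixel centroid:
patch = np.zeros((5, 5))
patch[2, 2] = patch[2, 3] = patch[3, 2] = patch[3, 3] = 1.0
# spot_centroid(patch) -> (2.5, 2.5)
```

Background subtraction via the threshold matters in practice, since a constant offset biases the centroid toward the patch center.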

Fig. 2. Original Dalsa image

Fig. 3. Corrected image

Fig. 4. Calibration pattern (+) with radial distortion vectors for Dalsa 1M28-SA

Nikon D2X. The second test series, with the Nikon D2X, was carried out to prove that the method also works with high-resolution cameras. Here the ‘71×71’ DOE with a higher density of diffraction points was used. In accordance with the wavelength used for calibration, only the red channel was evaluated. Starting with an aligned system (#1) and without changing the camera orientation in terms of the collimator frame, images with a tilted DOE were taken (#2, #3, #4). Additionally, the exterior orientation of the camera was changed (#5 and #6), leaving the DOE tilted. When rotating the camera for measurement #6, the holder of the DOE was touched by the camera lens, and from the obtained values we can derive that the DOE tilt was changed slightly during this collision.

Table 4. Calibration results for the Nikon D2X

It is noticeable that an exact alignment of the DOE with respect to the incident laser beam is apparently not required for obtaining a stable calibration result. Using Eq. (3), both the exterior camera orientation in terms of the collimator coordinate frame and the internal camera parameters can be reproduced very well for measurements #2, #3, and #4. For all measurements (#1…#6), the better resolution compared to the Dalsa, and therefore a more accurate subpixel position, as well as a better-fitting distortion model, lead to better results, with a standard deviation of less than 0.1 pixel (<0.4 µm) and a maximum residual of less than 0.3 pixel for each dataset.

4. Conclusion and outlook

A new approach for geometrical sensor calibration that uses custom-made diffractive optical elements as beam splitters with precisely known diffraction angles was described in this paper. As the virtual sources of the diffracted beams are points at infinity, the object to be imaged is similar to the starry sky, which gives an image invariant against translation. This particular feature allows a complete camera calibration with a single image, avoiding complex bundle adjustments and resulting in a very fast and reliable calibration procedure.

The achieved results are in accordance with classical camera calibration using the pinhole camera model and a radial distortion model. Decentering distortion has been included in our analysis as well, but with no improvement on the result. It was shown that a reliable solution can be obtained which allows separating the parameters of the interior orientation from the rotation of the DOE and the exterior orientation of the camera. Hence, a complex alignment of the calibration setup components is not necessary, which simplifies the calibration process and in principle allows in-field calibration.

A limiting factor for the accuracy is the determination of the subpixel position of each diffraction point; therefore, further investigations of spot shape and intensity distribution are intended. By using not just a single reference wavelength but several, the presented method also allows the determination and correction of chromatic aberrations. Reduction of transverse chromatic aberrations may even permit a resolution enhancement of the camera for sensor regions with strong wavelength-dependent geometrical distortions. Evidence for this expectation must be delivered by further investigations.

References and links

1. D. C. Brown, “Close-range camera calibration,” Photogrammetric Engineering 37, 855–866 (1971).

2. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Journal of Robotics and Automation 3, 323–344 (1987).

3. R. Schuster and B. Braunecker, “The calibration of the ADC (Airborne Digital Camera) system,” Int. Arch. of Photogrammetry and Remote Sensing XXXIII, 288–294 (2000).

4. T. A. Clarke and J. F. Fryer, “The development of camera calibration methods and models,” Photogrammetric Record 16, 51–66 (1998).

5. A. Hermerschmidt, S. Krüger, and G. Wernicke, “Binary diffractive beam splitters with arbitrary diffraction angles,” Opt. Lett. 32, 448–450 (2007).

6. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company Publishers, 2005).

7. M. G. Moharam, E. B. Grann, D. A. Pommet, and T. K. Gaylord, “Stable implementation of the rigorous coupled-wave analysis for surface-relief gratings: enhanced transmittance matrix approach,” J. Opt. Soc. Am. A 12, 1077–1086 (1995).

8. M. Ferstl, A. Hermerschmidt, D. Dias, and R. Steingrüber, “Theoretical and experimental properties of a binary linear beam splitting element with a large fan angle,” J. Mod. Opt. 51, 2125–2139 (2004).

9. R. C. McPhedran, G. H. Derrick, and L. C. Brown, “Theory of crossed gratings,” pp. 227–275 in R. Petit (ed.), Electromagnetic Theory of Gratings (Springer, Berlin, 1980).

10. K. H. Strobl, W. Sepp, S. Fuchs, C. Paredes, and K. Arbter, “DLR CalLab and DLR CalDe,” http://www.robotic.dlr.de/callab/.
