
Geometric model for an independently tilted lens and sensor with application for omnifocus imaging

Open Access

Abstract

Optical imaging systems in which the lens and sensor are free to rotate about independent pivots offer greater degrees of freedom for controlling and optimizing the process of image gathering. However, to benefit from the expanded possibilities, we need an imaging model that directly incorporates the essential parameters. In this work, we propose a model of imaging which can accurately predict the geometric properties of the image in such systems. Furthermore, we introduce a new method for synthesizing an omnifocus (all-in-focus) image from a sequence of images captured while rotating a lens. The crux of our approach lies in insights gained from the new model.

© 2017 Optical Society of America

1. INTRODUCTION

In this paper, we introduce a geometric model for imaging systems in which the lens and the sensor are free to rotate about independent pivots. An example of such an imager is a Scheimpflug camera.

Although several imaging models already exist, many of them rely on the thin-lens approximation, which is overly simplistic. For example, using a thin-lens model, it is impossible to describe the shift in the image field observed upon tilting a lens. On the other hand, thick-lens models that represent the lens using the cardinal planes do not explicitly consider the effects of the pupils on image formation. The absence of pupil parameters in these models makes it difficult to predict the exact nature of the warping in the image field induced by lens rotation.

The model in this paper is no more (or less) accurate than the current thick-lens models, yet it is better suited for predicting the nature of warping in the image when we rotate the lens about an arbitrary point along the optical axis. In the absence of aberrations, the center of perspective projection resides at the center of the entrance pupil [1]. That is, the bundle of chief rays emerging from points in the object space converges at the center of the entrance pupil, forming the vertex of the object-side perspective cone. Concomitantly, the center of the exit pupil, from which the bundle of chief rays diverges toward the image, constitutes the vertex of the perspective cone on the image side. Therefore, it seems natural that to make accurate predictions, the image formation model must explicitly incorporate the pupil parameters (the location and size of the entrance and exit pupils), which directly influence the nature of image warping.

We have divided this paper into two main parts. In the first part (Sections 2 and 3), we derive two general relationships: (1) Eq. (12) represents the mapping between image and object points in a system with an arbitrarily rotated lens and sensor planes; (2) Eq. (19) is the relationship between the position and orientations of the object, lens, and sensor planes required for focusing in such systems.

In the second part of the paper (Section 4), we combine salient features from Scheimpflug imaging and focus stacking to propose a new computational technique for circumventing the problem of limited depth of field (DOF). Specifically, we demonstrate how to synthesize an omnifocus (all-in-focus) image from a sequence of images captured while rotating a lens about the center of its entrance pupil.

The underlying mechanisms of our technique stem from the insights we gain about the properties of the geometric image using the derived model. We discuss the impact of the pupils on the correspondence problem between the images in the stack. In particular, we demonstrate that we can register the sequence of images analytically if we rotate the lens about the center of its entrance pupil. Analytic registration is advantageous because it avoids iterative algorithms and is unaffected by the noise and optical blurring that is inevitable in such methods. Our model also shows that if the entrance and exit pupils are of equal size, then the transformation between the images obtained while rotating a lens reduces to a combination of simple translation and scaling.

2. GEOMETRIC MODEL OF IMAGING WITH TILTED LENS AND SENSOR

A. Geometric Image for Tilted Lens and Sensor

1. Transfer of Chief Rays’ Direction Cosine from Entrance to Exit Pupil

To make the problem tractable, yet retain the complexity required for the model, we have made a few assumptions. Specifically, we assume the lens to be rectilinear and free of optical aberrations. We have also utilized a few constructs of paraxial optics theory: for example, we approximate the entrance and exit pupils as planes, and we assume that the object and image distances from the entrance and exit pupils, respectively, are large compared to the semi-diameters of the corresponding pupils. We also restrict the rotation angles of the object, lens, and sensor planes to between -π/2 and π/2 about both the x and y axes (in-plane rotations, i.e., rotations about the z axis, are irrelevant for our purpose). This constraint ensures non-negative values for the z-component of the plane normals and permits us to estimate the plane normals unambiguously.

Figure 1 shows a schematic of a general camera represented by the pupil and sensor planes. The figure also enumerates the set of symbols used in the mathematical derivation of our model. We denote the camera coordinate frame by {C}. The pivot for the lens is at the origin of {C}, about which the optical axis may rotate about the x and y axes. The centers of the paraxial entrance and exit pupils—represented by E and E'—lie along the optical axis at distances d_e and d'_e, respectively, from the origin of {C}. The diameters of the entrance and exit pupils are h_e and h'_e, respectively. The symbol {I} denotes the two-dimensional image coordinates. The origin of {I}, at which the sensor plane is pivoted, is located at t_i = [0, 0, z'_o]^T in {C}. The figure also illustrates two rays that are fundamental to geometric optics from the object space to the image space—the chief ray and the marginal ray. These two rays, along with the optical axis, always lie in the meridional plane that spans across the object and image spaces [2,3].

Fig. 1. Schematic of the general optical system with the lens pivoted at {C}, the sensor plane pivoted at {I}, and the object plane pivoted at {O}.

The chief ray, with direction cosine l, emerges from the object point x, passes through the center of the entrance pupil E, reemerges from the center of the exit pupil E' with direction cosine l', and intersects the sensor plane at x'. We expect the input direction cosine l and the output direction cosine l' to be coplanar with the optical axis (i.e., to lie in the same meridional plane); but are l and l' equal? Before we attempt to answer this question, we first consider a simpler question: if the chief ray makes angles ω and ω' with the optical axis in the object and image space, respectively, then is |ω| = |ω'|?

To find the relationship between ω and ω', we consider the marginal rays and the pupils. The marginal ray in the object space originates at the base (projection) of the object point x on the optical axis and travels to the edge of the paraxial entrance pupil at height h_e/2. The marginal ray in the image space travels from the edge of the exit pupil at height h'_e/2 to the base of the image point x' on the optical axis. Suppose the marginal ray makes an angle Ω with the optical axis in the object space and an angle Ω' with the optical axis in the image space. Then, if z_e ≫ h_e and z'_e ≫ h'_e, where z_e and z'_e are the axial distances of the object and image points from the entrance and exit pupils, respectively (generally the case in macroscopic imaging), we have tan(ω) ≈ 2yΩ/h_e and tan(ω') ≈ 2y'Ω'/h'_e, where y and y' denote the transverse heights of the object and image points. Taking the ratio of the two, we obtain

$$\frac{\tan(\omega)}{\tan(\omega')} = \frac{h'_e}{h_e}\,\frac{y\,\Omega}{y'\,\Omega'}.\tag{1}$$

Note that although the image point x´ lies in the sensor plane (by definition), its projection on the optical axis may not. The projection of x´ on the optical axis lies in the sensor plane only in the special, yet common case when the optical axis is normal to the sensor plane.

The ratio of the paraxial exit pupil height to the entrance pupil height, h´e/he, is defined as the pupil magnification mp [1,4,5]. Further, according to the Lagrange invariant property [4] of the two rays (the chief and the marginal rays), the transverse magnification (y´/y) is reciprocal to the angular magnification (Ω´/Ω). Therefore, Eq. (1) reduces to

$$\frac{\tan(\omega)}{\tan(\omega')} = m_p.\tag{2}$$

Equation (2) has been derived in [5] using a different approach. We see that, unlike for nodal rays, the angle that the chief ray makes with the optical axis in the object space is, in general, not equal to the angle it makes with the optical axis in the image space.

To derive the relation between the object and image space direction cosines of the chief ray—l and l'—let us initially suppose that the lens is in the nominal orientation in which the optical axis is coincident with the z axis of frame {C} (we shall relax this condition later). Consequently, the zenith angles of all chief rays in the object space and of all chief rays in the image space are ω and ω', respectively. For a chief ray, let the azimuthal angles in the object and image space be φ and φ', respectively. If we represent l = [l, m, n]^T and l' = [l', m', n']^T, then in terms of the azimuthal and zenith angles we have

$$\mathbf{l} = [l, m, n]^T = [\cos(\varphi)\sin(\omega),\ \sin(\varphi)\sin(\omega),\ \cos(\omega)]^T,$$
$$\mathbf{l}' = [l', m', n']^T = [\cos(\varphi')\sin(\omega'),\ \sin(\varphi')\sin(\omega'),\ \cos(\omega')]^T.\tag{3}$$

Please note that if the optical axis and the z axis of {C} were always coincident, then, utilizing axial symmetry, we could simplify the definition of the direction cosines by letting φ=90° (and φ´=90°), and restricting our analysis to the meridional plane. However, since the lens and the sensor are free to rotate about their independent pivots, the system is not axially symmetric. Therefore, we use the full definition of direction cosines: using both azimuthal and zenith angles.

Following a few algebraic steps using Eq. (2), Eq. (3), and using the fact that a chief ray in the object and image space is always confined to the same meridional plane (i.e., φ´=φ), we obtain

$$l' = \frac{1}{m_p}\frac{n'}{n}\,l,\qquad m' = \frac{1}{m_p}\frac{n'}{n}\,m,\qquad n' = \pm\frac{m_p\,n}{\sqrt{1+(m_p^2-1)\,n^2}}.\tag{4}$$
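
The expression for n' follows directly from Eq. (2): writing tan(ω') = tan(ω)/m_p and using n = cos(ω),

$$n' = \cos(\omega') = \frac{1}{\sqrt{1+\tan^2(\omega')}} = \frac{1}{\sqrt{1+\frac{1-n^2}{m_p^2 n^2}}} = \pm\frac{m_p\,n}{\sqrt{1+(m_p^2-1)\,n^2}},$$

with the sign ambiguity arising from the square root; the expressions for l' and m' then follow from φ' = φ.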

We can write Eq. (4) compactly as

$$\mathbf{l}' = \pm\frac{1}{\sqrt{1+(m_p^2-1)\,n^2}}\,\mathbf{M}_p\,\mathbf{l},\tag{5}$$
where Mp is a 3×3 diagonal matrix with 1, 1, and mp as the diagonal elements. Further, we can safely drop the negative sign in Eq. (5) since the ray emerging from the exit pupil travels in the direction of the positive z axis toward the sensor plane. Equation (5) represents the relationship between the input and output direction cosines l and l´ when the lens is not rotated (i.e., the optical axis coincides with the z axis of frame {C}).

To derive the general expression for the transfer of the chief ray's direction cosine, we first introduce R ∈ ℝ^{3×3}—the rotation matrix applied to the optical axis to rotate the lens about its pivot (at the origin of {C}). We also introduce a local coordinate frame {L} with its origin also at the lens's pivot, but fixed to the lens such that the z axis of {L} is along the optical axis. The pupil planes and the reference frame {L} rotate along with the optical axis when the lens rotates. As before, we represent the input direction cosine of the chief ray in frame {C} as l. The same vector expressed in frame {L} is l^L = R^T l. Consequently, its z-component is n_R = r_{:,3}^T l, where r_{:,3} is the third column of R. Using Eq. (5), we obtain the output direction cosine of the chief ray in reference frame {L} as

$$\mathbf{l}'^{\,L} = \frac{1}{\sqrt{1+(m_p^2-1)\,n_R^2}}\,\mathbf{M}_p\,\mathbf{R}^T\mathbf{l}.\tag{6}$$

Finally, we obtain the output direction cosine of the chief ray, in frame {C}, that emerges from the exit pupil as l´=Rl´L. That is,

$$\mathbf{l}' = \frac{1}{\sqrt{1+(m_p^2-1)\,n_R^2}}\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T\mathbf{l},\tag{7}$$
where n_R = r_{:,3}^T l.

We expect the direction cosine l' to have unit magnitude. It is indeed straightforward to show that the 2-norm of l' equals unity, with [1 + (m_p^2 - 1) n_R^2]^{-1/2} acting as the normalizing factor. Note that if the pupil magnification m_p of the lens equals unity, then l' = l. This result implies that the opening angles of the image- and object-space perspective cones are equal, irrespective of the orientation of the optical axis, if m_p = 1. In terms of geometric optics, m_p = 1 also implies that the paraxial entrance and exit pupil planes are coincident with the front and rear principal planes, respectively. Such lenses, in which m_p = 1, are called symmetric lenses.
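
As a quick numerical check of these two properties (our own illustrative sketch, not part of the derivation), the short Python snippet below evaluates Eq. (7) for an arbitrary lens rotation and confirms that the output direction cosine has unit magnitude, and that l' = l when m_p = 1; the rotation angles and the input direction are arbitrary choices.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def transfer_direction(l, R, mp):
    """Eq. (7): map the object-space chief-ray direction l to the image-space l'."""
    Mp = np.diag([1.0, 1.0, mp])
    nR = R[:, 2] @ l                      # z-component of l expressed in the lens frame {L}
    return (R @ Mp @ R.T @ l) / np.sqrt(1.0 + (mp**2 - 1.0) * nR**2)

R = rot_x(np.deg2rad(10)) @ rot_y(np.deg2rad(3))           # an arbitrary lens rotation
l = np.array([0.1, 0.2, np.sqrt(1.0 - 0.1**2 - 0.2**2)])   # a unit direction cosine
print(np.linalg.norm(transfer_direction(l, R, mp=2.0)))    # 1.0  (unit magnitude)
print(np.allclose(transfer_direction(l, R, mp=1.0), l))    # True (l' = l when m_p = 1)
```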

2. Expression of Image Coordinates for Arbitrary Orientation of Lens and Sensor Planes

Equation (7) relates the direction cosines of the chief ray in the object and image spaces. The expression already includes important parameters we would like to model—pupil magnification and the lens rotation matrix. All we are left to do is to incorporate the sensor plane’s orientation, the object point x, and the image point x´. In this section, we build upon Eq. (7) and use properties of planes and ray–plane intersection to obtain an expression for the image point coordinates x´.

The centers of the entrance and exit pupils are located at distances d_e and d'_e from the origin of {C} along the optical axis. Following the rotation of the optical axis, the new locations of the pupil centers in frame {C} become R[0, 0, d_e]^T = d_e r_{:,3} and R[0, 0, d'_e]^T = d'_e r_{:,3}. Further, we express the chief ray emerging from the exit pupil as k(λ) = d'_e r_{:,3} + λ l', where the parameter λ ∈ ℝ determines the length of the ray. Substituting Eq. (7) for l', we obtain

$$\mathbf{k}(\lambda) = d'_e\,\mathbf{r}_{:,3} + \frac{\lambda}{\sqrt{1+(m_p^2-1)\,n_R^2}}\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T\mathbf{l}.\tag{8}$$

We would like to determine the value of λ for which k(λ) = x'. Let z'_⊥ denote the perpendicular distance of the sensor plane from the origin of {C}. Further, if the sensor plane has unit surface normal n̂_i, then n̂_i^T x' = z'_⊥ represents the equation of the sensor plane in frame {C} in Hessian normal form. Therefore, when k(λ) = x', we obtain

$$\lambda = \frac{\left(z'_{\perp} - d'_e\,\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3}\right)\sqrt{1+(m_p^2-1)\,n_R^2}}{\hat{\mathbf{n}}_i^T\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T\mathbf{l}}.\tag{9}$$

Furthermore, if we represent the orientation of the sensor plane by R_i ∈ ℝ^{3×3}, then n̂_i = R_i [0, 0, 1]^T. Also, since the point t_i = [0, 0, z'_o]^T lies on the sensor plane, we can write z'_⊥ = n̂_i^T t_i = n̂_i(3) z'_o.

Substituting Eq. (9) into Eq. (8) and using z'_⊥ = n̂_i(3) z'_o, we obtain the expression for the image point x' as

$$\mathbf{x}' = d'_e\,\mathbf{r}_{:,3} + \frac{\hat{n}_i(3)\,z'_o - d'_e\,\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3}}{\hat{\mathbf{n}}_i^T\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T\mathbf{l}}\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T\mathbf{l}.\tag{10}$$

Let the location of the entrance pupil in {C} be x_e = d_e r_{:,3}. We express l in terms of x and x_e as l = (x - d_e r_{:,3}) / ‖x - x_e‖. Substituting l into Eq. (10) yields

$$\mathbf{x}' = d'_e\,\mathbf{r}_{:,3} + \frac{\hat{n}_i(3)\,z'_o - d'_e\,\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3}}{\hat{\mathbf{n}}_i^T\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T(\mathbf{x} - d_e\,\mathbf{r}_{:,3})}\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T(\mathbf{x} - d_e\,\mathbf{r}_{:,3}).\tag{11}$$

Equation (11) expresses the image point x' in the camera frame. It is more useful to represent x' in the two-dimensional image frame {I}. If we represent the coordinates of the image point in the camera frame {C} as x'^C, and the equivalent image point coordinates in the image frame {I} as x'^I, then x'^I = R_i^T (x'^C - t_i). Therefore, the expression for the image point in the two-dimensional image coordinates, when the lens and sensor planes are free to rotate about their own pivots, follows as

$$\mathbf{x}'^{\,I} = \mathbf{R}_i^T\!\left(d'_e\,\mathbf{r}_{:,3} - \mathbf{t}_i\right) + \frac{\hat{n}_i(3)\,z'_o - d'_e\,\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3}}{\hat{\mathbf{n}}_i^T\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T(\mathbf{x}^C - d_e\,\mathbf{r}_{:,3})}\,\mathbf{R}_i^T\,\mathbf{R}\,\mathbf{M}_p\,\mathbf{R}^T(\mathbf{x}^C - d_e\,\mathbf{r}_{:,3}).\tag{12}$$

Equation (12) gives the coordinates of the image point (in the image frame) as a function of the corresponding object point (in the camera frame), the position and orientation of the sensor plane, the orientation of the lens, the locations of the entrance and exit pupils, and the pupil magnification. Note that the image coordinates obtained in Eq. (12) are expressed in physical units.
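
Equation (12) is also straightforward to evaluate numerically. The short Python sketch below implements it directly; the function name and the example parameter values are ours and purely illustrative.

```python
import numpy as np

def image_point(x_obj, R, R_i, t_i, d_e, d_e_p, m_p):
    """Eq. (12): image-frame coordinates of an object point x_obj given in frame {C}.

    R      : 3x3 rotation of the lens about its pivot at the origin of {C}
    R_i    : 3x3 rotation of the sensor plane about its pivot t_i = [0, 0, z_o']
    d_e, d_e_p : signed distances of the entrance and exit pupil centers from the pivot
    m_p    : pupil magnification
    """
    Mp = np.diag([1.0, 1.0, m_p])
    r3 = R[:, 2]                              # rotated optical axis (third column of R)
    n_i = R_i @ np.array([0.0, 0.0, 1.0])     # sensor-plane unit normal
    v = R @ Mp @ R.T @ (x_obj - d_e * r3)     # un-normalized image-space chief-ray vector
    scale = (n_i[2] * t_i[2] - d_e_p * (n_i @ r3)) / (n_i @ v)
    x_img = R_i.T @ (d_e_p * r3 - t_i) + scale * (R_i.T @ v)
    return x_img[:2]                          # the third component is zero by construction

# Illustrative example: symmetric lens (m_p = 1) pivoted at its entrance pupil (d_e = 0),
# exit pupil 20 mm further along the axis, untilted sensor pivoted at z_o' = 45 mm,
# lens tilted by 10 degrees about the x axis.
a = np.deg2rad(10.0)
R = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
print(image_point(np.array([10.0, -20.0, -900.0]), R, np.eye(3),
                  np.array([0.0, 0.0, 45.0]), 0.0, 20.0, 1.0))
```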

We establish the veracity of Eq. (12) in Section 3.A by comparing the image point coordinates computed using Eq. (12) against the corresponding values generated via ray tracing in Zemax for several different combinations of the parameters. But first, we provide a brief qualitative study of the effects of lens rotations on the geometric properties of the image. Specifically, we investigate how the pupil magnification m_p and the location of the lens's pivot affect the geometric distortions. This study will help us in understanding the underlying mechanisms of the omnifocus image synthesis technique presented in Section 4.

Figure 2 shows the type of distortions in “images” of two planes in the object space. For this qualitative study the term “image” just means the point of intersection (POI) of the chief ray from the object point with the sensor plane obtained using Eq. (12). The object space consists of two planes—a near plane and a far plane. The near plane is a square of 88.15 mm on each side, and the far plane is a square of 178.3 mm on each side placed at twice the distance of the near plane from the entrance pupil. The object points consist of 7×7 square grids on each of the object planes; however, the subplots in Fig. 2 show only three rows out of seven. The exact distance of the near plane (and consequently the far plane) from the lens varies depending upon the pupil magnification, such that the images of the two planes are 4.5 mm on each side on the sensor plane. The sensor plane is not tilted for this study. Furthermore, when the optical axis is perpendicular to the sensor and object planes, the images of the two object planes perfectly overlap. The rotation of the lens distorts the image fields (set of image points) from the two object planes. We can observe that the nature of the distortion is affected by both the pupil magnification mp and the location of the lens’s pivot point with respect to the entrance pupil (de). If mp=1, the extent of scaling and transverse shift is uniform across the field. More importantly, if the lens is pivoted at the entrance pupil, then the geometric warping of the image field becomes independent of the object distance.


Fig. 2. Points of intersection (POI) of chief rays with the sensor plane. The set of chief rays originate from two parallel planes at two different depths from the lens such that, in the absence of lens rotation, their POIs perfectly overlap in the image plane (due to differing transverse magnification). The red and blue markers represent the POIs of the chief rays originating from the planes nearer and further from the lens, respectively. For all subplots shown above, the lens is rotated by 10° and 3° about the x axis and y axis, respectively. The top row—subplots (a), (b), (c)—shows the POIs when the lens is pivoted away (5 mm) from the center of the entrance pupil. The bottom row—subplots (d), (e), (f)—shows the same POIs when the lens is rotated about the center of the entrance pupil. The left, middle, and right columns correspond to lenses with pupil magnification m_p equal to 0.55, 1.0, and 2.0, respectively. We can observe that: (1) the POIs from different depths warp by different degrees, causing parallax when the lens is rotated about a point away from the entrance pupil, and (2) the nature of geometric distortion induced by lens rotation depends on the pupil magnification. Specifically, if m_p = 1, all image points experience the same amount of scaling and transverse shift.


B. Object, Lens, and Image Plane Relationships for Focusing using a Scheimpflug Camera

Hitherto, we have expressed the coordinates of the image point corresponding to an object point when the lens and sensor planes are free to rotate about their respective pivots. However, we did not apply any constraints on the orientations of the object, lens, and sensor planes such that points on the object plane are brought to focus (geometric) on the sensor plane. To that effect, we use a variant of the Gaussian lens formula for the parallel plane imaging configuration that relates the object-plane-to-entrance-pupil distance u, exit-pupil-to-image-plane distance u´, pupil magnification mp, and focal length f as shown below [5,6]:

$$\frac{1}{m_p\,u} + \frac{m_p}{u'} = \frac{1}{f}.\tag{13}$$

In Eq. (13) we specify the directed distances u and u' along the optical axis. Let us suppose that the object plane is pivoted at (0, 0, z_o) in the camera frame {C}. Also, we represent the orientation of the object plane using the rotation matrix R_o ∈ ℝ^{3×3}. Then, the object plane normal, following rotation, is the vector n̂_o = R_o [0, 0, 1]^T. Now, suppose the orientations of the three planes are such that points in the arbitrarily tilted object plane form focused images on the arbitrarily tilted image plane. Then, the projection of the chief ray in the object space from x to x_e on the optical axis and the projection of the chief ray in the image space from x'_e to x' on the optical axis must satisfy Eq. (13).

Following a similar formulation of the chief ray as in Section 2.A.2, we obtain ũ, the length of the chief ray from x to x_e, as (with ĉ_z = [0, 0, 1]^T)

$$\tilde{u} = \frac{z_o\,(\hat{\mathbf{n}}_o^T\hat{\mathbf{c}}_z) + d_e\,(\hat{\mathbf{n}}_o^T\mathbf{r}_{:,3})}{\hat{\mathbf{n}}_o^T\mathbf{l}},\tag{14}$$

and ũ', the length of the chief ray from x'_e to x', as

$$\tilde{u}' = \frac{z'_o\,(\hat{\mathbf{n}}_i^T\hat{\mathbf{c}}_z) - d'_e\,(\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3})}{\hat{\mathbf{n}}_i^T\mathbf{l}'}.\tag{15}$$

The ray vector of length ũ and direction l in the object space is ũ l. The projection of this ray vector on the optical axis ô (= R ĉ_z) is ũ (l · ô), and the corresponding directed distance (from x_e toward x) is u = ũ (l · ô). Similarly, the projection of the ray in the image space on the optical axis (and the corresponding directed distance) is u' = ũ' (l' · ô). Substituting u and u' into Eq. (13), and using Eq. (7), we obtain

$$\frac{\hat{\mathbf{n}}_o^T\mathbf{l}}{m_p\left[z_o\,(\hat{\mathbf{n}}_o^T\hat{\mathbf{c}}_z) - d_e\,(\hat{\mathbf{n}}_o^T\mathbf{r}_{:,3})\right](\mathbf{l}\cdot\hat{\mathbf{o}})} + \frac{m_p\,\hat{\mathbf{n}}_i^T(\mathbf{R}\mathbf{M}_p\mathbf{R}^T\mathbf{l})}{\left[z'_o\,(\hat{\mathbf{n}}_i^T\hat{\mathbf{c}}_z) - d'_e\,(\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3})\right]\left[(\mathbf{R}\mathbf{M}_p\mathbf{R}^T\mathbf{l})\cdot\hat{\mathbf{o}}\right]} = \frac{1}{f}.\tag{16}$$

Following some algebraic manipulations, and noting in particular that n̂_i^T R M_p R^T l is equivalent to (R M_p R^T n̂_i)^T l because M_p is a diagonal (hence symmetric) matrix and R is a rotation matrix, we obtain

$$\mathbf{l}^T\!\left[\frac{\hat{\mathbf{n}}_o}{m_p\left[z_o\,\hat{n}_o(3) - d_e\,(\hat{\mathbf{n}}_o^T\mathbf{r}_{:,3})\right]} + \frac{\mathbf{R}\mathbf{M}_p\mathbf{R}^T\hat{\mathbf{n}}_i}{z'_o\,\hat{n}_i(3) - d'_e\,(\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3})} - \frac{\mathbf{r}_{:,3}}{f}\right] = 0.\tag{17}$$

The 2-norm of the direction cosine l equals one, and l, in general, cannot be perpendicular to the bracketed vector in Eq. (17) for every chief ray originating from the object plane. Therefore, the bracketed vector must vanish, and we obtain

$$\frac{\hat{\mathbf{n}}_o}{m_p\left[z_o\,\hat{n}_o(3) - d_e\,(\hat{\mathbf{n}}_o^T\mathbf{r}_{:,3})\right]} + \frac{\mathbf{R}\mathbf{M}_p\mathbf{R}^T\hat{\mathbf{n}}_i}{z'_o\,\hat{n}_i(3) - d'_e\,(\hat{\mathbf{n}}_i^T\mathbf{r}_{:,3})} = \frac{\mathbf{r}_{:,3}}{f}.\tag{18}$$

Further, we can simplify Eq. (18) if we let ñ_o = n̂_o / n̂_o(3) = [n̂_o(1)/n̂_o(3), n̂_o(2)/n̂_o(3), 1]^T and ñ_i = n̂_i / n̂_i(3) = [n̂_i(1)/n̂_i(3), n̂_i(2)/n̂_i(3), 1]^T. Then, after factoring n̂_o(3) and n̂_i(3) out of the denominator terms, we can write Eq. (18) as

$$\frac{\tilde{\mathbf{n}}_o}{m_p\left[z_o - d_e\,(\tilde{\mathbf{n}}_o^T\mathbf{r}_{:,3})\right]} + \frac{\mathbf{R}\mathbf{M}_p\mathbf{R}^T\tilde{\mathbf{n}}_i}{z'_o - d'_e\,(\tilde{\mathbf{n}}_i^T\mathbf{r}_{:,3})} = \frac{\mathbf{r}_{:,3}}{f}.\tag{19}$$

This expedient simplification from Eq. (18) to Eq. (19) is possible because we can describe the unit normal vectors n̂_o and n̂_i using only their components along the x and y axes. In other words, if we know the x and y components of the normal, we can determine the z component uniquely because the planes are limited to rotations between -π/2 and π/2 about both the x and y axes (one of the assumptions in this model).

Equation (19) is the most general form, in the sense that it yields the specific formulas for standard imaging configurations such as fronto-parallel imaging, focusing with only sensor tilt, focusing with only lens tilt, or focusing with both sensor and lens tilts.
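
For example, in the fronto-parallel configuration (R = R_o = R_i = I, so that ñ_o = ñ_i = r_{:,3} = ĉ_z and R M_p R^T ñ_i = m_p ĉ_z), Eq. (19) collapses component-wise to

$$\frac{1}{m_p\,(z_o - d_e)} + \frac{m_p}{z'_o - d'_e} = \frac{1}{f},$$

which is Eq. (13) with the directed distances u = z_o - d_e and u' = z'_o - d'_e measured along the (untilted) optical axis.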

3. VERIFICATION OF MODEL FOR IMAGING WITH TILTED LENS AND SENSOR

A. Verification of the Imaging Equation in Zemax

We verified the accuracy of the imaging equation, Eq. (12), by comparing the numerically computed image points (intersections of chief rays with a tilted image plane) obtained using Eq. (12) with the corresponding image points obtained by tracing chief rays from a grid of object points on a tilted object plane. Figure 3 shows the layout plot of the optical system implemented in Zemax, showing (1) an object plane, (2) an ideal lens made from two paraxial surfaces and pivoted about a point away from the entrance pupil (d_e = 5 mm), and (3) an image plane pivoted about a point on the z axis.


Fig. 3. Chief rays traced from a grid of points in the object plane through an ideal lens tilted about a point de=5mm away from the entrance pupil along the optical axis to the tilted image plane.


The results of the simulation are tabulated in Table 1, which shows the set of object points, the numerically computed image points, the ray traced image points, and the absolute difference between the numerically computed and ray traced image points. We observe that the numerically computed and ray traced values of the image points are very close; the small difference in their values can be attributed to the error associated with floating point operations. This comparison demonstrates the accuracy of the analytically derived expression [Eq. (12)] representing the geometric relationship between a three-dimensional object point and its image point in the absence of optical aberrations.


Table 1. Comparison of Numerically Computed Image Points using Eq. (12) and Ray Traced Image Points in Zemax for the Optical System Shown in Fig. 3

B. Verification of Equation for Focusing on Tilted Planes in Zemax

While several relationships between the object, lens, and image planes can be derived from Eq. (19), corresponding to specific Scheimpflug imaging configurations, here we show and verify the relationships for focusing on an object plane tilted about the x axis by rotating a thick lens about the center of its entrance pupil. For this configuration, starting from Eq. (19), we obtain the following two relationships: an expression for the image plane pivot distance z'_o, and an expression for the object tilt angle β:

$$z'_o = d\cos\alpha + \frac{m_p\,z_o\,f\,(m_p\cos^2\alpha + \sin^2\alpha)}{m_p\,z_o\cos\alpha + f},\tag{20}$$
and
$$\tan\beta = \frac{\sin\alpha\left[m_p\,z_o + f\,(1-m_p)\cos\alpha\right]}{f\,(m_p\cos^2\alpha + \sin^2\alpha)},\tag{21}$$
where α is the lens tilt angle about the x axis and d is the (signed) distance from the entrance pupil (the pivot in this configuration) to the exit pupil along the optical axis.

Table 2 enumerates the results of our test. To verify the above equations, we implemented a thick-lens model of focal length f=24.0mm in Zemax using two paraxial surfaces (to simulate aberration-free, geometric imaging) having pupil magnification mp=2. The lens surfaces were grouped within two coordinate break surfaces that allowed the lens to be tilted about the entrance pupil. The object plane surface was placed at zo=504.0mm from {C} (and from the entrance pupil). For every object plane orientation β (col. 1), the appropriate lens tilt angle α (col. 2) and image plane distance z´o (col. 3) were obtained using Zemax’s optimization function, to minimize the spot radius across the field. Following optimization for every β, the value of α obtained from Zemax (along with the values of mp, zo, f) was used to numerically compute β (col. 4) and z´o (col. 5) using the derived Eqs. (20) and (21). We can observe that the values of β and z´o obtained numerically using the derived equations are very closely matched.


Table 2. Verification of Eq. (20) and Eq. (21) for Focusing on a Tilted Object Plane by Tilting a Lens about the Entrance Pupil

It must be noted that while Eq. (21) is useful for finding the object plane tilt angle β for a given lens tilt angle α, obtaining the inverse function for evaluating α in terms of β is not straightforward. However, a simple iterative algorithm, which starts from an initial estimate of α obtained by setting m_p = 1, can be used to estimate the lens tilt angle α required for focusing on a tilted object surface.
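
One possible realization of such an iteration is sketched below in Python (a sketch of our own; the paper does not prescribe a particular update rule, and the numerical values are illustrative). It repeatedly re-solves Eq. (21) for sin α while holding the slowly varying α-dependent factors at the previous iterate, starting from the m_p = 1 estimate arcsin(f tan β / z_o).

```python
import numpy as np

def lens_tilt_for_object_tilt(beta, z_o, f, m_p, tol=1e-9, max_iter=50):
    """Iteratively solve Eq. (21) for the lens tilt alpha (rad) given the object tilt beta."""
    t = np.tan(beta)
    alpha = np.arcsin(f * t / z_o)                 # initial estimate (m_p = 1 case)
    for _ in range(max_iter):
        c, s = np.cos(alpha), np.sin(alpha)
        new = np.arcsin(t * f * (m_p * c**2 + s**2) / (m_p * z_o + f * (1 - m_p) * c))
        if abs(new - alpha) < tol:
            return new
        alpha = new
    return alpha

def object_tilt(alpha, z_o, f, m_p):
    """Eq. (21), forward direction: object tilt beta for a given lens tilt alpha."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.arctan(s * (m_p * z_o + f * (1 - m_p) * c) / (f * (m_p * c**2 + s**2)))

# Round-trip check, beta -> alpha -> beta, for m_p = 2, z_o = 504 mm, f = 24 mm:
beta = object_tilt(np.deg2rad(10.0), 504.0, 24.0, 2.0)
print(np.rad2deg(lens_tilt_for_object_tilt(beta, 504.0, 24.0, 2.0)))   # ~10.0
```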

4. APPLICATION OF THE MODEL FOR OMNIFOCUS IMAGING USING LENS TILT

A. Theory

We can infer several insights about the geometric properties of the image formed in a Scheimpflug camera from Eq. (12). In this section, we use one such interesting consequence of Eq. (12) that is useful for synthesizing an omnifocus image by selectively blending multiple images captured while rotating a lens about its entrance pupil center.

An omnifocus image has everything from the close foreground to the far background in sharp focus [7]. Lenses can focus only on a single surface—usually a plane, called the plane of sharp focus—as dictated by the laws of optics. Consequently, objects fore and aft of the plane of sharp focus gradually become out of focus and appear blurry in the image. This interplay of light and lenses leads to the limited depth of field (DOF) problem. Several methods have been proposed to circumvent this problem, for example, depth-dependent image deconvolution, wavefront coding, plenoptic imaging, Scheimpflug imaging, focus stacking, etc.

In Scheimpflug imaging, the lens or the sensor or both are rotated, which induces a rotation of the plane of sharp focus, allowing scenes with significant depths (or object planes that are tilted) to be in focus at the image plane [8].

In focus stacking (or z-stacking), a number of images are captured at multiple focus depths by changing either the focal length or the image plane distance. Consequently, regions of the scene that are a specific distance from the lens and within the DOF are in focus only in a single image. Collectively, however, the stack contains all or most regions of the scene in focus distributed among the images in the stack. An omnifocus image is created by registering the images, followed by identifying and blending the in-focus regions [7,9].

The DOF region in Scheimpflug imaging is still limited to a small region (approximately a wedge) around the plane of sharp focus. In focus stacking, significant portions of each DOF region extend perpendicular to the optical axis of the lens and beyond the field-of-view of the camera, resulting in suboptimal utilization of each DOF.

Our analysis of Eq. (12) suggests that we can borrow the central ideas of Scheimpflug imaging and focus stacking methods to devise a simple technique for creating omnifocus images while bypassing the above shortcomings of either method. Our technique relies on capturing multiple images of the scene while rotating a lens about the entrance pupil center. We also show that the proposed method is simplest if the pupil magnification mp of the lens equals one (i.e., a symmetric lens).

A critical step in the synthesis of an omnifocus image from a set of images is registration, which is the process of spatially aligning the images in the stack to a reference image by applying a mapping function—either known a priori from the model or estimated from the images. The degree of accuracy of image registration directly influences the quality of the synthesized image.

In general, a rotation of the lens about a pivot along the optical axis results in a complex depth-dependent warping of the image field. The extent of distortion of the points in the image is a function of the point’s depth in the object space. In other words, different parts of the scene warp by different amounts when the lens is rotated. This phenomenon is called the parallax effect. Although there are algorithms for registering images of the same scene exhibiting local variations, the methods are typically iterative in nature, and there are fundamental limits to the achievable registration accuracy [10], especially in the presence of noise and non-geometric distortions such as defocus blur.

If, however, the lens is rotated about its entrance pupil, then the image field warping is independent of the scene depth and we can unwarp each image using a single transformation matrix. Moreover, from a purely geometric standpoint, the images in the stack are pair-wise related through a mapping H(δα): x'_i → x'_j, where δα is the difference between the lens orientations under which x'_i and x'_j were captured. Further, we can derive this mapping, called the inter-image homography [11], from Eq. (12), allowing us to analytically register the images in the sequence. Thus, the registration process is efficient (not requiring an iterative algorithm) and exact.

The specific structure of the inter-image homography matrix depends on the pupil magnification mp. Interestingly, if the pupil magnification equals one (i.e., a perfectly symmetric lens), the inter-image homography between the image obtained under a lens tilt of α about the x axis (from +z axis toward +y axis) and the reference image that is obtained under no lens tilt, reduces to a simple similarity transformation consisting of only scaling and translation components. This mapping between x´0 and x´n is shown below:

$$\mathbf{x}'_n = \underbrace{\begin{bmatrix} \dfrac{d\cos\alpha - z'_o}{d - z'_o} & 0 & 0\\[1.2ex] 0 & \dfrac{d\cos\alpha - z'_o}{d - z'_o} & d\sin\alpha\\[0.6ex] 0 & 0 & 1\end{bmatrix}}_{\text{inter-image homography, }\mathbf{H}(\alpha,0)} \mathbf{x}'_0,\tag{22}$$
where x'_0 and x'_n are the corresponding image points (expressed in homogeneous two-dimensional image coordinates) in the reference image (α = 0) and in the image obtained under lens rotation α, respectively; d is the distance of the exit pupil center from the entrance pupil center (the pivot point in this case) along the optical axis; and z'_o is the location of the image plane's pivot in the camera frame {C}. Note that we are not required to physically capture a reference image with α = 0°; because we can register the images analytically, we simply choose to align all images to the nominal lens orientation.
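
To see how Eq. (22) follows from Eq. (12), set m_p = 1, d_e = 0, d'_e = d, and R_i = I (untilted sensor). For a lens tilt α about the x axis (from the +z axis toward the +y axis), r_{:,3} = [0, sin α, cos α]^T and Eq. (12) reduces to

$$x'_1 = \frac{z'_o - d\cos\alpha}{z}\,x,\qquad x'_2 = d\sin\alpha + \frac{z'_o - d\cos\alpha}{z}\,y,$$

where (x, y, z) are the coordinates of the object point in {C}. Comparing these expressions with their α = 0 counterparts eliminates the object coordinates altogether, leaving exactly the depth-independent scale (d cos α - z'_o)/(d - z'_o) and transverse shift d sin α that appear in Eq. (22).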

In the following subsection, we verify the above theory of omnifocus image synthesis using a simulation in Zemax. Please note that our goal in the next section is not to present a new or best possible algorithm for detecting and fusing focused regions from the images in the stack, but rather to present another method of overcoming the depth of field problem which we believe has some advantages over existing methods.

B. Simulation

Figure 4(a) shows a schematic of the image simulation setup in Zemax. We implemented an F/2.5 thick-lens model using two paraxial surfaces of focal lengths f_1 = 40 mm and f_2 = 30 mm with s = 20 mm separation, resulting in an effective focal length f = 24 mm [1/f = 1/f_1 + 1/f_2 - s/(f_1 f_2)]. A circular stop (diameter 7.14 mm) surface was placed behind the first paraxial surface at a distance a = 11.43 mm, resulting in a pupil magnification m_p = 1 [m_p = (f_2/f_1)(a - f_1)/(s - a - f_2)]. For tilting the object and lens independently, we set the object surface type as “Tilted,” and bracketed all surfaces associated with the lens within coordinate breaks.
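
These two bookkeeping formulas can be checked in a few lines of Python (values as quoted above):

```python
f1, f2, s, a = 40.0, 30.0, 20.0, 11.43        # mm
f = 1.0 / (1.0/f1 + 1.0/f2 - s/(f1 * f2))     # effective focal length -> 24.0 mm
m_p = (f2/f1) * (a - f1) / (s - a - f2)       # pupil magnification    -> ~1.0
print(f, m_p)
```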


Fig. 4. Image simulation using Zemax and PyZDDE: (a) schematic of setup, (b) captured image for α=8°, (c) focus-measure using Laplacian of Gaussian (LoG) filter showing the regions in focus, (d) resulting composite image, and (e) focus-measure of the composite image showing all three depths in focus.


The Image Simulation analysis tool in Sequential Mode in Zemax is powerful and offers an extensive set of tuning parameters. However, in order to produce a representative simulation, the parameters must be chosen carefully based on the objective of the experiment. The most important parameters within the context of the current simulation are: (1) field height of the source bitmap, (2) oversampling factor (if required), (3) pupil sampling, (4) image sampling, (5) aberrations, (6) reference, (7) pixel size, and (8) X pixels and Y pixels. The image simulation process in Zemax essentially consists of three steps [12]: (a) the source bitmap image is convolved with a space-variant point spread function (PSF) grid, generated in the object space, that accounts for optical aberrations and whose fidelity depends on the set field height, oversampling factor, and number of pixels; (b) the convolved image, in the object space, is transferred to the image space to account for geometric distortions and system magnification; and (c) the sampling effects of a discrete detector are simulated based on the set pixel size and detector size (inferred from the pixel size and number of pixels). Since the paraxial surfaces are devoid of any aberrations, we inserted a Zernike Standard Phase surface at the location of the exit pupil to introduce slight spherical aberration. The small amount of spherical aberration also increased the spot size of the PSFs, ensuring adequate pixels to represent each PSF. Additionally, we set sufficiently fine pupil sampling and image sampling (both 64×64), which influence how accurately the PSFs represent the system aberrations.

The three-dimensional scene consists of three playing cards (64 mm × 89 mm) placed at 800 mm, 1000 mm, and 1200 mm from the lens's vertex (before rotating the lens). However, the Image Simulation tool was not designed to simulate imaging of three-dimensional scenes. Therefore, we ran the image simulation for each of the three depth planes with identical settings and integrated the outputs of each simulation into a single image. An obvious shortcoming of this simple integration process is that it fails to accurately simulate imaging of portions of the scene where objects overlap in the image space. To avoid this problem, we spatially separated the three cards along the transverse direction (using appropriate field settings in Zemax) such that their images (following blurring) do not overlap in the image plane (by picking “Vertex” as the reference under the detector settings). This limitation (and the workaround) does not, however, detract from the main purpose of the simulation—to test the feasibility of synthesizing an omnifocus image from a series of images captured under lens tilts.

To simulate the imaging of a scene consisting of m depth planes for n orientations of the lens, we need to execute the Image Simulation tool m×n times while setting the appropriate simulation parameters and integrating the m outputs for every orientation. We used PyZDDE [13] to automate the entire process of tilting the lens about the x axis pivoted at the center of the entrance pupil to create a sequence of 13 images between ±8°.

Figure 4(b) shows the integrated image of the scene for lens tilt angle α=8°. Note the transverse shift (downward) of the image field. Although not apparent in the figure, the individual images of the three cards in the image plane are vertically shifted and de-magnified by the same amount, as predicted by Eq. (22). The in-focus regions in this image, detected using a Laplacian of Gaussian (LoG) filter, are shown in Fig. 4(c). Note that no single plane is in complete focus, but parts of each plane that lie within the wedge-shaped DOF surrounding the tilted plane of sharp focus form sharp regions in the image plane. The 13 images were analytically registered (geometric transformation) using the inter-image homography matrix H(α) shown in Eq. (22). Following registration, a composite image was created by blending the in-focus regions (detected using LoG) from the images. Figure 4(d) shows the synthesized image in which the complete scene consisting of three depth planes is in focus. Figure 4(e) shows the degree of focus on the three planes in the composite image measured using the LoG filter.
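
The post-processing chain just described can be sketched compactly. The Python snippet below is our own minimal sketch (not the authors' implementation): it warps each image of the sequence into the α = 0 frame using the scale-and-shift form of Eq. (22), converted from sensor millimeters to pixels about the pivot pixel (and ignoring any sign flip between the sensor's pixel axes and the axes of {C}), computes a Laplacian of Gaussian (LoG) focus measure, and keeps the sharpest source at every pixel.

```python
import numpy as np
from scipy import ndimage

def scale_shift(alpha, d, z_img):
    """Eq. (22): isotropic scale and y-shift (in sensor mm) for a lens tilt alpha."""
    s = (d * np.cos(alpha) - z_img) / (d - z_img)
    return s, d * np.sin(alpha)

def register(img, alpha, d, z_img, pitch, pivot_rc):
    """Warp an image captured at lens tilt alpha into the alpha = 0 frame.

    ndimage.affine_transform pulls samples, out[p] = img[A @ p + offset]; the forward
    map x'_n = H(alpha) x'_0 of Eq. (22) is exactly this pull map, so A and offset are
    obtained from the scale and shift after converting mm to pixels about pivot_rc.
    """
    s, ty = scale_shift(alpha, d, z_img)
    r0, c0 = pivot_rc                              # pixel (row, col) of the sensor pivot
    offset = [r0 * (1.0 - s) + ty / pitch, c0 * (1.0 - s)]
    return ndimage.affine_transform(img, np.diag([s, s]), offset=offset, order=1)

def focus_measure(img, sigma=2.0, win=5):
    """Locally averaged |LoG| response as a per-pixel sharpness score."""
    return ndimage.uniform_filter(np.abs(ndimage.gaussian_laplace(img, sigma)), win)

def omnifocus(images, alphas, d, z_img, pitch, pivot_rc):
    """Blend a lens-tilt sequence into an all-in-focus image (per-pixel sharpest source)."""
    reg = np.stack([register(im, a, d, z_img, pitch, pivot_rc)
                    for im, a in zip(images, alphas)])
    sharp = np.stack([focus_measure(im) for im in reg])
    best = sharp.argmax(axis=0)
    return np.take_along_axis(reg, best[None], axis=0)[0]
```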

We have made the simulation code (including Zemax files, Python scripts, and computational notebook) and results available for the interested reader. See Code 1, Ref. [14].

5. DISCUSSION AND CONCLUSION

We have proposed a new geometric model for imaging in systems in which the lens and sensor are free to rotate about independent pivots. The proposed model is useful for describing and predicting the properties of images in such systems because it incorporates all the optical parameters that directly influence image formation. The pair of equations—Eq. (12) and Eq. (19)—completely describe the image and focusing relationships in these systems, such as in a Scheimpflug camera. Following the verification of these two equations, we presented an application for addressing the problem of limited depth of field in optical imaging systems. Specifically, we showed a method of computationally generating an omnifocus (all-in-focus) image from a sequence of images obtained while rotating a lens about the entrance pupil. We demonstrated, using a simulation in Zemax, that we can analytically register the images in the stack if the lens is rotated about its entrance pupil. Furthermore, if the lens has unity pupil magnification (symmetric lenses), then the transformation required for registering the images is a simple combination of scaling and transverse shift. The mechanisms underlying our technique for generating an omnifocus image can be fully appreciated only in light of the geometric model presented. The closed-form expressions for analytic registration were obtained directly from Eq. (12). At this point, it should be noted that if the exact values of the sensor pivot z'_o and the inter-pupil distance d are unknown, then we must rely on algorithmic registration. Furthermore, the above technique can also be used to increase the depth of field of a Scheimpflug camera from a sequence of images obtained while perturbing the lens's orientation around the baseline orientation obtained using Eq. (19).

Funding

U.S. Army Research Laboratory (ARL) (W911NF-06-2-0035).

Acknowledgment

The work described in this paper was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-06-2-0035. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

REFERENCES

1. A. Walther, The Ray and Wave Theory of Lenses, 1st ed. (Cambridge University, 2006).

2. R. Kingslake and R. B. Johnson, Lens Design Fundamentals, 2nd ed. (Academic, 2009).

3. R. R. Shannon, The Art and Science of Optical Design, 1st ed. (Cambridge University, 1997).

4. J. E. Greivenkamp, Field Guide to Geometrical Optics (SPIE Publications, 2003).

5. A. Hornberg, Handbook of Machine Vision, 1st ed. (Wiley-VCH, 2006).

6. P. Rangarajan, Pushing the Limits of Imaging Using Patterned Illumination (Southern Methodist University, 2014).

7. N. Xu, K.-H. Tan, H. Arora, and N. Ahuja, Generating Omnifocus Images Using Graph Cuts and a New Focus Measure (IEEE, 2004), pp. 697–700.

8. R. Jacobson, S. Ray, G. G. Attridge, and N. Axford, Manual of Photography, 9th ed. (Focal, 2000).

9. C. H. Anderson, J. R. Bergen, P. J. Burt, and J. M. Ogden, Pyramid Methods in Image Processing (RCA Engineers, 1984).

10. L. C. G. Brown, “A survey of image registration techniques,” ACM Comput. Surv. 24, 325–376 (1992).

11. A. Criminisi, Accurate Visual Metrology from Single and Multiple Uncalibrated Images (University of Oxford, 1999).

12. ZEMAX, Optical Design Program, User’s Manual (ZEMAX Development Corporation, 2011).

13. I. Sinharoy, C. Holloway, and J. Stuermer, “PyZDDE,” in Zenodo (2016).

14. I. Sinharoy, cosi2016_omnifocus: release of simulation code, files and dataset [Software] (2016), Zenodo. http://doi.org/10.5281/zenodo.59647.

Supplementary Material (1)

Code 1: cosi2016_omnifocus: Release of simulation code, files and dataset [Software].
